CN112419210A - Underwater image enhancement method based on color correction and three-interval histogram stretching

Publication number: CN112419210A
Authority: CN (China)
Legal status: Granted
Application number: CN202011444565.0A (Chinese, zh); other version: CN112419210B
Inventors: 张维石, 周景春, 庞磊
Current assignee: Dalian Maritime University (original assignee)
Application filed by Dalian Maritime University; priority to CN202011444565.0A; publication of CN112419210A; application granted; publication of CN112419210B; legal status: Active

Classifications

    • G PHYSICS; G06 COMPUTING, CALCULATING OR COUNTING; G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration; G06T5/40 using histogram techniques
    • G06T5/00 Image enhancement or restoration; G06T5/50 using two or more images, e.g. averaging or subtraction
    • G06T7/00 Image analysis; G06T7/90 Determination of colour characteristics
    • G06T2207/00 Indexing scheme for image analysis or image enhancement; G06T2207/20 Special algorithmic details; G06T2207/20212 Image combination; G06T2207/20221 Image fusion; Image merging


Abstract

The invention provides an underwater image enhancement method based on color correction and three-interval histogram stretching. The method comprises the following steps: color correction is performed on the source image; the R, G, B channels of the source image are processed by a three-interval histogram equalization method, in which the pixel values of each single channel are stretched, thresholds are selected to separate three subintervals, and equalization is performed on each subinterval to obtain a three-interval histogram-equalized image. The image produced by the subinterval-based linear transformation and the three-interval histogram-equalized image are then fused by linear weighting to reconstruct the final defogged image. By dividing each single-channel histogram of the source image more accurately with a multi-interval histogram equalization method, equalizing each interval separately, and linearly fusing the result with the color-corrected image, the method reveals the dark details of the source image, reduces noise, and achieves image defogging.

Description

Underwater image enhancement method based on color correction and three-interval histogram stretching
Technical Field
The invention relates to the technical field of image processing, in particular to an underwater image enhancement method based on color correction and three-interval histogram stretching.
Background
The development and use of marine resources rely on underwater images, which are typically captured by underwater cameras and underwater robots. Because of the absorption and scattering of light, underwater images suffer from low contrast, color cast and other problems, which degrade the image and make it difficult to analyze. Common factors affecting the attenuation rate are water temperature and salinity, and the type and amount of suspended particles in the water. Severe degradation makes it difficult to recover image information. Finding an effective way to restore underwater image color and contrast is therefore a challenging task.
Underwater enhancement techniques have been researched and developed to address these problems. Underwater enhancement is simple and fast, yet highly effective at improving the quality of underwater images: image quality is improved by processing the intensity values of the red, green and blue channels according to specific rules.
Disclosure of Invention
According to the technical problem, the invention provides an underwater image enhancement method based on color correction and three-interval histogram stretching.
The technical means adopted by the invention are as follows: the underwater image enhancement method based on color correction and three-interval histogram stretching is characterized by comprising the following steps of:
step S01: acquiring an original RGB dense fog image; carrying out color correction on the original RGB dense fog image by a color correction method based on subinterval linear transformation to obtain an enhanced image after color correction;
step S02: decomposing the original RGB dense fog image into R, G, B channel images, and performing the following processing on the pixel values of the R, G, B channel images;
step S03: stretching the pixel values of the R, G, B channel images to be within the range of 0-255 respectively to obtain stretched single-channel images;
step S04: calculating the average pixel value of each of the R, G, B channel images; taking the difference between each pixel's value and the average pixel value of its channel as the error, and squaring it; selecting the pixel with the largest squared error according to these squared-error values. Because the gray levels on either side of this point also differ strongly from the average, the two thresholds required for the three-interval division are determined by subtracting and adding three times the standard deviation around this point as the center, dividing the whole single-channel histogram into three intervals;
step S05: equalizing the subintervals of the R, G, B channels to obtain an image after single-channel equalization;
step S06: and carrying out linear weighted fusion on the R, G, B channel image and the equalized R, G, B channel image to obtain a final defogging image.
Further, in the color correction method based on the subinterval linear transformation, the total pixel value of each of the R, G, B channels is calculated as:

$$T_R = \sum_{i=1}^{M}\sum_{j=1}^{N} I_R(i,j) \quad (1)$$

$$T_G = \sum_{i=1}^{M}\sum_{j=1}^{N} I_G(i,j) \quad (2)$$

$$T_B = \sum_{i=1}^{M}\sum_{j=1}^{N} I_B(i,j) \quad (3)$$

where $T_R$, $T_G$, $T_B$ denote the channel totals, M and N denote the number of rows and columns of the input image, and $I_R(i,j)$, $I_G(i,j)$, $I_B(i,j)$ denote the pixel values of the R, G, B channel images at position $(i,j)$;
Meanwhile, the ratio of each of the red, green and blue channels is calculated as:

$$P_R = \frac{T_R}{\max(T_R, T_G, T_B)} \quad (4)$$

$$P_G = \frac{T_G}{\max(T_R, T_G, T_B)} \quad (5)$$

$$P_B = \frac{T_B}{\max(T_R, T_G, T_B)} \quad (6)$$

where $\max$ is the maximum-value function, giving the maximum of the total pixel values of the R, G, B channels, and $P_R$, $P_G$, $P_B$ denote the ratio of each channel's total pixel value to the maximum total pixel value.

To divide each channel into three intervals, two cut-off ratios $\rho_c^{low}$ and $\rho_c^{high}$ are defined by equations (7) and (8), given in the original as formula images, as functions of the constants $\alpha_1$ and $\alpha_2$, both between 0 and 1, and the channel ratio $P_c$, where $c$ denotes any one of the R, G, B channels. The cut-off thresholds $t_c^{low}$ and $t_c^{high}$ corresponding to the two cut-off ratios are then determined by the following quantile function, as equations (9) and (10):

$$t_c^{low} = F(\rho_c^{low}) \quad (9)$$

$$t_c^{high} = F(\rho_c^{high}) \quad (10)$$

where $t_c^{low}$ and $t_c^{high}$ denote the cut-off thresholds, $F$ is the lower quantile function of the channel's pixel values $I_c(x)$, and $\rho_c^{low}$ and $\rho_c^{high}$ are the cut-off ratios;
To effectively suppress shadow and highlight values, the following operation is performed for each color channel:

$$I_c'(x) = \begin{cases} t_c^{low}, & I_c(x) < t_c^{low} \\ I_c(x), & t_c^{low} \le I_c(x) \le t_c^{high} \\ t_c^{high}, & I_c(x) > t_c^{high} \end{cases} \quad (11)$$

where $I_c'(x)$ denotes the processed pixel value at a point of any one of the R, G, B channels, $t_c^{low}$ and $t_c^{high}$ denote the cut-off thresholds, and $I_c(x)$ is the pixel value at a point of that channel.

Finally, the following linear operation is performed on the pixel values of the intermediate region:

$$I_c''(x) = \frac{I_c'(x) - t_c^{low}}{t_c^{high} - t_c^{low}} \times 255 \quad (12)$$

where $I_c''(x)$ represents the color-corrected image, i.e. the processed pixel value at any point of any one of the R, G, B channels.
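The color-correction stage of equations (1)-(12) can be sketched as follows. This is a minimal NumPy sketch, not the patent's reference implementation: the exact form of the cut-off ratios in equations (7)-(8) survives only as formula images in the source, so the scaling `alpha * p[c]` below, and the default values of `alpha1` and `alpha2`, are assumptions.

```python
import numpy as np

def color_correct(img, alpha1=0.05, alpha2=0.05):
    """Sub-interval linear-transformation color correction (eqs. (1)-(12)).

    img: H x W x 3 RGB array. alpha1 and alpha2 are illustrative
    constants in (0, 1); the patent does not state their values here.
    """
    img = img.astype(np.float64)
    totals = img.reshape(-1, 3).sum(axis=0)      # total pixel values, eqs. (1)-(3)
    p = totals / totals.max()                    # channel ratios, eqs. (4)-(6)
    out = np.empty_like(img)
    for c in range(3):
        # Cut-off ratios from alpha1, alpha2 and p_c (eqs. (7)-(8));
        # this particular dependence on p[c] is an assumption.
        rho_lo = alpha1 * p[c]
        rho_hi = 1.0 - alpha2 * p[c]
        # Cut-off thresholds via the quantile function F (eqs. (9)-(10)).
        t_lo, t_hi = np.quantile(img[..., c], [rho_lo, rho_hi])
        ch = np.clip(img[..., c], t_lo, t_hi)    # suppress shadows/highlights, eq. (11)
        # Linear stretch of the middle region to [0, 255], eq. (12).
        out[..., c] = (ch - t_lo) / max(t_hi - t_lo, 1e-6) * 255.0
    return out
```

A channel whose total is far below the maximum gets a smaller clipping fraction under this scaling, which is one plausible reading of "subinterval linear transformation" per channel.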
Further, in step S03 a linear stretching operation is performed on each single-channel image so that every gray value lies in [0, 255]; the linear stretch is defined as:

$$P_c(i,j) = \frac{I_c(i,j) - Min_c}{Max_c - Min_c} \times 255 \quad (13)$$

where $c \in \{R, G, B\}$, $P_c(i,j)$ denotes the corrected gray value of channel $c$ at position $(i,j)$, $I_c(i,j)$ denotes the gray value of channel $c$ at position $(i,j)$, $Min_c$ denotes the minimum pixel value of channel $c$, and $Max_c$ denotes the maximum pixel value of channel $c$.
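Equation (13) is a direct min-max stretch; a one-function sketch, assuming NumPy arrays (the flat-channel guard is an addition, since eq. (13) is undefined when $Max_c = Min_c$):

```python
import numpy as np

def stretch_channel(ch):
    """Linearly stretch one channel to [0, 255] (eq. (13))."""
    ch = ch.astype(np.float64)
    mn, mx = ch.min(), ch.max()
    if mx == mn:            # flat channel: eq. (13) is undefined, return zeros
        return np.zeros_like(ch)
    return (ch - mn) / (mx - mn) * 255.0
```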
Further, in the threshold selection and three-interval division of step S04, the average pixel value of a single channel is calculated as follows:

$$Mean_R = \frac{1}{M \times N}\sum_{i=1}^{M}\sum_{j=1}^{N} I_R(i,j) \quad (14)$$

$$Mean_G = \frac{1}{M \times N}\sum_{i=1}^{M}\sum_{j=1}^{N} I_G(i,j) \quad (15)$$

$$Mean_B = \frac{1}{M \times N}\sum_{i=1}^{M}\sum_{j=1}^{N} I_B(i,j) \quad (16)$$

where $Mean_R$, $Mean_G$, $Mean_B$ denote the average pixel values of the R, G, B channels respectively, M and N denote the number of rows and columns of the input image, $I_R(i,j)$, $I_G(i,j)$, $I_B(i,j)$ denote the pixel values of the three channels at position $(i,j)$, and $M \times N$ is the total number of pixels in a single channel;
The error between the pixel value at any point of one of the R, G, B channels and the average pixel value of that channel is calculated and squared:

$$E_c(i,j) = I_c(i,j) - Mean_c \quad (17)$$

$$E_c^2(i,j) = \left(I_c(i,j) - Mean_c\right)^2 \quad (18)$$

where $E_c(i,j)$ denotes the error between the pixel value at any point of channel $c$ and the average pixel value of that channel, $I_c(i,j)$ denotes the pixel value at position $(i,j)$, $Mean_c$ denotes the average pixel value of channel $c$, and $E_c^2(i,j)$ denotes the squared error;
The point with the largest squared error is selected as the center point through the Max function and, following the 3-sigma criterion, three times the standard deviation of the pixel values is subtracted from and added to this center to obtain the left and right thresholds, completing the three-interval division:

$$Max_c = \max_{i,j} E_c^2(i,j) \quad (19)$$

$$t_1 = Maxm_c - 3\sigma \quad (20)$$

$$t_2 = Maxm_c + 3\sigma \quad (21)$$

where $Max_c$ denotes the maximum squared error of channel $c$, $Maxm_c$ denotes the pixel value at the location of that maximum squared error, $t_1$ and $t_2$ are the two thresholds, and $\sigma$ denotes the standard deviation of the pixel values of channel $c$.
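The threshold selection of equations (14)-(21) can be sketched as below. Two points are assumptions on top of the text: $Maxm_c$ is read as the pixel value at the maximum-squared-error location (the translation says only "the position of the corresponding row"), and the clipping of $t_1$, $t_2$ to [0, 255] is an added safeguard not stated in the source.

```python
import numpy as np

def select_thresholds(ch):
    """Three-interval thresholds t1, t2 for one channel (eqs. (14)-(21)).

    ch: 2-D single-channel array with values in [0, 255].
    """
    ch = ch.astype(np.float64)
    mean = ch.mean()                                    # eqs. (14)-(16)
    err_sq = (ch - mean) ** 2                           # eqs. (17)-(18)
    i, j = np.unravel_index(err_sq.argmax(), ch.shape)  # max squared error, eq. (19)
    center = ch[i, j]                                   # Maxm_c (assumed reading)
    sigma = ch.std()                                    # sigma of the 3-sigma criterion
    t1 = max(center - 3.0 * sigma, 0.0)                 # eq. (20), clipped (assumption)
    t2 = min(center + 3.0 * sigma, 255.0)               # eq. (21), clipped (assumption)
    return t1, t2
```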
Further, the histogram equalization processing procedure for the subintervals of each channel in step S05 is as follows:
First, the gray-scale ranges of the three subintervals are divided according to the thresholds:

$$[0, 255] = [0, t_1] \cup (t_1, t_2] \cup (t_2, 255] \quad (22)$$

$$X_1 = \{\, I(i,j) \mid 0 \le I(i,j) \le t_1 \,\} \quad (23)$$

$$X_2 = \{\, I(i,j) \mid t_1 < I(i,j) \le t_2 \,\} \quad (24)$$

$$X_3 = \{\, I(i,j) \mid t_2 < I(i,j) \le 255 \,\} \quad (25)$$

where $I$ denotes the original image, $I(i,j)$ denotes the gray value of the pixel in row $i$ and column $j$ of the image, and $X_1$, $X_2$, $X_3$ denote the first, second and last sub-images respectively;
The frequency of each pixel value in the whole image is calculated first, then the frequencies of the three sub-histograms, giving the normalized pixel frequency of each sub-histogram; finally the cumulative normalized frequencies of the three sub-histograms are computed.

Let x denote a gray value of the image; the interval division gives three value ranges for x. When $x \in X_1$, the cumulative frequency of gray levels from 0 to x in the histogram of the first sub-image is calculated and denoted $CDF_1(x)$; when $x \in X_2$, the cumulative frequency of gray levels from $t_1$ to x in the histogram of the second sub-image is calculated and denoted $CDF_2(x)$; when $x \in X_3$, the cumulative frequency of gray levels from $t_2$ to x in the histogram of the last sub-image is calculated and denoted $CDF_3(x)$.
Then the transformed gray values of the three sub-images after histogram equalization are calculated from the normalized pixel frequency of each sub-histogram. The sub-histogram equalization function follows the gray-level transformation function of conventional histogram equalization, which is described as:

$$f(x) = a + (b - a)\,CDF(x) \quad (26)$$

where a is the minimum output gray value, b is the maximum output gray value, x is the input gray value, and CDF(x) is the cumulative density function with respect to x;
the sub-histogram equalization formula is described as follows:
$$y(x) = \begin{cases} 0 + (t_1 - 0)\,CDF_1(x), & x \in X_1 \\ t_1 + (t_2 - t_1)\,CDF_2(x), & x \in X_2 \\ t_2 + (255 - t_2)\,CDF_3(x), & x \in X_3 \end{cases} \quad (27)$$

where y denotes the gray-value transformation function of the three-interval equalization, from which the processed result is obtained; $t_1$ and $t_2$ are the two thresholds dividing the sub-histograms, x denotes the input gray value, and $CDF_1(x)$, $CDF_2(x)$, $CDF_3(x)$ denote the cumulative gray-level frequencies of the first, second and third sub-histograms respectively.
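The three-interval equalization of equations (22)-(27) can be sketched as one function. This is a hedged sketch: integer interval bounds are used for the sub-histogram masks, a minor deviation from the half-open real intervals of equations (23)-(25).

```python
import numpy as np

def three_interval_equalize(ch, t1, t2):
    """Per-subinterval histogram equalization (eqs. (22)-(27)).

    ch: 2-D integer channel with values in [0, 255]; t1 < t2 are the
    thresholds from step S04. Each subinterval is equalized onto its
    own gray range via f(x) = a + (b - a) * CDF(x) (eq. (26)).
    """
    ch = ch.astype(np.int64)
    t1, t2 = int(t1), int(t2)
    out = ch.astype(np.float64)
    # (mask range, output range) per subinterval, eqs. (22)-(25) and (27).
    intervals = [(0, t1, 0, t1), (t1 + 1, t2, t1, t2), (t2 + 1, 255, t2, 255)]
    for lo, hi, a, b in intervals:
        mask = (ch >= lo) & (ch <= hi)
        if hi < lo or not mask.any():
            continue
        hist, _ = np.histogram(ch[mask], bins=hi - lo + 1, range=(lo, hi + 1))
        cdf = hist.cumsum() / mask.sum()          # CDF_k(x) of this sub-histogram
        out[mask] = a + (b - a) * cdf[ch[mask] - lo]   # eq. (27)
    return out
```

Each subinterval's output stays inside its own range, so the three equalized pieces cannot overlap, which is what keeps the stretching contained per interval.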
Still further, the multi-scale fusion of step S06 comprises the following steps:
step S071: defining the aggregation weight map used to fuse the input images; the aggregation weight map is determined by three measurement weights: a contrast weight, a saturation weight and an exposure weight map.

For the contrast weight map, the gray-scale version of an input image is filtered with a Laplacian operator and the absolute value of the response is used as the global contrast weight $W_{La}$, preserving the edge and detail texture information of the image:

$$La = \begin{bmatrix} 0 & 1 & 0 \\ 1 & -4 & 1 \\ 0 & 1 & 0 \end{bmatrix} \quad (28)$$

$$W_{La} = |La * F| \quad (29)$$

where $La$ denotes the Laplacian operator (shown here in its common 3 x 3 form), $*$ denotes convolution, and F denotes the input image;
The saturation weight is the standard deviation of each pixel across the channels of the RGB color space:

$$W_{sa}(x,y) = \sqrt{\tfrac{1}{3}\left[(R(x,y) - m(x,y))^2 + (G(x,y) - m(x,y))^2 + (B(x,y) - m(x,y))^2\right]} \quad (30)$$

where $R(x,y)$, $G(x,y)$, $B(x,y)$ denote the R, G, B channels of the input image, $m(x,y)$ denotes the average of the R, G, B channels at position $(x,y)$, and $W_{sa}(x,y)$ denotes the saturation weight at position $(x,y)$;
The exposure weight map favors pixel values close to 0.5, i.e. the midpoint of the normalized range; the exposure weight of each pixel is given by a Gaussian curve with expected value 0.5:

$$W_E(x,y) = \exp\!\left(-\frac{(F(x,y) - 0.5)^2}{2\sigma^2}\right) \quad (31)$$

where $F(x,y)$ is the normalized intensity at position $(x,y)$ and $\sigma$ controls the width of the curve.
The aggregation weight map is obtained by multiplying the three feature weight maps in the multi-scale fusion: the contrast weight map $W_{La}$, the saturation weight map $W_{sa}$ and the exposure weight map $W_E$ are multiplied pixel by pixel for each input image:

$$W_z = W_{La,z} \cdot W_{sa,z} \cdot W_{E,z} \quad (32)$$

where z indexes the z-th input image and $W_z$ is its two-dimensional weight map. To ensure the consistency of the fused images, the normalized weight map $\bar{W}_z$ is introduced:

$$\bar{W}_z(x,y) = \frac{W_z(x,y)}{\sum_{k} W_k(x,y)} \quad (33)$$

where $\bar{W}_z$ denotes the aggregation weight map.
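The three weights and their aggregation (equations (28)-(33)) can be sketched in NumPy as below. Assumptions beyond the text: the images are floats normalized to [0, 1], the Laplacian uses the common 3 x 3 kernel with edge padding, and `sigma_e = 0.25` stands in for the unspecified width of the exposure Gaussian.

```python
import numpy as np

def aggregate_weight(img, sigma_e=0.25):
    """Aggregation weight map W_z from contrast, saturation and exposure
    weights (eqs. (28)-(32)). img: H x W x 3 float array in [0, 1].
    """
    gray = img.mean(axis=2)
    # Contrast: |Laplacian response| of the gray image (eqs. (28)-(29)).
    g = np.pad(gray, 1, mode='edge')
    w_la = np.abs(g[1:-1, :-2] + g[1:-1, 2:] + g[:-2, 1:-1] + g[2:, 1:-1]
                  - 4.0 * gray)
    # Saturation: per-pixel standard deviation across R, G, B (eq. (30)).
    w_sa = img.std(axis=2)
    # Exposure: Gaussian around the midpoint 0.5 (eq. (31)); sigma_e assumed.
    w_e = np.exp(-((gray - 0.5) ** 2) / (2.0 * sigma_e ** 2))
    return w_la * w_sa * w_e                      # eq. (32)

def normalize_weights(weights):
    """Per-pixel normalization of the weight maps so they sum to 1
    (the consistency normalization, eq. (33))."""
    total = np.sum(weights, axis=0) + 1e-12       # epsilon guard (assumption)
    return [w / total for w in weights]
```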
step S072: fusing the input images with the aggregation weight maps. Each input image is decomposed with a Laplacian pyramid, denoted $L^l\{I_z\}$, and each aggregation weight map $\bar{W}_z$ is decomposed with a Gaussian pyramid, denoted $G^l\{\bar{W}_z\}$, where the superscript l denotes the l-th pyramid level. The Laplacian pyramid $L^l\{I_z\}$ and the Gaussian pyramid $G^l\{\bar{W}_z\}$ are fused pixel by pixel as follows:

$$L^l\{F\}(x,y) = \sum_{z} G^l\{\bar{W}_z\}(x,y)\, L^l\{I_z\}(x,y) \quad (35)$$

where $L\{F\}$ denotes the Laplacian pyramid of the fused image; the final fused image is obtained by reconstructing (collapsing) this pyramid.
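The pyramid fusion of equation (35) can be sketched with NumPy only. This is a simplified sketch, not the patent's implementation: a 5-tap binomial filter stands in for the (unspecified) pyramid smoothing kernel, and the inputs are single-channel 2-D arrays with pre-normalized weight maps.

```python
import numpy as np

_K = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0   # 5-tap binomial filter (assumed)

def _blur(a):
    """Separable 5-tap blur with edge padding."""
    h, w = a.shape
    p = np.pad(a, ((2, 2), (0, 0)), mode='edge')
    a = sum(_K[i] * p[i:i + h, :] for i in range(5))
    p = np.pad(a, ((0, 0), (2, 2)), mode='edge')
    return sum(_K[i] * p[:, i:i + w] for i in range(5))

def _upsample(a, shape):
    out = np.zeros(shape)
    out[::2, ::2] = a          # zero-stuff, then blur to interpolate
    return _blur(out) * 4.0

def _gauss_pyr(a, levels):
    pyr = [a]
    for _ in range(levels - 1):
        a = _blur(a)[::2, ::2]
        pyr.append(a)
    return pyr

def _lap_pyr(a, levels):
    gp = _gauss_pyr(a, levels)
    pyr = [gp[i] - _upsample(gp[i + 1], gp[i].shape) for i in range(levels - 1)]
    pyr.append(gp[-1])          # coarsest Gaussian level kept as the base
    return pyr

def fuse(images, weights, levels=3):
    """Multi-scale fusion, eq. (35): L^l{F} = sum_z G^l{W_z} * L^l{I_z},
    then the fused Laplacian pyramid is collapsed into the result.
    images, weights: lists of equal-shape 2-D arrays, weights normalized."""
    fused = None
    for img, w in zip(images, weights):
        terms = [g * lap for g, lap in zip(_gauss_pyr(w, levels),
                                           _lap_pyr(img, levels))]
        fused = terms if fused is None else [f + t for f, t in zip(fused, terms)]
    out = fused[-1]
    for lvl in reversed(fused[:-1]):   # collapse coarse-to-fine
        out = _upsample(out, lvl.shape) + lvl
    return out
```

Blending per pyramid level rather than per pixel is what avoids the visible seams that direct weighted averaging of the inputs would produce.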
Compared with the prior art, the invention has the following advantages:
1. the color correction method based on the linear transformation of the subintervals better improves the visibility, achieves a good color correction effect, enables the histogram distribution of red, green and blue channels to be more uniform, better solves the color cast problem of the underwater image, and improves the details of the dark part of the underwater image.
2. The invention utilizes the histogram equalization method of the three intervals, effectively improves the contrast of the image, obtains good effect on enhancing the bright part details of the image and completes the effective stretching of the image histogram.
3. According to the invention, through multi-scale linear fusion, the image with improved color cast and dark details and the image with improved contrast and bright details through a three-interval histogram equalization method are fused, so that the underwater image is effectively enhanced.
For the above reasons, the present invention can be widely applied to the fields of image processing and the like.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a schematic flow chart of the present invention.
FIG. 2 compares the enhancement effect of the invention on underwater scene images with that of other algorithms. FIGS. 2-1-1, 2-2-1, 2-3-1 and 2-4-1 are results after processing by the HEEF algorithm; FIGS. 2-1-2, 2-2-2, 2-3-2 and 2-4-2 are results after processing by the BBHE algorithm; FIGS. 2-1-3, 2-2-3, 2-3-3 and 2-4-3 are results after processing by the DOTHE algorithm; FIGS. 2-1-4, 2-2-4, 2-3-4 and 2-4-4 are results after processing by the algorithm herein.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
As shown in FIG. 1 and FIG. 2, the present invention provides an underwater image enhancement method based on color correction and three-interval histogram stretching, comprising the following steps:
step S01: acquiring an original RGB dense fog image; carrying out color correction on the original RGB dense fog image by a color correction method based on subinterval linear transformation to obtain an enhanced image after color correction;
step S02: decomposing the original RGB dense fog image into R, G, B channel images, and performing the following processing on the pixel values of the R, G, B channel images;
step S03: stretching the pixel values of the R, G, B channel images to be within the range of 0-255 respectively to obtain stretched single-channel images;
step S04: calculating the average pixel value of each of the R, G, B channel images; taking the difference between each pixel's value and the average pixel value of its channel as the error, and squaring it; selecting the pixel with the largest squared error according to these squared-error values. Because the gray levels on either side of this point also differ strongly from the average, the two thresholds required for the three-interval division are determined by subtracting and adding three times the standard deviation around this point as the center, dividing the whole single-channel histogram into three intervals;
step S05: equalizing the subintervals of the R, G, B channels to obtain an image after single-channel equalization;
step S06: and carrying out linear weighted fusion on the R, G, B channel image and the equalized R, G, B channel image to obtain a final defogging image.
As a preferred embodiment, the total pixel value calculation formula of the R, G, B channel in the color correction method based on the subinterval linear transformation is as follows:
Figure BDA0002823888300000091
Figure BDA0002823888300000092
Figure BDA0002823888300000093
where M and N denote the number of rows and columns, respectively, of the input image, IR(i,j)、IG(i,j)、IB(i, j) represent the R, G, B three-channel image pixel values at the (i, j) locations, respectively;
meanwhile, the ratio of the red, green and blue channels is calculated as:
Figure BDA0002823888300000094
Figure BDA0002823888300000101
Figure BDA0002823888300000102
max represents a function of taking the maximum value, and the maximum value of the total pixel value of the R, G, B channel is obtained through the Max function; pR,PG,PBR, G, B represents the ratio of the total pixel value to the maximum total pixel value of any one channel respectively; to divide each channel into three intervals, two cut-off ratios are defined
Figure BDA0002823888300000103
And
Figure BDA0002823888300000104
is represented as follows:
Figure BDA0002823888300000105
Figure BDA0002823888300000106
wherein c represents R, G, B any channel, alpha1 and α2Are all constants between 0 and 1, pcRepresenting R, G, B a ratio of a total pixel value of any one channel to a sum of the maximum total pixel values; then, the threshold is cut off
Figure BDA0002823888300000107
And
Figure BDA0002823888300000108
corresponding to two cut-off ratios
Figure BDA0002823888300000109
And
Figure BDA00028238883000001010
determined as equations (9) and (10) according to the following quantile function:
Figure BDA00028238883000001011
Figure BDA00028238883000001012
wherein ,
Figure BDA00028238883000001013
and
Figure BDA00028238883000001014
representing a cut-off threshold, F is a lower quantile function, Ic(x) A pixel value at a point that is one of the three channels R, G, B,
Figure BDA00028238883000001015
and
Figure BDA00028238883000001016
is a cut-off ratio;
to effectively suppress shading and highlight values, the following operations are performed for each color channel:
Figure BDA00028238883000001017
wherein ,
Figure BDA00028238883000001018
representing R, G, B the processed pixel value at a point of any one of the channels,
Figure BDA00028238883000001019
and
Figure BDA00028238883000001020
denotes the cut-off threshold, Ic(x) R, G, B pixel values of a point of any one of the channels;
finally, the following linear operation is performed on the pixel values of the intermediate region:
Figure BDA00028238883000001021
wherein ,
Figure BDA00028238883000001022
representing the image after the color correction and,
Figure BDA00028238883000001023
representing R, G, B the processed pixel value at any point of any one channel.
As a preferred embodiment, in the present application, the linear stretching operation is performed on the single-channel image in step S03, and each gray value is ensured to be between [0,255], so that the expression of linear stretching is defined as:
Figure BDA0002823888300000111
when c ∈ { R, G, B }, PC(i, j) represents R, G, B the gray value of any channel after the position correction at (i, j); i isC(i, j) indicates R, G, B the gray scale value of any one channel at the (i, j) position; mincR, G, B represents the minimum value of a pixel of any one channel; maxcRepresenting R, G, B the maximum value of a pixel for any one of the channels.
Further, in the step of selecting the threshold and dividing the three regions in step S04, the average pixel value of a single channel is calculated as follows:
Figure BDA0002823888300000112
Figure BDA0002823888300000113
Figure BDA0002823888300000114
wherein Mean isR,MeanG,MeanBRepresenting the average pixel value of R, G, B three channels, respectively, M, N representing the number of rows and columns, respectively, of the input image, IR(i,j),IG(i,j),IB(i, j) respectively representing R, G, B pixel values of the three-channel image at the (i, j) position, wherein M × N represents the total pixel point number of a single channel;
calculating R, G, B an error between the pixel value of any point of one of the three channels of the channel and the average pixel value of the corresponding channel and performing square operation to obtain the square of the error, wherein the calculation formula is as follows:
Figure BDA0002823888300000115
Figure BDA0002823888300000116
wherein ,
Figure BDA0002823888300000117
representing the error between the pixel value of any point in one of the three channels of the R, G, B channels and the average pixel value of the corresponding channel, Ic (i, j) representing the pixel value of R, G, B at the (i, j) position, MeancThe average pixel value of any one channel is represented R, G, B,
Figure BDA0002823888300000118
representing R, G, B the square of the error between the pixel value of any point in one of the three channels of the channel and the average pixel value of the corresponding channel;
selecting a point with the largest error square as a central point through a Max function, and adding and subtracting three times of pixel value variance left and right according to a 3 sigma criterion to further obtain left and right thresholds so as to finish three-interval division;
Figure BDA0002823888300000121
t1=Maxmc-3σ(20)
t2=Maxmc+3σ(21)
wherein ,MaxcRepresenting the maximum squared error, Maxm, of one of the three channels of the R, G, B channelcThe position of the corresponding row, t, representing the squared error maximum of one of the three channels of the R, G, B channel1、t2Respectively, and represents the variance of the pixel value of one of the R, G, B channel three channels.
Further, the histogram equalization processing procedure for the subintervals of each channel in step S05 is as follows:
firstly, dividing the gray scale range of three subintervals according to a threshold value:
[0,255]=[0,t1]∪(t1,t2]∪(t2,255](22)
Figure BDA0002823888300000122
Figure BDA0002823888300000123
Figure BDA0002823888300000124
where I denotes the original image, I (I, j) denotes the gray value of the pixel located in the I-th row and j-th column of the image, X1,X2,X3Respectively representing a first sub-image, a second sub-image and a last sub-image;
firstly, calculating the frequency of each pixel of the whole image, calculating the frequencies of three sub-histograms, obtaining the normalized pixel frequency of each sub-histogram, and finally calculating the cumulative normalized frequency of the three sub-histograms;
when x represents the gray value of the image, three value ranges of x can be obtained according to the interval division; when X belongs to X1The frequency of the accumulated grey levels of the histogram of the first sub-image from 0 to x is then calculated and expressed as CDF1(x) (ii) a When X belongs to X2Then, the histogram of the second sub-image is calculated from t1Frequency of accumulated gray levels to x and expressed as CDF2(x) (ii) a When X belongs to X3Then, the histogram of the last sub-image is calculated from t2Frequency of accumulated gray levels to x and expressed as CDF3(x);
Then, the transformed gray values of the three sub-images after histogram equalization are calculated from the normalized pixel frequency of each sub-image's histogram. The sub-histogram equalization function follows the gray-level transformation function of conventional histogram equalization, which is:

f(x) = a + (b − a)·CDF(x)    (26)

where a denotes the minimum output gray value, b the maximum output gray value, x the input gray value, and CDF(x) the cumulative distribution function of x.
The sub-histogram equalization formula is then:

y(x) = t1·CDF1(x),              0 ≤ x ≤ t1
y(x) = t1 + (t2 − t1)·CDF2(x),  t1 < x ≤ t2    (27)
y(x) = t2 + (255 − t2)·CDF3(x), t2 < x ≤ 255

where y denotes the gray-value transformation function of the three-interval equalization, from which the processed result is obtained; t1 and t2 denote the two thresholds dividing the sub-histograms, x denotes the input gray value, and CDF1(x), CDF2(x), CDF3(x) denote the cumulative gray-level frequencies of the first, second and third sub-histograms, respectively.
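The three-interval equalization of equations (22)–(27) can be sketched as follows. This is a minimal sketch: the integer interval bounds and the skipping of empty sub-histograms are implementation choices not fixed by the text.

```python
import numpy as np

def three_interval_equalize(channel, t1, t2):
    """Sketch of the step-S05 sub-histogram equalization (eqs. 22-27).

    Each of the three gray ranges [0,t1], (t1,t2], (t2,255] is equalized
    independently with f(x) = a + (b - a) * CDF(x), so pixels never leave
    their own interval and the overall brightness order is preserved.
    """
    out = channel.astype(np.float64).copy()
    bounds = [(0, t1), (t1 + 1, t2), (t2 + 1, 255)]
    for a, b in bounds:
        mask = (channel >= a) & (channel <= b)
        if not mask.any() or b <= a:
            continue                     # empty or degenerate sub-interval
        vals = channel[mask]
        # normalized cumulative histogram of this sub-image only
        hist, _ = np.histogram(vals, bins=np.arange(a, b + 2))
        cdf = hist.cumsum() / hist.sum()
        out[mask] = a + (b - a) * cdf[vals - a]   # eq. 26 applied per interval
    return out.astype(np.uint8)
```

Because each interval maps onto itself, the result stays within [0, 255] and the three sub-histograms are stretched without crossing the thresholds.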
Still further, the multi-scale fusion comprises the steps of:
Step S071: define the aggregation weight map and fuse it with the input images. The aggregation weight map is determined by three measurement weights: the contrast weight, the saturation weight and the exposedness weight map.

The contrast weight is a contrast weight map: the grayscale version of the input image is used to estimate a global contrast weight WLa from the absolute value of the Laplacian response, preserving the edge and detail texture information of the image:
La = [ 0 1 0; 1 −4 1; 0 1 0 ]    (28)
WLa = |La * F|    (29)

where La denotes the Laplacian operator, * denotes convolution, and F denotes the input image.
The saturation weight is the standard deviation, at each pixel, of the R, G, B channel values in the RGB color space:

Wsa(x,y) = sqrt( [ (R(x,y) − m(x,y))² + (G(x,y) − m(x,y))² + (B(x,y) − m(x,y))² ] / 3 )    (30)

where R(x,y), G(x,y), B(x,y) denote the R, G, B channels of the input image, m(x,y) denotes the average of the R, G, B channels at position (x,y), and Wsa(x,y) denotes the saturation weight at position (x,y).
The exposedness weight map keeps pixel values close to 0.5, i.e. the mid-tone; the exposedness weight of each pixel is given by a Gaussian curve with expected value 0.5:

WE(x,y) = exp( −(F(x,y) − 0.5)² / (2σ²) )    (31)

where F(x,y) is the input pixel value and σ controls the spread of the curve.
In the multi-scale fusion, the aggregation weight map is obtained by multiplying the three feature weight maps: the contrast weight map WLa, the saturation weight map WSa and the exposedness weight map WE are multiplied at each pixel of each input image:

Wz = WLaz × WSaz × WEz    (32)

where z indexes the z-th input image and Wz denotes the resulting two-dimensional weight map. To ensure consistency among the images, the normalized weight map W̄z is introduced:

W̄z = Wz / Σz' Wz'    (33)

where W̄z denotes the aggregation weight map.
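A compact numpy sketch of the three weight maps and their normalized aggregation. The 4-neighbour Laplacian kernel and the σ = 0.25 spread of the exposedness Gaussian are common choices assumed here, since the patent gives the corresponding equations only as images.

```python
import numpy as np

def aggregated_weight(img):
    """Sketch of the step-S071 weight maps for one input image.

    `img` is an H x W x 3 float array in [0, 1]. Returns the per-pixel
    product of contrast, saturation and exposedness weights (eq. 32).
    """
    gray = img.mean(axis=2)
    # contrast weight: absolute response of a 3x3 Laplacian filter (eq. 29)
    pad = np.pad(gray, 1, mode="edge")
    lap = (pad[:-2, 1:-1] + pad[2:, 1:-1] + pad[1:-1, :-2]
           + pad[1:-1, 2:] - 4.0 * gray)
    w_la = np.abs(lap)
    # saturation weight: per-pixel std of the R, G, B values (eq. 30)
    m = img.mean(axis=2)
    w_sa = np.sqrt(((img - m[..., None]) ** 2).mean(axis=2))
    # exposedness weight: Gaussian around the mid-tone 0.5 (eq. 31)
    w_e = np.exp(-((gray - 0.5) ** 2) / (2 * 0.25 ** 2))
    return w_la * w_sa * w_e            # eq. 32

def normalize_weights(weights, eps=1e-12):
    """Eq. 33: make the per-image weight maps sum to one at every pixel."""
    total = np.sum(weights, axis=0) + eps
    return [w / total for w in weights]
```

The `eps` term guards against division by zero where all weights vanish; the patent does not specify this regularization, so it is an assumption of the sketch.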
Step S072: fuse the input images with the aggregation weight maps. Each input image Iz is decomposed by the Laplacian pyramid, defined as Ll{Iz}, and each aggregation weight map W̄z is decomposed by the Gaussian pyramid, defined as Gl{W̄z}, where the superscript l denotes the l-th pyramid level. The Laplacian pyramid Ll{Iz} and the Gaussian pyramid Gl{W̄z} are fused pixel by pixel as follows:

Ll{F} = Σz Gl{W̄z} · Ll{Iz}    (35)

where L{F} denotes the Laplacian pyramid of the fused image, which is reconstructed to obtain the final fused image.
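The pyramid decomposition and the per-level blending of equation (35) can be sketched as follows. The separable [1, 2, 1] smoothing filter standing in for the Gaussian kernel, and the default of three pyramid levels, are assumptions of this sketch, not choices fixed by the patent.

```python
import numpy as np

def _blur(a):
    # separable 3-tap [1, 2, 1]/4 smoothing with edge padding
    p = np.pad(a, 1, mode="edge")
    a = (p[:-2, 1:-1] + 2 * p[1:-1, 1:-1] + p[2:, 1:-1]) / 4.0
    p = np.pad(a, 1, mode="edge")
    return (p[1:-1, :-2] + 2 * p[1:-1, 1:-1] + p[1:-1, 2:]) / 4.0

def _down(a):
    return _blur(a)[::2, ::2]

def _up(a, shape):
    out = np.repeat(np.repeat(a, 2, axis=0), 2, axis=1)[:shape[0], :shape[1]]
    return _blur(out)

def gaussian_pyramid(a, levels):
    pyr = [a]
    for _ in range(levels - 1):
        pyr.append(_down(pyr[-1]))
    return pyr

def laplacian_pyramid(a, levels):
    gp = gaussian_pyramid(a, levels)
    lp = [gp[l] - _up(gp[l + 1], gp[l].shape) for l in range(levels - 1)]
    lp.append(gp[-1])                     # coarsest level is kept as-is
    return lp

def fuse(images, weights, levels=3):
    """Eq. 35: L_l{F} = sum_z G_l{W_z} * L_l{I_z}, then collapse the pyramid."""
    fused = None
    for img, w in zip(images, weights):
        lp = laplacian_pyramid(img, levels)
        gp = gaussian_pyramid(w, levels)
        terms = [g * l for g, l in zip(gp, lp)]
        fused = terms if fused is None else [f + t for f, t in zip(fused, terms)]
    # reconstruct from the coarsest level upward
    out = fused[-1]
    for l in range(levels - 2, -1, -1):
        out = fused[l] + _up(out, fused[l].shape)
    return out
```

With a single input image and an all-ones weight map, the fusion reduces to an exact Laplacian-pyramid round trip and returns the input unchanged, which is a convenient sanity check.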
Example 1
As shown in fig. 2, the first greenish image is processed by the various algorithms. Fig. 2-1-1 shows the result of the HEEF algorithm: the output remains greenish overall and slightly blurred in its details, falling short of the expected effect. Fig. 2-1-2 shows the result of the BBHE algorithm: the output is still greenish overall with unclear details. Fig. 2-1-3 shows the result of the DOTHE algorithm: the green cast is reduced and details are partly improved, but the image is over-exposed, so its brightness is too high and details are lost. Fig. 2-1-4 shows the result of the proposed algorithm: the color cast is removed, details are markedly improved, and contrast is raised; the enhancement is successful.
The second greenish image is processed next. Fig. 2-2-1 shows the HEEF result: the overall green cast is not resolved and details are slightly blurred. Fig. 2-2-2 shows the BBHE result: still greenish overall with unclear details. Fig. 2-2-3 shows the DOTHE result: the color cast and details are partly improved, but over-exposure makes the brightness too high and loses detail. Fig. 2-2-4 shows the result of the proposed algorithm: the color cast is corrected, details are markedly improved, and contrast is raised; the enhancement clearly surpasses the other results.
The third bluish image is then processed. Fig. 2-3-1 (HEEF): the overall blue cast is barely improved and some details are blurry. Fig. 2-3-2 (BBHE): still bluish overall with unclear details. Fig. 2-3-3 (DOTHE): the color cast and details are partly improved, but over-exposure causes excessive brightness and detail loss. Fig. 2-3-4 (proposed): the color cast is corrected, details are greatly improved, and contrast is raised. The fourth bluish image follows the same pattern: fig. 2-4-1 (HEEF) leaves the blue cast largely uncorrected with blurred details; fig. 2-4-2 (BBHE) remains bluish with unclear details; fig. 2-4-3 (DOTHE) improves the cast and details only partly while over-exposing the image; fig. 2-4-4 (proposed) corrects the color cast, greatly improves details, and raises contrast.
In summary, the first three algorithms fall short in color correction, contrast and detail, whereas the algorithm proposed herein corrects the colors of the degraded underwater images, improves their contrast, and highlights their details.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments. In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments. In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (6)

1. The underwater image enhancement method based on color correction and three-interval histogram stretching is characterized by comprising the following steps of:
step S01: acquiring an original RGB dense fog image; carrying out color correction on the original RGB dense fog image by a color correction method based on subinterval linear transformation to obtain an enhanced image after color correction;
step S02: decomposing the original RGB dense fog image into R, G, B channel images, and performing the following processing on the pixel values of the R, G, B channel images;
step S03: stretching the pixel values of the R, G, B channel images to be within the range of 0-255 respectively to obtain stretched single-channel images;
step S04: calculating the average pixel value of each of the R, G, B channel images; taking the difference between the value of each pixel and the single-channel average pixel value as an error, and squaring the error; selecting, from the squared errors of all pixels, the pixel with the largest squared error — the pixels around this point also differ strongly from the average pixel value — and, taking this point as the center, adding and subtracting three times the variance to determine the two thresholds required for the three-interval division, so that the whole single-channel histogram is divided into three intervals;
step S05: equalizing the subintervals of the R, G, B channels to obtain an image after single-channel equalization;
step S06: and carrying out linear weighted fusion on the R, G, B channel image and the equalized R, G, B channel image to obtain a final defogging image.
2. The underwater image enhancement method based on color correction and three-interval histogram stretching according to claim 1, wherein the total pixel values of the R, G, B channels in the color correction method based on subinterval linear transformation are calculated as:

SumR = Σ(i=1..M) Σ(j=1..N) IR(i,j)    (1)
SumG = Σ(i=1..M) Σ(j=1..N) IG(i,j)    (2)
SumB = Σ(i=1..M) Σ(j=1..N) IB(i,j)    (3)

where M and N denote the numbers of rows and columns of the input image, respectively, and IR(i,j), IG(i,j), IB(i,j) denote the pixel values of the R, G, B channel images at position (i,j), respectively;
meanwhile, the ratios of the red, green and blue channels are calculated as:

PR = SumR / Max(SumR, SumG, SumB)    (4)
PG = SumG / Max(SumR, SumG, SumB)    (5)
PB = SumB / Max(SumR, SumG, SumB)    (6)
where Max denotes the maximum-value function, through which the maximum of the R, G, B channel total pixel values is obtained, and PR, PG, PB denote the ratio of the total pixel value of each channel to that maximum total pixel value; to divide each channel into three intervals, two cut-off ratios, rc_low and rc_high, are defined by equations (7) and (8) in terms of α1, α2 and pc, where c denotes any one of the R, G, B channels, α1 and α2 are constants between 0 and 1, and pc denotes the ratio of the channel's total pixel value to the maximum total pixel value; the cut-off thresholds Ic_low and Ic_high corresponding to the two cut-off ratios rc_low and rc_high are then determined according to the quantile function:

Ic_low = F(rc_low)    (9)
Ic_high = F(rc_high)    (10)

where Ic_low and Ic_high denote the cut-off thresholds, F is the lower quantile function of the channel's pixel values Ic(x), and rc_low, rc_high are the cut-off ratios;
to effectively suppress shadow and highlight values, the following clipping is performed for each color channel:

Ic'(x) = min( max( Ic(x), Ic_low ), Ic_high )    (11)

where Ic'(x) denotes the processed pixel value at a point x of any one of the R, G, B channels, Ic_low and Ic_high denote the cut-off thresholds, and Ic(x) denotes the pixel value at a point x of the channel;
finally, the following linear operation is performed on the pixel values of the intermediate region:

Ic''(x) = 255 · (Ic'(x) − Ic_low) / (Ic_high − Ic_low)    (12)

where Ic''(x) denotes the image after color correction and Ic'(x) denotes the processed pixel value at any point of any one channel.
3. The underwater image enhancement method based on color correction and three-interval histogram stretching according to claim 1, wherein step S03 performs a linear stretching operation on each single-channel image and ensures that every gray value lies within [0,255]; the linear stretch is therefore defined as:

Pc(i,j) = 255 · (Ic(i,j) − Minc) / (Maxc − Minc)    (13)

where c ∈ {R, G, B}; Pc(i,j) denotes the gray value of channel c at position (i,j) after stretching; Ic(i,j) denotes the gray value of channel c at position (i,j); Minc denotes the minimum pixel value of channel c; and Maxc denotes the maximum pixel value of channel c.
4. The underwater image enhancement method based on color correction and three-interval histogram stretching according to claim 1, wherein the threshold selection and three-interval division of step S04 first calculate the average pixel value of each single channel:

MeanR = (1/(M×N)) Σ(i=1..M) Σ(j=1..N) IR(i,j)    (14)
MeanG = (1/(M×N)) Σ(i=1..M) Σ(j=1..N) IG(i,j)    (15)
MeanB = (1/(M×N)) Σ(i=1..M) Σ(j=1..N) IB(i,j)    (16)

where MeanR, MeanG, MeanB denote the average pixel values of the R, G, B channels, respectively, M and N denote the numbers of rows and columns of the input image, IR(i,j), IG(i,j), IB(i,j) denote the pixel values of the R, G, B channel images at position (i,j), respectively, and M×N is the total number of pixels in a single channel;
the error between the pixel value at any point of one of the R, G, B channels and the average pixel value of that channel is then calculated and squared:

Ec(i,j) = Ic(i,j) − Meanc    (17)
Ec²(i,j) = (Ic(i,j) − Meanc)²    (18)

where Ec(i,j) denotes the error between the pixel value at any point of one of the three R, G, B channels and the average pixel value of the corresponding channel, Ic(i,j) denotes the pixel value of the channel at position (i,j), Meanc denotes the average pixel value of the channel, and Ec²(i,j) denotes the squared error;
the point with the largest squared error is selected as the center point via the Max function; taking this point as the center, three times the pixel-value standard deviation is subtracted and added according to the 3σ criterion to obtain the left and right thresholds, completing the three-interval division:

Maxc = Max(Ec²(i,j))    (19)

t1 = Maxmc − 3σ    (20)

t2 = Maxmc + 3σ    (21)

where Maxc denotes the maximum squared error of one of the three R, G, B channels, Maxmc denotes the position of that maximum squared error, t1 and t2 denote the left and right thresholds, and σ denotes the standard deviation of the pixel values of the channel.
5. The underwater image enhancement method based on color correction and three-interval histogram stretching according to claim 1, wherein the histogram equalization processing procedure for the sub-intervals of each channel in the step S05 is as follows:
first, the gray ranges of the three subintervals are divided according to the thresholds:

[0,255] = [0,t1] ∪ (t1,t2] ∪ (t2,255]    (22)

X1 = { I(i,j) | 0 ≤ I(i,j) ≤ t1 }    (23)

X2 = { I(i,j) | t1 < I(i,j) ≤ t2 }    (24)

X3 = { I(i,j) | t2 < I(i,j) ≤ 255 }    (25)

where I denotes the original image, I(i,j) denotes the gray value of the pixel in the i-th row and j-th column of the image, and X1, X2, X3 denote the first, second and last sub-images, respectively;
next, the pixel frequencies of the whole image are calculated, the frequencies of the three sub-histograms are calculated, the normalized pixel frequency of each sub-histogram is obtained, and finally the cumulative normalized frequencies of the three sub-histograms are computed;
let x denote a gray value of the image; the three value ranges of x follow from the interval division. When x belongs to X1, the cumulative frequency of the gray levels of the first sub-image's histogram from 0 to x is computed and denoted CDF1(x); when x belongs to X2, the cumulative frequency of the second sub-image's histogram from t1 to x is computed and denoted CDF2(x); when x belongs to X3, the cumulative frequency of the last sub-image's histogram from t2 to x is computed and denoted CDF3(x);
then, the transformed gray values of the three sub-images after histogram equalization are calculated from the normalized pixel frequency of each sub-image's histogram; the sub-histogram equalization function follows the gray-level transformation function of conventional histogram equalization, which is:

f(x) = a + (b − a)·CDF(x)    (26)

where a denotes the minimum output gray value, b the maximum output gray value, x the input gray value, and CDF(x) the cumulative distribution function of x;
the sub-histogram equalization formula is then:

y(x) = t1·CDF1(x),              0 ≤ x ≤ t1
y(x) = t1 + (t2 − t1)·CDF2(x),  t1 < x ≤ t2    (27)
y(x) = t2 + (255 − t2)·CDF3(x), t2 < x ≤ 255

where y denotes the gray-value transformation function of the three-interval equalization, from which the processed result is obtained; t1 and t2 denote the two thresholds dividing the sub-histograms, x denotes the input gray value, and CDF1(x), CDF2(x), CDF3(x) denote the cumulative gray-level frequencies of the first, second and third sub-histograms, respectively.
6. The underwater image enhancement method based on color correction and three-interval histogram stretching according to claim 1,
the multi-scale fusion comprises the following steps:
step S071: defining an aggregation weight map and fusing it with the input images; the aggregation weight map is determined by three measurement weights: the contrast weight, the saturation weight and the exposedness weight map;
the contrast weight is a contrast weight map: the grayscale version of the input image is used to estimate a global contrast weight WLa from the absolute value of the Laplacian response, preserving the edge and detail texture information of the image:
La = [ 0 1 0; 1 −4 1; 0 1 0 ]    (28)
WLa = |La * F|    (29)

where La denotes the Laplacian operator, * denotes convolution, and F denotes the input image;
the saturation weight is the standard deviation, at each pixel, of the R, G, B channel values in the RGB color space:

Wsa(x,y) = sqrt( [ (R(x,y) − m(x,y))² + (G(x,y) − m(x,y))² + (B(x,y) − m(x,y))² ] / 3 )    (30)

where R(x,y), G(x,y), B(x,y) denote the R, G, B channels of the input image, m(x,y) denotes the average of the R, G, B channels at position (x,y), and Wsa(x,y) denotes the saturation weight at position (x,y);
the exposedness weight map keeps pixel values close to 0.5, i.e. the mid-tone; the exposedness weight of each pixel is given by a Gaussian curve with expected value 0.5:

WE(x,y) = exp( −(F(x,y) − 0.5)² / (2σ²) )    (31)

where F(x,y) is the input pixel value and σ controls the spread of the curve;
in the multi-scale fusion, the aggregation weight map is obtained by multiplying the three feature weight maps: the contrast weight map WLa, the saturation weight map WSa and the exposedness weight map WE are multiplied at each pixel of each input image:

Wz = WLaz × WSaz × WEz    (32)

where z indexes the z-th input image and Wz denotes the resulting two-dimensional weight map; to ensure consistency among the images, the normalized weight map W̄z is introduced:

W̄z = Wz / Σz' Wz'    (33)

where W̄z denotes the aggregation weight map;
step S072: fusing the input images with the aggregation weight maps; each input image Iz is decomposed by the Laplacian pyramid, defined as Ll{Iz}, and each aggregation weight map W̄z is decomposed by the Gaussian pyramid, defined as Gl{W̄z}, where the superscript l denotes the l-th pyramid level; the Laplacian pyramid Ll{Iz} and the Gaussian pyramid Gl{W̄z} are fused pixel by pixel as follows:

Ll{F} = Σz Gl{W̄z} · Ll{Iz}    (35)

where L{F} denotes the Laplacian pyramid of the fused image, which is reconstructed to obtain the final fused image.
CN202011444565.0A 2020-12-08 2020-12-08 Underwater image enhancement method based on color correction and three-interval histogram stretching Active CN112419210B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011444565.0A CN112419210B (en) 2020-12-08 2020-12-08 Underwater image enhancement method based on color correction and three-interval histogram stretching

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011444565.0A CN112419210B (en) 2020-12-08 2020-12-08 Underwater image enhancement method based on color correction and three-interval histogram stretching

Publications (2)

Publication Number Publication Date
CN112419210A true CN112419210A (en) 2021-02-26
CN112419210B CN112419210B (en) 2023-09-22

Family

ID=74775554

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011444565.0A Active CN112419210B (en) 2020-12-08 2020-12-08 Underwater image enhancement method based on color correction and three-interval histogram stretching

Country Status (1)

Country Link
CN (1) CN112419210B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114445300A (en) * 2022-01-29 2022-05-06 赵恒� Nonlinear underwater image gain algorithm for hyperbolic tangent deformation function transformation
CN114494084A (en) * 2022-04-14 2022-05-13 广东欧谱曼迪科技有限公司 Image color homogenizing method and device, electronic equipment and storage medium
WO2023130547A1 (en) * 2022-01-06 2023-07-13 广东欧谱曼迪科技有限公司 Endoscopic image dehazing method and apparatus, electronic device, and storage medium
CN117078561A (en) * 2023-10-13 2023-11-17 深圳市东视电子有限公司 RGB-based self-adaptive color correction and contrast enhancement method and device

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090169102A1 (en) * 2007-11-29 2009-07-02 Chao Zhang Multi-scale multi-camera adaptive fusion with contrast normalization
CN111127359A (en) * 2019-12-19 2020-05-08 大连海事大学 Underwater image enhancement method based on selective compensation color and three-interval balance

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090169102A1 (en) * 2007-11-29 2009-07-02 Chao Zhang Multi-scale multi-camera adaptive fusion with contrast normalization
CN111127359A (en) * 2019-12-19 2020-05-08 大连海事大学 Underwater image enhancement method based on selective compensation color and three-interval balance

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
于君霞 (Yu Junxia): "Improved underwater image enhancement algorithm based on adaptive dynamic clipping", 西部皮革 (West Leather), no. 22 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023130547A1 (en) * 2022-01-06 2023-07-13 广东欧谱曼迪科技有限公司 Endoscopic image dehazing method and apparatus, electronic device, and storage medium
CN114445300A (en) * 2022-01-29 2022-05-06 赵恒� Nonlinear underwater image gain algorithm for hyperbolic tangent deformation function transformation
CN114494084A (en) * 2022-04-14 2022-05-13 广东欧谱曼迪科技有限公司 Image color homogenizing method and device, electronic equipment and storage medium
CN114494084B (en) * 2022-04-14 2022-07-26 广东欧谱曼迪科技有限公司 Image color homogenizing method and device, electronic equipment and storage medium
CN117078561A (en) * 2023-10-13 2023-11-17 深圳市东视电子有限公司 RGB-based self-adaptive color correction and contrast enhancement method and device
CN117078561B (en) * 2023-10-13 2024-01-19 深圳市东视电子有限公司 RGB-based self-adaptive color correction and contrast enhancement method and device

Also Published As

Publication number Publication date
CN112419210B (en) 2023-09-22

Similar Documents

Publication Publication Date Title
Zheng et al. Image dehazing by an artificial image fusion method based on adaptive structure decomposition
CN112419210A (en) Underwater image enhancement method based on color correction and three-interval histogram stretching
CN110175964B (en) Retinex image enhancement method based on Laplacian pyramid
CN108876743B (en) Image rapid defogging method, system, terminal and storage medium
Jiang et al. Image dehazing using adaptive bi-channel priors on superpixels
US9230304B2 (en) Apparatus and method for enhancing image using color channel
CN112288658A (en) Underwater image enhancement method based on multi-residual joint learning
Li et al. Color correction based on cfa and enhancement based on retinex with dense pixels for underwater images
CN106485668A (en) Mthods, systems and devices for overexposure correction
CN109064423B (en) Intelligent image repairing method for generating antagonistic loss based on asymmetric circulation
CN106846263A (en) The image defogging method being immunized based on fusion passage and to sky
Chen et al. Hazy image restoration by bi-histogram modification
CN105894484A (en) HDR reconstructing algorithm based on histogram normalization and superpixel segmentation
CN111861901A (en) Edge generation image restoration method based on GAN network
CN112085673A (en) Multi-exposure image fusion method for removing strong ghost
Liu et al. Image contrast enhancement based on intensity expansion-compression
CN111179196B (en) Multi-resolution depth network image highlight removing method based on divide-and-conquer
CN113284061B (en) Underwater image enhancement method based on gradient network
Steffens et al. Deep learning based exposure correction for image exposure correction with application in computer vision for robotics
CN105608683A (en) Defogging method of single image
Dixit et al. Image Contrast Optimization using Local Color Correction and Fuzzy Intensification
CN116563133A (en) Low-illumination color image enhancement method based on simulated exposure and multi-scale fusion
Srigowri Enhancing unpaired underwater images with cycle consistent network
Kaur et al. A novel hybrid technique for low exposure image enhancement using sub-imge histogram equilization and artificial neural network
Filin et al. Haze removal method based on joint transmission map estimation and atmospheric-light extraction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant