CN112102175A - Image contrast enhancement method and device, storage medium and electronic equipment - Google Patents
- Publication number
- CN112102175A (application number CN201910526097.2A)
- Authority
- CN
- China
- Prior art keywords
- image
- contrast
- scene
- source image
- candidate
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Image Processing (AREA)
Abstract
The embodiment of the application discloses a method and a device for enhancing image contrast, a storage medium and electronic equipment. The method comprises the following steps: determining image features of a source image; inputting the image features of the source image into a scene classification model, and determining the scene category of the source image according to the output result of the scene classification model; and enhancing the contrast of the source image according to the scene category of the source image and the mapping relationship between candidate scene categories and candidate contrast enhancement algorithms. With this technical scheme, the contrast of an image can be enhanced accurately for each type of poor-contrast scene.
Description
Technical Field
The embodiment of the application relates to the technical field of image processing, in particular to a method and a device for enhancing image contrast, a storage medium and an electronic device.
Background
Contrast enhancement plays a critical role in improving image and video quality, and is widely applied in computer vision, pattern recognition and digital image processing. Owing to factors such as the imaging equipment and the illumination conditions, real images often suffer from poor contrast and faint local detail in the target, which impairs fine recognition of the target by the human eye and automatic recognition by machines. In practical applications, image contrast enhancement technology is therefore commonly used to improve the visual effect of the image.
Existing contrast enhancement algorithms can be divided into global and local algorithms according to their processing strategy. Common global algorithms include histogram equalization, gamma transformation, piecewise linear transformation and the like, which adjust the pixel data through a mapping function. Common local algorithms include local histogram equalization and the like, which process each pixel with reference to neighborhood statistics. Global algorithms are simple and convenient to implement, but they do not select which content to enhance, the degree of enhancement is hard to control, and they cannot accurately enhance the various poor-contrast scenes. Local algorithms adapt well to local regions of the image, but are generally less effective at improving its overall contrast.
Disclosure of Invention
The embodiment of the application provides an image contrast enhancement method and device, a storage medium and electronic equipment, which can accurately enhance the contrast of an image for each of the different poor-contrast scene types.
In a first aspect, an embodiment of the present application provides a method for enhancing image contrast, where the method includes:
determining image characteristics of a source image;
inputting the image characteristics of the source image into a scene classification model, and determining the scene category of the source image according to the output result of the scene classification model;
and enhancing the contrast of the source image according to the scene category of the source image and the mapping relation between the candidate scene category and the candidate contrast enhancement algorithm.
Further, determining image features of the source image, comprising:
extracting basic image features of a source image, wherein the basic image features comprise at least one of a brightness histogram, a gradient histogram and a high-order derivative distribution;
and carrying out format conversion on the basic image characteristics according to the input data format of the scene classification model.
Further, the scene category of the source image comprises at least one of the following: a low brightness low contrast image, a medium brightness low contrast image, a high brightness low contrast image, and a high contrast image.
Further, the output result includes a candidate scene category number and a probability value corresponding to the candidate scene category number;
correspondingly, the determining the scene classification of the source image according to the output result of the scene classification model comprises:
and determining the candidate scene category with the maximum probability value corresponding to the candidate scene category number as the scene category of the source image.
Further, according to the scene category of the source image and the mapping relationship between the candidate scene category and the candidate contrast enhancement algorithm, the contrast of the source image is enhanced, and the method comprises the following steps:
determining a target contrast enhancement algorithm of the source image according to the scene category of the source image and the mapping relation between the candidate scene category and the candidate contrast enhancement algorithm;
performing primary enhancement processing on the contrast of the source image by adopting a target contrast enhancement algorithm of the source image;
and adjusting the result of the primary enhancement processing according to the probability value of the scene category to obtain a final enhancement processing result.
Further, according to the probability value of the scene category, the result of the preliminary enhancement processing is adjusted to obtain a final enhancement processing result, including:
calculating the final enhancement processing result by the following formula:
Out=S·O(x,y)+(1-S)·I(x,y);
wherein Out is the final enhancement processing result, O (x, y) is the preliminary enhancement processing result, S is the probability value of the scene category, and I (x, y) is the source image.
In a second aspect, an embodiment of the present application provides an apparatus for enhancing image contrast, the apparatus including:
the image characteristic determining module is used for determining the image characteristics of the source image;
the scene classification module is used for inputting the image characteristics of the source image into a scene classification model and determining the scene category of the source image according to the output result of the scene classification model;
and the contrast enhancement processing module is used for enhancing the contrast of the source image according to the scene category of the source image and the mapping relation between the candidate scene category and the candidate contrast enhancement algorithm.
Further, the image feature determination module includes:
a basic image feature extraction unit, configured to extract basic image features of a source image, where the basic image features include at least one of a luminance histogram, a gradient histogram, and a higher-order derivative distribution;
and the format conversion unit is used for carrying out format conversion on the basic image characteristics according to the input data format of the scene classification model.
Further, the scene category of the source image comprises at least one of the following: a low brightness low contrast image, a medium brightness low contrast image, a high brightness low contrast image, and a high contrast image.
Further, the output result includes a candidate scene category number and a probability value corresponding to the candidate scene category number;
correspondingly, the scene classification module comprises a scene category determination unit configured to:
and determining the candidate scene category with the maximum probability value corresponding to the candidate scene category number as the scene category of the source image.
Further, the contrast enhancement processing module includes:
the target algorithm determining unit is used for determining a target contrast enhancement algorithm of the source image according to the scene category of the source image and the mapping relation between the candidate scene category and the candidate contrast enhancement algorithm;
the primary enhancement processing unit is used for carrying out primary enhancement processing on the contrast of the source image by adopting a target contrast enhancement algorithm of the source image;
and the adjusting unit is used for adjusting the result of the primary enhancement processing according to the probability value of the scene category to obtain a final enhancement processing result.
Further, the adjusting unit is specifically configured to:
calculating the final enhancement processing result by the following formula:
Out=S·O(x,y)+(1-S)·I(x,y);
wherein Out is the final enhancement processing result, O (x, y) is the preliminary enhancement processing result, S is the probability value of the scene category, and I (x, y) is the source image.
In a third aspect, an embodiment of the present application provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program implements the method for enhancing image contrast according to the embodiments of the present application.
In a fourth aspect, an embodiment of the present application provides an electronic device including a memory, a processor, and a computer program stored in the memory and executable by the processor; when the processor executes the program, the method for enhancing image contrast according to the embodiments of the present application is implemented.
According to the technical scheme provided by the embodiment of the application, the image features of the source image are determined; the image features of the source image are input into a scene classification model, and the scene category of the source image is determined according to the output result of the scene classification model; and the contrast of the source image is enhanced according to the scene category of the source image and the mapping relationship between candidate scene categories and candidate contrast enhancement algorithms. With this technical scheme, the contrast of an image can be enhanced accurately for each type of poor-contrast scene.
Drawings
FIG. 1 is a schematic diagram of four typical poor-contrast scene types provided by an embodiment of the present application;
fig. 2 is a flowchart of a method for enhancing image contrast according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a feature combination provided in an embodiment of the present application;
fig. 4 is a schematic diagram of a framework of a scene separation model provided in the second embodiment of the present application;
fig. 5 is a schematic structural diagram of an apparatus for enhancing image contrast provided in the third embodiment of the present application;
fig. 6 is a schematic structural diagram of an electronic device according to a fifth embodiment of the present application.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the application and are not limiting of the application. It should be further noted that, for the convenience of description, only some of the structures related to the present application are shown in the drawings, not all of the structures.
Before discussing exemplary embodiments in more detail, it should be noted that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart may describe the steps as a sequential process, many of the steps can be performed in parallel, concurrently or simultaneously. In addition, the order of the steps may be rearranged. The process may be terminated when its operations are completed, but may have additional steps not included in the figure. The processes may correspond to methods, functions, procedures, subroutines, and the like.
Contrast describes the difference between the bright and dark regions of an image: the greater the difference, the higher the contrast, and the smaller the difference, the lower the contrast. Contrast directly affects the visual effect of an image. Generally, the higher the contrast, the clearer and more transparent the image; the lower the contrast, the poorer the image quality and the less striking the visual effect. Higher contrast increases the transparency of an image and improves its detail and gray-level expression; if the contrast is too high, however, the visual effect degrades again. Contrast that is either too low or too high therefore results in a poor image.
According to average brightness and contrast, poor-contrast scene images are mainly divided into four types: low-brightness low-contrast images, medium-brightness low-contrast images, high-brightness low-contrast images, and high-contrast images. Fig. 1 is a schematic diagram of the four typical poor-contrast scene types provided in this embodiment of the present application. As shown in Fig. 1, a low-brightness low-contrast image is dark overall: most pixels are concentrated in a low gray-scale range, the difference between brightness levels is small, and the image quality is poor. In a medium-brightness low-contrast image, most pixels are concentrated in the middle gray-scale range, the difference between brightness levels is small, the visual effect is poor, and the image looks foggy. A high-brightness low-contrast image is bright overall: most pixels are concentrated in a high gray-scale range, the difference between brightness levels is small, and the image quality is poor. A high-contrast image has large bright and dark regions, its overall contrast is too high, and details in both the bright and dark regions are seriously lost.
The contrast enhancement algorithm aims to stretch or compress the brightness range of the image into the brightness display range of the display system, thereby improving the global or local contrast of the image, enhancing the detail and gray-level expression of local areas, and finally making the image clearer and more transparent. Existing contrast enhancement algorithms can be divided into global and local algorithms according to their processing strategy. Common global algorithms include histogram equalization, gamma transformation, piecewise linear transformation and the like, which adjust the pixel data through a mapping function. Common local algorithms include local histogram equalization and the like, which process with reference to neighborhood statistics. The global method is simple to implement, but the enhanced objects are not selective, the enhancement degree is hard to control, and the various poor-contrast scenes cannot be accurately enhanced. The local method adapts well to local regions of the image, but has only a moderate effect on the overall contrast. Moreover, each existing global or local approach is a single fixed algorithm, and all have some limitation in scene adaptability.
To address these problems, the invention provides a scene-adaptive image contrast enhancement method. The method comprises three modules: poor-contrast scene detection, parameter control, and contrast enhancement algorithm application. It can accurately judge the category of a poor-contrast scene and adaptively select the most appropriate algorithm for contrast enhancement.
Example one
Fig. 2 is a flowchart of a method for enhancing image contrast according to an embodiment of the present application. The embodiment is applicable to adjusting the contrast of an image. The method may be executed by the apparatus for enhancing image contrast provided in an embodiment of the present application; the apparatus may be implemented in software and/or hardware and may be integrated in an electronic device such as a smart terminal.
As shown in fig. 2, the method for enhancing image contrast includes:
s210, determining image characteristics of the source image.
The source image is the image whose contrast is to be processed. In some cases no contrast processing is ultimately applied, for example when the contrast of the image already meets the user's requirements. The description here assumes the goal is to process the contrast of the image.
The image features of the source image may be features of a specific area of the image or of the whole image. They may be the gray values of pixels in the image, and may even include the source image itself, i.e., unprocessed image features. In this embodiment, the image features of the source image may include, but are not limited to, the brightness distribution, the gradient distribution, the higher-order derivative distribution, and the like. The brightness distribution may be a brightness histogram, the gradient distribution may be a gradient histogram, and the higher-order derivative distribution may be a second- or third-order derivative distribution histogram. One or more of these features may be selected; for example, the brightness histogram and the gradient histogram of the source image may be obtained at the same time and then combined into the input data for the scene classification model.
In this embodiment, optionally, the method includes: extracting basic image features of the source image, where the basic image features include at least one of a brightness histogram, a gradient histogram and a higher-order derivative distribution; and performing format conversion on the basic image features according to the input data format of the scene classification model. The benefit is that the features that best reflect the contrast of the source image are input to the model, yielding an accurate classification result and improving the accuracy of contrast adjustment.
Fig. 3 is a schematic diagram of a feature combination provided in an embodiment of the present application. As shown in fig. 3, after obtaining the source image, the luminance histogram and the gradient histogram of the source image may be obtained, and then the normalization processing may be performed on the two histograms, and the obtained normalization processing results may be combined to obtain the feature combination. The feature combination is input data of the input scene classification model.
More specifically, the gradient histogram can be obtained as follows. The gradient map is calculated as:

G = |Gx| + |Gy|, Gx = I ⊗ Dx, Gy = I ⊗ Dy;

where G is the gradient map, I is the source image, ⊗ denotes the convolution operation, Gx is the horizontal-direction difference image, Gy is the vertical-direction difference image, and Dx and Dy are the corresponding difference kernels.
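As an illustration, the gradient computation above can be sketched as follows; the difference kernels are not specified in the text, so simple forward differences and an L1 combination of the two difference images are assumed here.

```python
import numpy as np

def gradient_map(image):
    """Combine horizontal and vertical difference images into a gradient map.

    A minimal sketch under assumed forward-difference kernels; the patent
    does not fix the kernels or the combination rule.
    """
    img = np.asarray(image, dtype=np.float32)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, :-1] = img[:, 1:] - img[:, :-1]   # horizontal-direction difference image Gx
    gy[:-1, :] = img[1:, :] - img[:-1, :]   # vertical-direction difference image Gy
    return np.abs(gx) + np.abs(gy)          # L1 combination assumed for the gradient map G
```

A constant image yields an all-zero gradient map, while a horizontal ramp produces gradients only in the horizontal direction.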
After the brightness and gradient data are obtained, histogram statistics and normalization are performed on both. This can be done as follows:

p(k) = n_k / MN;

d(k) = m_k / MN;

where MN is the total number of pixels, n_k is the number of pixels in the image with gray level k, p(k) is the normalized value of the k-th gray level, m_k is the number of pixels in the gradient image with value k, and d(k) is the normalized value of the k-th gradient level, with k ∈ [0, 127].
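A minimal sketch of the histogram statistics and normalization above, assuming 8-bit pixel values folded into the 128 bins implied by k ∈ [0, 127] (the exact binning is not specified in the text):

```python
import numpy as np

def normalized_histograms(image, grad, bins=128):
    """Compute p(k) = n_k / MN and d(k) = m_k / MN for k in [0, 127].

    Assumes values in [0, 256) mapped onto 128 equal-width bins.
    """
    mn = image.size                                         # MN: total pixel count
    n_k, _ = np.histogram(image, bins=bins, range=(0, 256)) # n_k counts per gray bin
    m_k, _ = np.histogram(grad, bins=bins, range=(0, 256))  # m_k counts per gradient bin
    return n_k / mn, m_k / mn                               # p(k), d(k)
```

Each normalized histogram sums to 1 when all values fall inside the binned range.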
The normalized luminance distribution information and gradient distribution are combined into a format suitable for the model input.
This is because different models require different input data formats, so the features must be adjusted to the appropriate dimensions before being input to the model. For example, for a model whose input feature dimensions are 16 × 16, the features are first combined into a one-dimensional vector:
x=[p(0),p(1)…p(127),d(0),d(1)…d(127)];
the 256 feature values are then rearranged into a 16 × 16 feature matrix X;
X is then the extracted feature that characterizes the source scene image information.
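The feature-combination step can be sketched as follows; a plain row-major arrangement of the 256 values is assumed here, since the exact ordering is not fixed by the text:

```python
import numpy as np

def feature_matrix(p, d, side=16):
    """Concatenate the two 128-bin histograms into
    x = [p(0)..p(127), d(0)..d(127)] and arrange the 256 values into a
    16 x 16 feature matrix X (row-major order assumed)."""
    x = np.concatenate([p, d])            # 256-element feature vector
    if x.size != side * side:
        raise ValueError("expected %d feature values" % (side * side))
    return x.reshape(side, side)          # feature matrix X
```

The resulting matrix preserves every histogram value, only changing its layout to match the model's input dimensions.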
S220, inputting the image characteristics of the source image into a scene classification model, and determining the scene category of the source image according to the output result of the scene classification model.
The scene classification model is a model used to classify the scene of the source image. It may be a manually determined model, for example one in which images matching certain characteristics are assigned to a certain scene, or it may be obtained with a machine learning algorithm, for example by supervised training: images belonging to several specific scenes are input to the algorithm, and the parameters are adjusted until the scene output for each image is consistent with its actual scene. In this embodiment, the scene classification model may be obtained by pre-training. After the image features of the source image are obtained, they are input into the classification model, which outputs the scene category of the source image.
In this embodiment, optionally, the scene category of the source image includes at least one of the following: a low brightness low contrast image, a medium brightness low contrast image, a high brightness low contrast image, and a high contrast image. The advantage of dividing the scene categories of the image into the four categories is that the four scene categories are common scene categories for adjusting the contrast of the image, and can be adjusted in a corresponding manner for each scene category to improve the effect of adjusting the contrast of the image.
The output result may include the scene category to which the image belongs, and may further include other information, for example the probability of each scene category. Illustratively, the scene categories may be represented in coded form: a low-brightness low-contrast image, a medium-brightness low-contrast image, a high-brightness low-contrast image and a high-contrast image may be coded as 01, 02, 03 and 04, respectively. After the image features of the source image are input into the scene classification model, the scene categories and their probabilities can be obtained, for example, 01, 0.8; 02, 0.2. This output means the source image is a low-brightness low-contrast image with probability 0.8 and a medium-brightness low-contrast image with probability 0.2.
In this embodiment, optionally, the output result includes a candidate scene category number and a probability value corresponding to the candidate scene category number; correspondingly, the determining the scene classification of the source image according to the output result of the scene classification model comprises: and determining the candidate scene category with the maximum probability value corresponding to the candidate scene category number as the scene category of the source image. As in the above example, in the case that the probability of 01 is 0.8 and the probability of 02 is 0.2, the scene type of the source image can be determined to be 01, i.e. a low-brightness low-contrast image. The technical scheme has the advantages that the scene type of the image can be calculated more accurately in a probability mode, and the calculation accuracy in the process of the scene type of the source image is improved.
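Picking the candidate category with the maximum probability value can be sketched as follows; the dictionary output format is an assumption for illustration, not the model's actual interface:

```python
def pick_scene(output):
    """Return the candidate scene-category number with the highest
    probability value, e.g. {'01': 0.8, '02': 0.2} -> '01'."""
    return max(output, key=output.get)
```

With the example output above, category '01' (low-brightness low-contrast) is selected as the scene category of the source image.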
S230, enhancing the contrast of the source image according to the scene category of the source image and the mapping relation between the candidate scene category and the candidate contrast enhancement algorithm.
Wherein the scene classification of the source image may be determined based on an output of the scene classification model. Furthermore, the algorithm corresponding to the scene category of the source image can be selected to carry out contrast enhancement processing on the source image according to the mapping relation between each candidate scene category and the candidate contrast enhancement algorithm.
The mapping relationship between each candidate scene category and a candidate contrast enhancement algorithm may be obtained by statistical analysis of previously processed images, associating a contrast enhancement algorithm with each candidate scene category. For example, the AGCWD algorithm may be used for low-brightness low-contrast images, the BBHE algorithm for medium-brightness low-contrast images, a power-law transform for high-brightness low-contrast images, and a two-dimensional gamma mapping algorithm for high-contrast images. These are exemplary algorithms; more algorithms may be mapped to each scene category, and which algorithm is actually used for the contrast enhancement operation may be determined from other parameters of the image, for example its average gray value or its gradient distribution in the horizontal or vertical direction. The advantage of this embodiment is that it avoids the influence of subjective manual selection on contrast enhancement, and so improves the accuracy of the image contrast enhancement.
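The mapping relationship can be sketched as a simple lookup table; the category keys and the use of algorithm names as values are hypothetical placeholders standing in for the real enhancement implementations:

```python
# Hypothetical scene-to-algorithm table following the example algorithms
# named in the text; real code would map to callable implementations.
SCENE_TO_ALGORITHM = {
    "low_brightness_low_contrast": "AGCWD",
    "medium_brightness_low_contrast": "BBHE",
    "high_brightness_low_contrast": "power-law transform",
    "high_contrast": "two-dimensional gamma mapping",
}

def select_algorithm(scene_category):
    """Look up the target contrast enhancement algorithm for a scene category."""
    return SCENE_TO_ALGORITHM[scene_category]
```

Extending the table (or keying it on additional image parameters such as average gray value) changes the selection without touching the rest of the pipeline.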
According to the technical scheme provided by the embodiment of the application, the image features of the source image are determined; the image features of the source image are input into a scene classification model, and the scene category of the source image is determined according to the output result of the scene classification model; and the contrast of the source image is enhanced according to the scene category of the source image and the mapping relationship between candidate scene categories and candidate contrast enhancement algorithms. With this technical scheme, the contrast of an image can be enhanced accurately for each type of poor-contrast scene.
On the basis of the foregoing technical solutions, preferably, enhancing the contrast of the source image according to the scene category of the source image and the mapping relationship between the candidate scene categories and the candidate contrast enhancement algorithms includes: determining a target contrast enhancement algorithm for the source image according to the scene category of the source image and the mapping relationship between the candidate scene categories and the candidate contrast enhancement algorithms; performing preliminary enhancement processing on the contrast of the source image with the target contrast enhancement algorithm; and adjusting the result of the preliminary enhancement processing according to the probability value of the scene category to obtain the final enhancement processing result. In this technical solution, the target contrast enhancement algorithm of the source image is determined through the scene category and the mapping relationship; in addition, the probability of the scene category output by the scene classification model is introduced as a weight on the enhancement result, so the contrast enhancement of the source image can be controlled more accurately.
On the basis of the above technical solution, preferably, the adjusting the result of the preliminary enhancement processing according to the probability value of the scene category to obtain a final enhancement processing result includes:
calculating the final enhancement processing result by the following formula:
Out=S·O(x,y)+(1-S)·I(x,y);
wherein Out is the final enhancement processing result, O (x, y) is the preliminary enhancement processing result, S is the probability value of the scene category, and I (x, y) is the source image.
By adopting the above formula, the contrast of the source image can be adjusted more accurately and reasonably: controlling the weight that each scene occupies in the contrast adjustment of the source image improves the accuracy of the contrast enhancement process.
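As a sketch of the blending formula above (the array shapes, the 8-bit grayscale assumption, and the defensive clamp on S are our additions, not from the patent), in NumPy:

```python
import numpy as np

def blend_enhancement(source, enhanced, s):
    """Out = S * O(x, y) + (1 - S) * I(x, y).

    source   -- I(x, y), the source image (8-bit grayscale assumed)
    enhanced -- O(x, y), the preliminary enhancement result
    s        -- S, the probability value of the scene category
    """
    s = float(np.clip(s, 0.0, 1.0))  # defensive clamp; an added assumption
    out = s * enhanced.astype(np.float64) + (1.0 - s) * source.astype(np.float64)
    return np.clip(out, 0, 255).astype(np.uint8)

# With S = 1 the output is the fully enhanced image; with S = 0, the source.
src = np.full((2, 2), 100, dtype=np.uint8)
enh = np.full((2, 2), 200, dtype=np.uint8)
```

A higher scene-category probability S thus pushes the output toward the enhanced image, while a low-confidence classification leaves the source largely untouched.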
Example two
In order to enable a person skilled in the art to more accurately understand the technical solutions provided in the present application, the present application also provides a preferred embodiment.
The method first performs scene classification on the input image: according to the image feature information extracted from the source image, the scene is classified into one of four categories, namely low-brightness low-contrast, medium-brightness low-contrast, high-brightness low-contrast, and high-contrast, and scene parameters are output.
Second, combining the scene parameters, the adjusting algorithm best suited to images of that scene class is selected, the contrast enhancement operation is carried out, and the processed image is output. The system comprises three modules: a scene classification module, a parameter control module, and an algorithm validation module.
The scene classification module identifies scene features of the input source scene images for guiding a subsequent contrast enhancement processing algorithm. The method specifically comprises two parts of image feature extraction and scene discrimination. The input is a source scene image and the output is a scene category.
Image feature extraction: the module is responsible for extracting image features with high correlation with image contrast so as to facilitate scene discrimination. The input is a source scene image and the output is a contrast characteristic.
The extracted image feature may be the source scene image itself, or may be obtained by extracting and processing specific information of the source scene image, including but not limited to the luminance histogram, the gradient distribution, and the higher-order derivative distribution.
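As an illustrative sketch of such feature extraction (the bin count, normalization, and the use of gradient magnitude as the second cue are our choices, not taken from the patent):

```python
import numpy as np

def contrast_features(image, bins=16):
    """Return a luminance histogram plus a gradient-magnitude histogram.

    Both histograms are normalized to sum to 1; the bin count is an
    illustrative choice. Input is an 8-bit grayscale image.
    """
    img = image.astype(np.float64)
    # Luminance histogram over the 8-bit range.
    lum_hist, _ = np.histogram(img, bins=bins, range=(0.0, 256.0))
    lum_hist = lum_hist / img.size
    # Gradient-magnitude distribution as a rough contrast cue.
    gy, gx = np.gradient(img)
    grad = np.hypot(gx, gy)
    grad_hist, _ = np.histogram(grad, bins=bins, range=(0.0, grad.max() + 1e-9))
    grad_hist = grad_hist / grad.size
    return np.concatenate([lum_hist, grad_hist])
```

The concatenated vector would then be format-converted to whatever input shape the scene classification model expects.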
And (3) scene discrimination: the module is responsible for carrying out scene discrimination on the features output by the feature extraction module, the input is the extracted scene image features, and the output is the discrimination probability of each scene.
The present application uses a deep learning model for scene discrimination: based on a large amount of training data, it extracts high-level abstract features of the scene and achieves better scene discrimination accuracy. Usable deep learning models include the multilayer perceptron, LeNet-5, MobileNet, ResNet, and the like.
An embodiment of the present invention is provided below. Fig. 4 is a schematic diagram of the framework of the scene classification model provided in the second embodiment of the present application. The scene classification model is composed of two convolution structures and two fully connected layers, where each convolution structure includes a convolutional layer, an activation layer, and a pooling layer. The convolutional layer mainly extracts local features; the convolution kernel size is 3 × 3 and the stride is 1. The activation layer adds nonlinear factors to the model to enhance its fitting capability; specifically, Softmax is used on the output layer and the ReLU activation function on the other layers. The pooling layer mainly performs data dimension reduction; maximum pooling is used with a stride of 2. The fully connected layers recombine the local features extracted by the convolutional layers; the number of neurons in the last fully connected layer is 4, indicating that the model classifies the data into 4 classes.
The model is trained on a data set with known scene information, with parameter learning performed by stochastic gradient descent (SGD). Scene discrimination with the trained model then proceeds as follows:
1) For an input feature X of size 16 × 16, the convolution part computes the i-th local feature as:
A_i = Max_pool(ReLU(Conv(X, W_i) + b_i));
where W_i denotes the i-th convolution kernel of the convolutional layer, b_i the bias corresponding to the i-th kernel, and A_i becomes the i-th input matrix of the next layer; Conv denotes the convolution operation, ReLU the activation function, and Max_pool maximum pooling. For the first convolutional layer, i ∈ [1, 4]; for the second convolutional layer, i ∈ [1, 8]. Here X is the extracted feature characterizing the source scene image information.
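A hedged NumPy sketch of this convolution block for a single kernel. The patent does not state the padding scheme, so 'valid' (no-padding) convolution is assumed here; a 16 × 16 input then shrinks to 14 × 14 after the 3 × 3 convolution and to 7 × 7 after 2 × 2 max pooling:

```python
import numpy as np

def conv2d_valid(x, w):
    """2-D 'valid' convolution, implemented as cross-correlation (CNN convention)."""
    kh, kw = w.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for r in range(oh):
        for c in range(ow):
            out[r, c] = np.sum(x[r:r + kh, c:c + kw] * w)
    return out

def max_pool(x, size=2, stride=2):
    """Non-overlapping max pooling with stride 2, as in the text."""
    oh, ow = x.shape[0] // stride, x.shape[1] // stride
    out = np.empty((oh, ow))
    for r in range(oh):
        for c in range(ow):
            out[r, c] = x[r * stride:r * stride + size,
                          c * stride:c * stride + size].max()
    return out

def conv_block(x, w_i, b_i):
    """A_i = Max_pool(ReLU(Conv(X, W_i) + b_i)) for a single 3x3 kernel."""
    return max_pool(np.maximum(conv2d_valid(x, w_i) + b_i, 0.0))
```

In the model described above, this block would be evaluated for i ∈ [1, 4] in the first convolution structure and i ∈ [1, 8] in the second.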
2) The j-th output neuron of a fully connected layer is computed as:
z_j = Σ_i (w_i · a_i) + b_j;
a_j = Activate(z_j);
where a_i denotes the i-th element of the input feature vector from the previous layer, w_i the fully connected weight of this neuron for that element, b_j the bias corresponding to the j-th output neuron, and Activate the activation function. The hidden layers use the ReLU activation function, while the output layer uses Softmax:
K_j = softmax(z_j);
The output layer thus yields four probability values K_1, K_2, K_3, K_4 ∈ [0, 1], each representing a scene category.
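An illustrative sketch of the fully connected output layer and its softmax (all sizes, the function names, and the random weights are toy values of ours, not from the patent):

```python
import numpy as np

def fc_layer(a, W, b):
    """z_j = sum_i w_ji * a_i + b_j, computed for all output neurons at once."""
    return W @ a + b

def softmax(z):
    """Numerically stable softmax over the output layer's pre-activations."""
    e = np.exp(z - np.max(z))
    return e / e.sum()

# Toy sizes: 8 input features, 4 output neurons (one per scene category).
rng = np.random.default_rng(0)
a = rng.standard_normal(8)
W = rng.standard_normal((4, 8))
b = rng.standard_normal(4)
K = softmax(fc_layer(a, W, b))  # four probabilities K_1..K_4 summing to 1
```

Whatever the weights, the softmax output is a valid probability distribution over the four scene categories, which is what the parameter control module consumes next.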
The function of the parameter control module is to give the scene category that best represents the current image, based on the four scene probabilities obtained from the previous module. The input is the four scene probability values; the output is the scene category and an adjusting parameter.
The parameter control module realizes the establishment of a mapping relation between the classification result of the scene classification module and a subsequent contrast enhancement processing algorithm, finds a processing algorithm which is most suitable for the image, and controls the intensity of contrast enhancement. The specific embodiment is as follows:
p = argmax_j(K_j);
where the adjusting parameter is:
S = K_p;
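A minimal sketch of this parameter control step (the function name is ours; category numbering follows the 1-based convention used in the text):

```python
import numpy as np

def parameter_control(k):
    """Return the scene category p = argmax_j K_j (numbered 1..4) and S = K_p."""
    k = np.asarray(k, dtype=np.float64)
    p = int(np.argmax(k)) + 1  # categories are 1-based in the text
    return p, float(k[p - 1])
```

The returned pair (p, S) selects the contrast enhancement algorithm and later weights the blend between the enhanced and source images.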
the algorithm validation module is responsible for guiding the selection of the contrast enhancement algorithm according to the parameters output by the scene judgment module and adaptively enhancing the contrast of the scene. The input is a source scene image, a scene category and an adjusting parameter, and the output is an image with enhanced contrast.
The scene category p corresponds to the scene type as follows. p = 1 indicates that the source scene image is a low-brightness low-contrast scene: a large number of pixels are concentrated in the low gray-level range, the image is dark overall, the difference between brightness levels is small, and detail information cannot be seen clearly. The processing algorithm for this scene generally focuses on brightening dark pixel values; as the dark pixel values are brightened, the image brightness increases, the dynamic range of the low-brightness region is enlarged, and originally invisible detail information is revealed, so that the dynamic range of the processed image is stretched and the overall contrast of the image is enhanced. Usable algorithms include the AGCWD algorithm, the WTHE algorithm, logarithmic mapping, and the like.
p = 2 indicates that the source scene image is a medium-brightness low-contrast scene: a large number of pixels are concentrated in the middle gray-level range, the difference between brightness levels is small, and the image has poor clarity and a strong hazy appearance. The processing algorithm for this scene generally makes originally darker pixels darker and brighter pixels brighter, enlarging the dynamic range of the middle-brightness region, so that the dynamic range of the processed image is stretched and the overall contrast is improved. Usable algorithms include the brightness-preserving bi-histogram equalization (BBHE) algorithm, sigmoid function transformation, hyperbolic tangent curve mapping, and the like.
p = 3 indicates that the source scene image is a high-brightness low-contrast scene: a large number of pixels are concentrated in the high gray-level range, the image is bright overall, the difference between brightness levels is small, and the image quality is poor. The processing algorithm for this scene generally reduces the darker pixel values in the bright region while keeping bright pixel values unchanged or only slightly reduced, lowering the image brightness and enlarging the dynamic range of the high-brightness region, so that the dynamic range of the processed image is stretched and the image contrast is enhanced. Usable algorithms include power transforms, inverse logarithmic transforms, piecewise linear transforms, and the like.
p = 4 indicates that the source scene image is a high-contrast scene: a large number of pixels are concentrated at both the low and high gray levels, the overall contrast of the image is too high, large bright and dark areas appear, the difference between brightness levels within the bright and dark local areas is small, and the image quality is poor. The processing algorithm for this scene generally reduces the darker pixel values in the bright region while keeping bright pixel values unchanged as far as possible, and at the same time increases the brighter pixel values in the dark region while keeping dark pixels unchanged as far as possible, enlarging the dynamic range of both the bright and dark regions and improving the image contrast. Usable algorithms include the CLAHE algorithm, two-dimensional gamma mapping, inverse hyperbolic tangent curve mapping, and the like.
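The scene-to-algorithm mapping described above can be sketched as a dispatch table. The simple point operations below are illustrative placeholders standing in for the named algorithms (AGCWD, BBHE, CLAHE, two-dimensional gamma mapping, and so on), not implementations of them:

```python
import numpy as np

def log_map(img):      # placeholder brightening curve for p = 1 (dark, low contrast)
    return 255.0 * np.log1p(img.astype(np.float64)) / np.log(256.0)

def sigmoid_map(img):  # placeholder S-curve around mid gray for p = 2
    return 255.0 / (1.0 + np.exp(-(img.astype(np.float64) - 127.5) / 32.0))

def power_map(img):    # placeholder bright-region compression for p = 3
    return 255.0 * (img.astype(np.float64) / 255.0) ** 2.0

def identity_map(img): # placeholder for the two-dimensional gamma mapping, p = 4
    return img.astype(np.float64)

SCENE_TO_ALGORITHM = {1: log_map, 2: sigmoid_map, 3: power_map, 4: identity_map}

def enhance(img, p):
    """Dispatch to the algorithm mapped to scene category p and re-quantize."""
    out = SCENE_TO_ALGORITHM[p](img)
    return np.clip(out, 0, 255).astype(np.uint8)
```

The point of the table is the mapping relation itself: swapping in a real AGCWD or CLAHE implementation only changes the dictionary values, not the control flow.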
A specific embodiment is provided below, taking scene four (p = 4) as an example, in which the source scene image is subjected to contrast enhancement processing using a two-dimensional gamma transform function:
γ(x, y) = 2^(2·Ig(x, y) - 1);
where Ig(x, y) is the brightness-normalized image after Gaussian smoothing, γ(x, y) is the adaptive gamma parameter, I(x, y) is the input source scene image, and O(x, y) is the image after preliminary contrast enhancement, obtained by applying γ(x, y) as the gamma exponent to the normalized I(x, y).
Using the adjustment parameter S, the intensity of image contrast enhancement is adaptively controlled:
Out=S·O(x,y)+(1-S)·I(x,y);
Finally, the image Out after the contrast enhancement processing is output.
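Under the assumption that O(x, y) is produced by applying γ(x, y) as a per-pixel exponent to the normalized input (the standard gamma mapping; the patent states the γ formula but not this application step explicitly), and with our own choice of Gaussian kernel size, the scene-four pipeline might be sketched as:

```python
import numpy as np

def gaussian_blur(img, sigma=3.0, radius=4):
    """Separable Gaussian smoothing with edge padding (kernel size is our choice)."""
    ax = np.arange(-radius, radius + 1, dtype=np.float64)
    k = np.exp(-ax ** 2 / (2.0 * sigma ** 2))
    k /= k.sum()
    padded = np.pad(img, radius, mode='edge')
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode='valid'), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode='valid'), 0, tmp)

def two_d_gamma(img, s):
    """Scene-four pipeline: gamma(x, y) = 2**(2 * Ig(x, y) - 1), then blend with S."""
    i_norm = img.astype(np.float64) / 255.0     # brightness normalization
    ig = gaussian_blur(i_norm)                  # Ig(x, y)
    gamma = 2.0 ** (2.0 * ig - 1.0)             # adaptive gamma parameter
    o = i_norm ** gamma                         # preliminary enhancement O(x, y)
    out = s * o + (1.0 - s) * i_norm            # Out = S*O + (1-S)*I
    return np.clip(np.round(out * 255.0), 0, 255).astype(np.uint8)
```

Because γ(x, y) falls below 1 where the smoothed brightness is low and rises above 1 where it is high, dark regions are lifted and bright regions compressed, which matches the scene-four behaviour described above.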
The invention provides a method for enhancing image contrast that adaptively adjusts the contrast according to scene information. It trains a deep learning model to classify scenes by combining information such as the luminance histogram and higher-order derivatives, so that scenes with poor contrast can be classified accurately; and it guides the subsequent contrast enhancement processing according to the contrast classification parameters, so that the parameters can be adjusted adaptively and a better contrast enhancement effect is achieved.
Example three
Fig. 5 is a schematic structural diagram of an image contrast enhancement apparatus according to a third embodiment of the present application. As shown in fig. 5, the apparatus for enhancing image contrast includes:
an image feature determination module 510 for determining image features of a source image;
a scene classification module 520, configured to input the image characteristics of the source image into a scene classification model, and determine a scene category of the source image according to an output result of the scene classification model;
and the contrast enhancement processing module 530 is configured to perform enhancement processing on the contrast of the source image according to the scene category of the source image and the mapping relationship between the candidate scene category and the candidate contrast enhancement algorithm.
According to the technical scheme provided by the embodiment of the application, the image characteristics of the source image are determined; the image characteristics of the source image are input into a scene classification model, and the scene category of the source image is determined according to the output result of the scene classification model; and the contrast of the source image is enhanced according to the scene category of the source image and the mapping relation between the candidate scene category and the candidate contrast enhancement algorithm. By adopting the technical scheme provided by the application, accurate contrast enhancement can be achieved separately for each scene with poor contrast.
On the basis of the foregoing technical solutions, optionally, the image feature determining module includes:
a basic image feature extraction unit, configured to extract basic image features of a source image, where the basic image features include at least one of a luminance histogram, a gradient histogram, and a higher-order derivative distribution;
and the format conversion unit is used for carrying out format conversion on the basic image characteristics according to the input data format of the scene classification model.
On the basis of the foregoing technical solutions, optionally, the scene category of the source image includes at least one of the following: a low brightness low contrast image, a medium brightness low contrast image, a high brightness low contrast image, and a high contrast image.
On the basis of the above technical solutions, optionally, the output result includes a candidate scene category number and a probability value corresponding to the candidate scene category number;
correspondingly, the scene classification module comprises a scene category determination unit configured to:
and determining the candidate scene category with the maximum probability value corresponding to the candidate scene category number as the scene category of the source image.
On the basis of the above technical solutions, optionally, the contrast enhancement processing module includes:
the target algorithm determining unit is used for determining a target contrast enhancement algorithm of the source image according to the scene category of the source image and the mapping relation between the candidate scene category and the candidate contrast enhancement algorithm;
the primary enhancement processing unit is used for carrying out primary enhancement processing on the contrast of the source image by adopting a target contrast enhancement algorithm of the source image;
and the adjusting unit is used for adjusting the result of the primary enhancement processing according to the probability value of the scene category to obtain a final enhancement processing result.
On the basis of the above technical solutions, optionally, the adjusting unit is specifically configured to:
calculating the final enhancement processing result by the following formula:
Out=S·O(x,y)+(1-S)·I(x,y);
wherein Out is the final enhancement processing result, O (x, y) is the preliminary enhancement processing result, S is the probability value of the scene category, and I (x, y) is the source image.
The above product can execute the method provided by any embodiment of the present application, and has the corresponding functional modules and beneficial effects for executing the method.
Example four
Embodiments of the present application also provide a storage medium containing computer-executable instructions, which when executed by a computer processor, perform a method of image contrast enhancement, the method comprising:
determining image characteristics of a source image;
inputting the image characteristics of the source image into a scene classification model, and determining the scene category of the source image according to the output result of the scene classification model;
and enhancing the contrast of the source image according to the scene category of the source image and the mapping relation between the candidate scene category and the candidate contrast enhancement algorithm.
Storage medium - any of various types of memory devices or storage devices. The term "storage medium" is intended to include: mounting media such as CD-ROM, floppy disk, or tape devices; computer system memory or random access memory such as DRAM, DDR RAM, SRAM, EDO RAM, Rambus RAM, etc.; non-volatile memory such as flash memory, magnetic media (e.g., hard disk), or optical storage; registers or other similar types of memory elements, etc. The storage medium may also include other types of memory or combinations thereof. In addition, the storage medium may be located in the computer system in which the program is executed, or may be located in a different second computer system connected to the computer system through a network (such as the internet). The second computer system may provide the program instructions to the computer for execution. The term "storage medium" may include two or more storage media that may reside in different locations, such as in different computer systems that are connected by a network. The storage medium may store program instructions (e.g., embodied as a computer program) that are executable by one or more processors.
Of course, the storage medium provided in this embodiment of the present application contains computer-executable instructions, and the computer-executable instructions are not limited to the operation of enhancing image contrast as described above, and may also perform related operations in the method for enhancing image contrast provided in any embodiment of the present application.
Example five
The embodiment of the application provides electronic equipment, and the electronic equipment can be integrated with the image contrast enhancement device provided by the embodiment of the application. Fig. 6 is a schematic structural diagram of an electronic device according to a fifth embodiment of the present application. As shown in fig. 6, the present embodiment provides an electronic device 600, which includes: one or more processors 620; the storage 610 is configured to store one or more programs, and when the one or more programs are executed by the one or more processors 620, the one or more processors 620 are enabled to implement the method for enhancing image contrast provided in the embodiment of the present application, the method includes:
determining image characteristics of a source image;
inputting the image characteristics of the source image into a scene classification model, and determining the scene category of the source image according to the output result of the scene classification model;
and enhancing the contrast of the source image according to the scene category of the source image and the mapping relation between the candidate scene category and the candidate contrast enhancement algorithm.
Of course, those skilled in the art will appreciate that the processor 620 may also implement the technical solution of the method for enhancing image contrast provided in any of the embodiments of the present application.
The electronic device 600 shown in fig. 6 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
As shown in fig. 6, the electronic device 600 includes a processor 620, a storage device 610, an input device 630, and an output device 640; the number of the processors 620 in the electronic device may be one or more, and one processor 620 is taken as an example in fig. 6; the processor 620, the storage device 610, the input device 630, and the output device 640 in the electronic apparatus may be connected by a bus or other means, and are exemplified by being connected by a bus 650 in fig. 6.
The storage device 610 is a computer-readable storage medium, and can be used to store software programs, computer-executable programs, and module units, such as program instructions corresponding to the image contrast enhancement method in the embodiments of the present application.
The storage device 610 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to the use of the terminal, and the like. In addition, the storage 610 may include high speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some examples, the storage 610 may further include memory located remotely from the processor 620, which may be connected via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input means 630 may be used to receive input numbers, character information, or voice information, and to generate key signal inputs related to user settings and function control of the electronic device. The output device 640 may include a display screen, speakers, etc.
The electronic equipment provided by the embodiment of the application can accurately enhance image contrast for each of the different scenes with poor contrast.
The image contrast enhancement device, the storage medium and the electronic device provided in the above embodiments may execute the image contrast enhancement method provided in any embodiment of the present application, and have corresponding functional modules and beneficial effects for executing the method. Technical details not described in detail in the above embodiments may be referred to a method for enhancing image contrast provided in any embodiment of the present application.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present application and the technical principles employed. It will be understood by those skilled in the art that the present application is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the application. Therefore, although the present application has been described in more detail with reference to the above embodiments, the present application is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present application, and the scope of the present application is determined by the scope of the appended claims.
Claims (10)
1. A method for enhancing image contrast, comprising:
determining image characteristics of a source image;
inputting the image characteristics of the source image into a scene classification model, and determining the scene category of the source image according to the output result of the scene classification model;
and enhancing the contrast of the source image according to the scene category of the source image and the mapping relation between the candidate scene category and the candidate contrast enhancement algorithm.
2. The method of claim 1, wherein determining image features of a source image comprises:
extracting basic image features of a source image, wherein the basic image features comprise at least one of a brightness histogram, a gradient histogram and a high-order derivative distribution;
and carrying out format conversion on the basic image characteristics according to the input data format of the scene classification model.
3. The method of claim 1, wherein the scene category of the source image comprises at least one of: a low brightness low contrast image, a medium brightness low contrast image, a high brightness low contrast image, and a high contrast image.
4. The method of claim 1, wherein the output result comprises a candidate scene category number and a probability value corresponding to the candidate scene category number;
correspondingly, the determining the scene category of the source image according to the output result of the scene classification model comprises:
and determining the candidate scene category with the maximum probability value corresponding to the candidate scene category number as the scene category of the source image.
5. The method as claimed in claim 4, wherein the enhancing the contrast of the source image according to the scene category of the source image and the mapping relationship between the candidate scene category and the candidate contrast enhancement algorithm comprises:
determining a target contrast enhancement algorithm of the source image according to the scene category of the source image and the mapping relation between the candidate scene category and the candidate contrast enhancement algorithm;
performing primary enhancement processing on the contrast of the source image by adopting a target contrast enhancement algorithm of the source image;
and adjusting the result of the primary enhancement processing according to the probability value of the scene category to obtain a final enhancement processing result.
6. The method of claim 5, wherein adjusting the result of the preliminary enhancement processing according to the probability value of the scene category to obtain a final enhancement processing result comprises:
calculating the final enhancement processing result by the following formula:
Out=S·O(x,y)+(1-S)·I(x,y);
wherein Out is the final enhancement processing result, O (x, y) is the preliminary enhancement processing result, S is the probability value of the scene category, and I (x, y) is the source image.
7. An apparatus for enhancing image contrast, comprising:
the image characteristic determining module is used for determining the image characteristics of the source image;
the scene classification module is used for inputting the image characteristics of the source image into a scene classification model and determining the scene category of the source image according to the output result of the scene classification model;
and the contrast enhancement processing module is used for enhancing the contrast of the source image according to the scene category of the source image and the mapping relation between the candidate scene category and the candidate contrast enhancement algorithm.
8. The apparatus of claim 7, wherein the image feature determination module comprises:
a basic image feature extraction unit, configured to extract basic image features of a source image, where the basic image features include at least one of a luminance histogram, a gradient histogram, and a higher-order derivative distribution;
and the format conversion unit is used for carrying out format conversion on the basic image characteristics according to the input data format of the scene classification model.
9. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the method of image contrast enhancement according to any one of claims 1 to 6.
10. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the method of image contrast enhancement as claimed in any one of claims 1 to 6 when executing the computer program.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910526097.2A CN112102175B (en) | 2019-06-18 | 2019-06-18 | Image contrast enhancement method and device, storage medium and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112102175A true CN112102175A (en) | 2020-12-18 |
CN112102175B CN112102175B (en) | 2024-03-26 |
Family
ID=73748680
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910526097.2A Active CN112102175B (en) | 2019-06-18 | 2019-06-18 | Image contrast enhancement method and device, storage medium and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112102175B (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112598595A (en) * | 2020-12-25 | 2021-04-02 | 北京环境特性研究所 | High-dynamic digital image display enhancement method and system based on gamma correction |
CN116777795A (en) * | 2023-08-21 | 2023-09-19 | 江苏游隼微电子有限公司 | Luminance mapping method suitable for vehicle-mounted image |
CN118506743A (en) * | 2024-07-15 | 2024-08-16 | 深圳创维显示技术有限公司 | Brightness adjusting method and device for display equipment |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120250988A1 (en) * | 2011-03-29 | 2012-10-04 | Ya-Ti Peng | Adaptive contrast adjustment techniques |
CN104240194A (en) * | 2014-04-29 | 2014-12-24 | 西南科技大学 | Low-light-level image enhancement algorithm based on parabolic function |
CN106548463A (en) * | 2016-10-28 | 2017-03-29 | 大连理工大学 | Based on dark and the sea fog image automatic defogging method and system of Retinex |
CN109685746A (en) * | 2019-01-04 | 2019-04-26 | Oppo广东移动通信有限公司 | Brightness of image method of adjustment, device, storage medium and terminal |
Also Published As
Publication number | Publication date |
---|---|
CN112102175B (en) | 2024-03-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110675328B (en) | Low-illumination image enhancement method and device based on a conditional generative adversarial network | |
CN110610463A (en) | Image enhancement method and device | |
US20230080693A1 (en) | Image processing method, electronic device and readable storage medium | |
CN111292264A (en) | Image high dynamic range reconstruction method based on deep learning | |
CN112102175B (en) | Image contrast enhancement method and device, storage medium and electronic equipment | |
CN111047543A (en) | Image enhancement method, device and storage medium | |
CN113592776A (en) | Image processing method and device, electronic device and storage medium | |
CN105046202A (en) | Adaptive illumination processing method for face recognition | |
Jeon et al. | Low-light image enhancement using inverted image normalized by atmospheric light | |
CN116580305A (en) | Tea bud detection method based on deep learning and model building method thereof | |
CN115731597A (en) | Automatic segmentation and restoration management platform and method for face mask images | |
CN112102348A (en) | Image processing apparatus | |
CN115187954A (en) | Image processing-based traffic sign identification method in special scene | |
CN117314793B (en) | Building construction data acquisition method based on BIM model | |
CN117649694A (en) | Face detection method, system and device based on image enhancement | |
CN117456230A (en) | Data classification method, system and electronic equipment | |
Zhang et al. | A novel DenseNet generative adversarial network for heterogeneous low-light image enhancement | |
CN108564534A (en) | Retrieval-based image contrast adjustment method | |
CN111275642A (en) | Low-illumination image enhancement method based on significant foreground content | |
CN115457614B (en) | Image quality evaluation method, model training method and device | |
CN111539420B (en) | Panoramic image saliency prediction method and system based on attention perception features | |
CN116824419A (en) | Dressing feature recognition method, recognition model training method and device | |
CN118506407B (en) | Lightweight pedestrian re-identification method and system based on random color dropping and attention | |
KR102617391B1 (en) | Method for controlling image signal processor and control device for performing the same | |
CN113808055B (en) | Plant identification method, device and storage medium based on mixed expansion convolution |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||