CN115861119A - Rock slag image color cast correction method based on deep convolutional neural network - Google Patents

Rock slag image color cast correction method based on deep convolutional neural network

Info

Publication number
CN115861119A
Authority
CN
China
Prior art keywords
image
color
color cast
neural network
convolutional neural
Prior art date
Legal status
Pending
Application number
CN202211640068.7A
Other languages
Chinese (zh)
Inventor
王珩
张艾森
侯建勤
余芸
何斌
Current Assignee
Shanghai Institute of Process Automation Instrumentation
Original Assignee
Shanghai Institute of Process Automation Instrumentation
Priority date
Filing date: 2022-12-20
Publication date: 2023-03-28
Application filed by Shanghai Institute of Process Automation Instrumentation filed Critical Shanghai Institute of Process Automation Instrumentation
Priority to CN202211640068.7A
Publication of CN115861119A
Current legal status: Pending

Abstract

The invention provides a rock slag image color cast correction method based on a deep convolutional neural network, relating to the technical field of image correction. The method comprises the following steps: acquiring a plurality of standard images and corresponding color cast images; converting the standard images and the color cast images from the RGB color space to the CIELab color space; performing convolutional neural network training based on the CIELab color values of the standard images and the color cast images to establish a color cast correction model; and performing color cast correction on a new color cast rock slag image using the color cast correction model to output a corrected rock slag image. Converting the rock slag sample images from RGB space to CIELab space makes the method better suited to back-end rock slag image classification and detection; establishing the correction model with a convolutional neural network realizes color cast correction of rock slag images and obtains higher image quality at low cost.

Description

Rock slag image color cast correction method based on deep convolutional neural network
Technical Field
The invention relates to the technical field of image correction, in particular to a rock slag image color cast correction method based on a deep convolutional neural network.
Background
A full-face hard rock tunnel boring machine (TBM) is a high-end piece of tunnel construction equipment that integrates a guidance system, a boring system, a support system, a slag discharge system, and so on. Classifying the rock slag images from the slag discharge system makes it possible to infer geological conditions and judge the rock being excavated by the boring machine. In recent years, real-time classification of rock slag images has been realized through edge terminals installed on site. However, because tunneling equipment operates in a harsh environment with complex illumination, changes in the shooting scene or in lighting conditions during rock slag photography can cause color distortion in the images, and the reduced realism hampers back-end image classification and detection.
According to the RGB three-primary-color principle, color cast can be understood as a nonlinear response of the incident light in the camera's R, G, and B channels, which causes color distortion in the image. Common image color cast correction methods fall into several main types:
(1) Polynomial regression: correction is based on mapped colors; by acquiring the colors of a standard color chart, the color cast image captured under natural illumination is mapped and corrected.
(2) BP neural network: a BP neural network with an input layer, hidden layer, and output layer is trained on the color patch values of a standard color chart; RGB values extracted from the color chart image to be corrected serve as inputs, the standard chart values serve as supervision values, and after training and testing a corrected image is output.
(3) Support vector machine (SVM): the input space is transformed into a high-dimensional space through a nonlinear transformation, and the optimal linear classification surface is then sought in that high-dimensional space.
Although many color cast correction methods exist, few mature methods have actually been applied to real-time color cast processing of rock slag images in the tunneling field environment, and the existing correction algorithms have the following drawbacks. The RGB primary color space allows a color cast coefficient to be computed simply, but it is limited: when the difference between two colors is described by Euclidean distance, the computed difference cannot correctly represent the difference that people actually perceive. The gray balance method, which relies on the gray world assumption, judges color cast from the chromaticity distance to a neutral point, but it is unsuitable for images that are too bright, too dark, or dominated by a single color. The white balance method likewise lacks generality: when the photographed subject contains no white or highlight region, the color cast detection result is distorted.
Disclosure of Invention
In view of the defects of the prior art, the invention aims to provide a rock slag image color cast correction method based on a deep convolutional neural network to solve the problem of color cast correction of rock slag images.
In order to achieve the above purpose, the invention adopts the following technical scheme:
the invention provides a rock slag image color cast correction method based on a deep convolutional neural network, which comprises the following steps:
acquiring a plurality of standard images and corresponding color cast images, wherein the standard images represent rock slag images acquired under standard illumination light sources, and the color cast images represent rock slag images acquired under non-standard illumination;
converting the standard image and the color cast image from an RGB color space to a CIELab color space;
performing convolutional neural network training based on CIELab color values of the standard image and the color cast image to establish a color cast correction model;
and performing color cast correction on the new color cast rock slag image by adopting a color cast correction model to output a corrected rock slag image.
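For illustration only, the following Python sketch shows how these four steps could be strung together. The helper names (rgb_to_lab, lab_to_rgb, train_cnn, build_correction_model, correct_new_image) are hypothetical, not part of the disclosure; the conversion and network details are sketched later in the description and are passed in here as parameters.

```python
# Illustrative sketch of the four-step method; all names are hypothetical.

def build_correction_model(standard_images, cast_images, rgb_to_lab, train_cnn):
    """Steps 1-3: convert paired images to CIELab and train the correction CNN."""
    lab_cast = [rgb_to_lab(img) for img in cast_images]          # network input
    lab_standard = [rgb_to_lab(img) for img in standard_images]  # supervision target
    return train_cnn(lab_cast, lab_standard)

def correct_new_image(model, cast_image, rgb_to_lab, lab_to_rgb):
    """Step 4: correct a new color-cast rock slag image."""
    lab = rgb_to_lab(cast_image)        # RGB -> CIELab
    lab_corrected = model(lab)          # apply the trained correction model
    return lab_to_rgb(lab_corrected)    # CIELab -> RGB corrected output
```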
Optionally, the converting the standard image and the color cast image from an RGB color space to a CIELab color space includes:
the standard image and the color cast image are converted from the RGB color space to the XYZ space using the following equation,
[X Y Z]ᵀ = M · [R G B]ᵀ (M being the 3 × 3 RGB-to-XYZ conversion matrix)
wherein, R, G and B represent the tristimulus values of the image, and X, Y and Z represent the values calculated after the image is converted from RGB color space to XYZ space;
the standard image and the color cast image are then converted from XYZ space to CIELab color space using the following equations,
L = 116·f(Y/Yn) − 16
a = 500·[f(X/Xn) − f(Y/Yn)]
b = 200·[f(Y/Yn) − f(Z/Zn)]
where L, a, b represent the CIELab color values of the image, Xn, Yn, Zn represent the tristimulus values of the preset reference white point, and the function f is defined as follows:
f(t) = t^(1/3),                     t > (6/29)^3
f(t) = (1/3)·(29/6)^2·t + 4/29,     otherwise
Optionally, Xn, Yn, Zn are 95.047, 100.0, and 108.883, respectively.
Optionally, performing convolutional neural network training based on the CIELab color values of the standard image and the color cast image to establish a color cast correction model includes: taking the CIELab color values of the color cast image as the input of the convolutional neural network and the CIELab color values of the corresponding standard image as the output, and performing convolutional neural network training to obtain the color correction coefficients and weights of the convolutional neural network, thereby establishing the color cast correction model.
Optionally, when performing convolutional neural network training, an improved deep convolutional neural network, LeNet-L, is adopted. The LeNet-L structure is based on the LeNet-5 network with one additional fully connected layer, and comprises 9 layers in total from input to output: the input layer, 2 convolutional layers, 2 pooling layers, 3 fully connected layers, and the output layer.
Optionally, when performing convolutional neural network training, for a captured rock slag image, a partial region of the image is first randomly selected as a training sample and features are learned from that sample; the learned features are then used as filters and convolved with the original whole image, thereby obtaining activation values of the different features at any position in the original image.
the convolution operation formula of the convolution layer is as follows:
x_j^l = f( Σ_{i ∈ M_j} x_i^(l−1) * N_ij^l + b_j^l )

where l denotes the index of the convolutional layer, x_j^l denotes the j-th feature map of layer l, f is the activation function, M_j denotes the selection of feature maps of layer l−1 feeding the j-th output map, N denotes the convolution kernel (5 × 5 convolution kernels are selected for filtering), and b_j^l is the bias corresponding to the j-th feature map of layer l. The activation function is used to judge whether the output of each neuron reaches a threshold: a downstream neuron is activated only when the weighted sum of the signals transmitted from the preceding dendrites exceeds a preset threshold.
Optionally, the activation function f is a RELU activation function, as shown in the following equation:
f(x)=max(0,x)
The activation function is a non-saturating nonlinear function, where x is any neural unit in the feature map: when the input signal x is less than 0, the output of the activation function is 0, and when x is greater than 0, the output equals the input.
Optionally, in the sub-sampling process, the sub-sampling formula is as follows:
x_j^l = f( k_j^l · down(x_j^(l−1)) + b_j^l )
wherein k is a weight coefficient and the down function is a pooling function.
Optionally, performing color cast correction on the new color cast rock slag image using the color cast correction model to output a corrected rock slag image includes:
inputting a new color cast rock slag image;
converting the new color cast rock slag image from an RGB color space to a CIELab color space;
performing color cast correction on the CIELab color value of the new color cast rock slag image by adopting a color cast correction model to output a correction result;
the correction results are transformed into the RGB color space to obtain a corrected rock slag image.
The beneficial effects of the invention include:
the rock slag image color cast correction method of the deep convolutional neural network provided by the invention comprises the following steps: acquiring a plurality of standard images and corresponding color cast images, wherein the standard images represent rock slag images acquired under standard illumination light sources, and the color cast images represent rock slag images acquired under non-standard illumination; converting the standard image and the color cast image from an RGB color space to a CIELab color space; performing convolutional neural network training based on CIELab color values of the standard image and the color cast image to establish a color cast correction model; and performing color cast correction on the new color cast rock slag image by using a color cast correction model to output a corrected rock slag image. The rock slag sample image is converted from the RGB space to the CIELab space, so that the method is more suitable for classification detection of rear-end rock slag images, the correction model is established by utilizing the convolutional neural network, the color cast correction of the rock slag image is realized, and higher image quality is obtained at low cost.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed for the description of the embodiments or the prior art are briefly introduced below. The drawings in the following description are only some embodiments of the present invention; other drawings can be obtained from them by those skilled in the art without creative effort.
FIG. 1 is a schematic flow chart illustrating a rock slag image color cast correction method of a deep convolutional neural network according to an embodiment of the present invention;
fig. 2 is a schematic flow chart of establishing a color cast correction model based on a deep convolutional neural network according to an embodiment of the present invention;
FIG. 3 shows a CIELab color space diagram;
FIG. 4 shows a LeNet-L deep convolutional neural network framework diagram;
FIG. 5 illustrates a diagram of maximum pooling operation;
FIG. 6 shows a fully connected layer output schematic;
fig. 7 shows a schematic flowchart of the convolutional neural network color cast correction provided in the embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
A full-face hard rock tunnel boring machine (TBM) is a high-end piece of tunnel construction equipment that integrates a guidance system, a boring system, a support system, a slag discharge system, and so on. Classifying the rock slag images from the slag discharge system makes it possible to infer geological conditions and judge the rock being excavated by the boring machine. In recent years, real-time classification of rock slag images has been realized through edge terminals installed on site. However, because tunneling equipment operates in a harsh environment with complex illumination, changes in the shooting scene or in lighting conditions during rock slag photography can cause color distortion in the images, and the reduced realism hampers back-end image classification and detection. In order to come closer to the actual appearance of rock slag pictures in the tunneling field environment, the present color cast correction method is based on a deep convolutional neural network and performs color cast correction on captured color-distorted images.
Fig. 1 is a schematic flow chart of the rock slag image color cast correction method of the deep convolutional neural network according to an embodiment of the present invention. As shown in FIG. 1, the rock slag image color cast correction method based on the deep convolutional neural network provided by the invention comprises: acquiring a plurality of standard images and corresponding color cast images, wherein the standard images represent rock slag images acquired under a standard illumination light source, and the color cast images represent rock slag images acquired under non-standard illumination; converting the standard images and the color cast images from the RGB color space to the CIELab color space; performing convolutional neural network training based on the CIELab color values of the standard images and the color cast images to establish a color cast correction model; and performing color cast correction on a new color cast rock slag image by adopting the color cast correction model to output a corrected rock slag image.
Converting the rock slag sample images from RGB space to CIELab space makes the method better suited to back-end rock slag image classification and detection; establishing the correction model with a convolutional neural network solves the problem of color distortion in images captured under complex field conditions, realizes color cast correction of rock slag images, and obtains higher image quality at low cost.
In actual operation, the overall color cast correction process is as follows: a color cast correction model is established through a deep convolutional neural network; the R, G, B values of the image to be corrected are converted to CIELab and passed through the color cast correction model to obtain new L′, a′, b′ values; and the color-corrected image is then output.
Fig. 2 shows a schematic flow chart of establishing a color cast correction model based on a deep convolutional neural network according to an embodiment of the present invention.
The CIELab color space better matches human visual perception, so it is suitable for representing and calculating all light source colors and object colors. The image is converted from RGB space to the CIELab color space: the R, G, B values of the image color blocks are acquired and converted into CIELab values. For the rock slag image to be corrected, captured in the tunneling field, the tristimulus values of the i-th color block are Ri, Gi, Bi, where i = 1, 2, 3, …, n, with corresponding CIELab color values Li, ai, bi. The RGB color space cannot be converted directly into the CIELab color space; it is first converted into the XYZ space as an intermediate step. Choosing this more appropriate color space and converting the sample images from RGB to CIELab makes the method better suited to back-end rock slag image classification and detection.
The converting the standard image and the color cast image from the RGB color space to the CIELab color space comprises:
the standard image and the color cast image are converted from the RGB color space to the XYZ space using the following equation,
[X Y Z]ᵀ = M · [R G B]ᵀ (M being the 3 × 3 RGB-to-XYZ conversion matrix)
wherein, R, G and B represent the tristimulus values of the image, and X, Y and Z represent the values calculated after the image is converted from RGB color space to XYZ space;
the standard image and the color cast image are then converted from XYZ space to CIELab color space using the following equations,
L = 116·f(Y/Yn) − 16
a = 500·[f(X/Xn) − f(Y/Yn)]
b = 200·[f(Y/Yn) − f(Z/Zn)]
where L, a, b represent the CIELab color values of the image, Xn, Yn, Zn represent the tristimulus values of the preset reference white point, and the function f is defined as follows:
f(t) = t^(1/3),                     t > (6/29)^3
f(t) = (1/3)·(29/6)^2·t + 4/29,     otherwise
alternatively, X n 、Y n 、Z n Are 95.047, 100.0, 108.883 in sequence.
The CIELab color space diagram is shown in FIG. 3. L represents the luminance of a pixel, with values in the range [0, 100] running from pure black to pure white; a and b represent the chromaticity of the pixel, where +a represents red, −a represents green, +b represents yellow, and −b represents blue. The brightness difference is:
ΔL = L1 − L2
When ΔL is positive, the image is lighter in color; when ΔL is negative, the image is darker.
The chroma differences are:
Δa = a1 − a2
Δb = b1 − b2
When Δa is positive, the image color is reddish; when Δa is negative, it is greenish. When Δb is positive, the image color is yellowish; when Δb is negative, it is bluish.
Optionally, performing convolutional neural network training based on the CIELab color values of the standard image and the color cast image to establish a color cast correction model includes: taking the CIELab color values of the color cast image as the input of the convolutional neural network and the CIELab color values of the corresponding standard image as the output, that is, the Li, ai, bi values of the color cast image are used as the network input and the L, a, b values of the normal standard image are used as the standard output values; convolutional neural network training is then performed to obtain the color correction coefficients and weights of the convolutional neural network, thereby establishing the color cast correction model.
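As an illustration of this training setup, the paired samples could be assembled as follows. The function name make_training_pairs, the patch size, and the number of patches per image are assumptions for illustration; rgb_to_lab is the conversion helper sketched earlier, passed in as a parameter, and images are assumed to be H × W × 3 arrays.

```python
import numpy as np

def make_training_pairs(cast_images, standard_images, rgb_to_lab,
                        patch=32, patches_per_image=16, seed=0):
    """Assemble (input, target) CIELab patches from paired color-cast / standard images."""
    rng = np.random.default_rng(seed)
    inputs, targets = [], []
    for cast_rgb, std_rgb in zip(cast_images, standard_images):
        cast_lab = rgb_to_lab(cast_rgb)   # CIELab values of the color-cast image (input)
        std_lab = rgb_to_lab(std_rgb)     # CIELab values of the standard image (target)
        h, w, _ = cast_lab.shape
        for _ in range(patches_per_image):
            # randomly select a partial region of the image as a training sample
            y = rng.integers(0, h - patch + 1)
            x = rng.integers(0, w - patch + 1)
            inputs.append(cast_lab[y:y + patch, x:x + patch])
            targets.append(std_lab[y:y + patch, x:x + patch])
    return np.stack(inputs), np.stack(targets)
```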
Optionally, when performing convolutional neural network training, an improved deep convolutional neural network, LeNet-L, is adopted. The LeNet-L structure is based on the LeNet-5 network with an added fully connected layer L-K (K denotes the number of neurons in the added layer), and comprises 9 layers in total from input to output: the input layer, 2 convolutional layers (CONV), 2 pooling layers (POOL), 3 fully connected layers (FC), and the output layer, as shown in FIG. 4. During convolutional feature extraction, the color difference of each channel is calculated, convolutional features are extracted, and pooling and training are performed with an activation function. By adopting the LeNet-L network structure, i.e., the fully connected layer L-K added to the LeNet-5 convolutional neural network, the loss of feature information is reduced and more realistic image color information is obtained.
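A minimal PyTorch sketch of such a 9-layer LeNet-L-style structure is shown below. The channel counts, the width K of the added fully connected layer, and the 32 × 32 Lab patch input are illustrative assumptions, not values fixed by the disclosure; PyTorch is used only as an example framework.

```python
import torch
import torch.nn as nn

class LeNetL(nn.Module):
    """Illustrative LeNet-L-style network: input, 2 conv, 2 pool, 3 FC, output."""

    def __init__(self, k=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 6, kernel_size=5),   # C1: 5x5 kernels over the L, a, b channels
            nn.ReLU(),
            nn.MaxPool2d(2),                  # S2: max pooling
            nn.Conv2d(6, 16, kernel_size=5),  # C3: 5x5 kernels
            nn.ReLU(),
            nn.MaxPool2d(2),                  # S4: max pooling
        )
        self.classifier = nn.Sequential(
            nn.Linear(16 * 5 * 5, 120), nn.ReLU(),  # F5
            nn.Linear(120, 84), nn.ReLU(),          # F6
            nn.Linear(84, k), nn.ReLU(),            # added fully connected layer L-K
            nn.Linear(k, 3),                        # output layer: corrected L, a, b
        )

    def forward(self, x):                 # x: (batch, 3, 32, 32) Lab patches
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))
```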
Optionally, when performing convolutional neural network training, for a captured rock slag image, a partial region of the image is first randomly selected as a training sample and features are learned from that sample (image analysis); the learned features are then used as filters and convolved with the original whole image, thereby obtaining activation values of the different features at any position in the original image.
The convolution operation formula of the convolutional layer is as follows:
x_j^l = f( Σ_{i ∈ M_j} x_i^(l−1) * N_ij^l + b_j^l )

where l denotes the index of the convolutional layer, x_j^l denotes the j-th feature map of layer l, f is the activation function, M_j denotes the selection of feature maps of layer l−1 feeding the j-th output map, N denotes the convolution kernel (5 × 5 convolution kernels are selected for filtering), and b_j^l is the bias corresponding to the j-th feature map of layer l. The activation function is used to judge whether the output of each neuron reaches a threshold: a downstream neuron is activated only when the weighted sum of the signals transmitted from the preceding dendrites exceeds a preset threshold.
Optionally, the activation function f is a RELU activation function, as shown in the following equation:
f(x)=max(0,x)
The activation function is a non-saturating nonlinear function, where x is any neural unit in the feature map: when the input signal x is less than 0, the output of the activation function is 0, and when x is greater than 0, the output equals the input. The activation function f of a traditional neural network generally adopts saturating nonlinear functions such as sigmoid and tanh; this application adopts the ReLU activation function, so an activation value is obtained simply by thresholding. Analogous to the way biological neurons fire, it has the characteristics of one-sided suppression, a wide excitation boundary, and sparse activation, and it converges faster during gradient-descent training.
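A small NumPy illustration of this activation, assuming nothing beyond the formula above:

```python
import numpy as np

def relu(x):
    # f(x) = max(0, x): negative inputs map to 0, positive inputs pass through
    return np.maximum(0.0, x)

print(relu(np.array([-2.0, -0.5, 0.0, 1.5, 3.0])))   # -> [0. 0. 0. 1.5 3.]
```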
Optionally, in the sub-sampling process, the sub-sampling formula is as follows:
x_j^l = f( k_j^l · down(x_j^(l−1)) + b_j^l )
where k is a weight coefficient and down(·) is the pooling function. The number of input convolutional feature maps is unchanged after the pooling operation, while each feature map becomes 1/m of its original size (where m is the pooling size), simplifying the computational complexity of the network.
The present application uses a max-pooling (Max pool) operation to select the maximum of the convolutional features within a local region, as shown in FIG. 5. Max pooling is performed on 4 non-overlapping sub-regions of the image with a 3 × 3 window to obtain the pooled feature map, reducing the dimensionality of the convolutional features and thus the data volume of the feature map. Sub-sampling and weight sharing reduce computational complexity and help prevent overfitting, ensuring the convolutional network's robustness to translation and scaling.
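A minimal NumPy sketch of non-overlapping max pooling, illustrating that each feature map shrinks to 1/m of its size per axis; the pooling size m = 2 used here is only an example.

```python
import numpy as np

def max_pool(feature_map, m=2):
    """Non-overlapping m x m max pooling; output is 1/m of the input size per axis."""
    h, w = feature_map.shape
    trimmed = feature_map[:h - h % m, :w - w % m]        # drop rows/cols that don't fit
    blocks = trimmed.reshape(h // m, m, w // m, m)       # group into m x m blocks
    return blocks.max(axis=(1, 3))                       # take the maximum of each block

fm = np.arange(16, dtype=float).reshape(4, 4)
print(max_pool(fm, 2))   # -> [[ 5.  7.] [13. 15.]]: each 2x2 block replaced by its maximum
```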
The sub-sampled feature maps Xj are passed to the fully connected layers to establish the color cast correction model. The improved LeNet-L convolutional neural network is adopted: based on LeNet-5, an additional fully connected layer is inserted between the pooling layer and the fully connected layers, which reduces the loss of convolutional features and retains more realistic image colors. Each node of a fully connected layer is connected to all nodes of the previous layer and integrates the extracted features. A simple 3-layer network is taken as an example, as shown in FIG. 6, where x1, x2, x3 output from the pooling layer are the inputs of the fully connected layers, y1, y2, y3 are the outputs of the first fully connected layer, y′1, y′2, y′3 are the outputs of the added L-K fully connected layer, and y″1, y″2, y″3 are the final outputs of the fully connected layers. The output matrix can be obtained by calculating the weight parameters W and bias parameters b:
y = W1·x + b1,  y′ = W2·y + b2,  y″ = W3·y′ + b3
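For illustration, the three-layer calculation described above can be written as the following sketch, with random placeholder weights and biases standing in for trained parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(3)                                     # x1, x2, x3 from the pooling layer

W1, b1 = rng.standard_normal((3, 3)), rng.standard_normal(3)   # first fully connected layer
W2, b2 = rng.standard_normal((3, 3)), rng.standard_normal(3)   # added L-K fully connected layer
W3, b3 = rng.standard_normal((3, 3)), rng.standard_normal(3)   # final fully connected layer

y = W1 @ x + b1            # y1, y2, y3
y_p = W2 @ y + b2          # y'1, y'2, y'3
y_pp = W3 @ y_p + b3       # y''1, y''2, y''3: final fully connected output
print(y_pp)
```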
optionally, the performing, for the new color-cast rock slag image, color-cast correction using a color-cast correction model to output a corrected rock slag image includes:
inputting a new color cast rock slag image;
converting the new color cast rock slag image from an RGB color space to a CIELab color space;
performing color cast correction on the CIELab color value of the new color cast rock slag image by adopting a color cast correction model to output a correction result;
the correction results are transformed into the RGB color space to obtain a corrected rock slag image.
The color-distorted image is corrected by the color cast correction model trained with the deep convolutional neural network; the flow chart is shown in FIG. 7. The result is transformed from CIELab space back to the RGB primary color space, and the corrected, realistic image is finally output. The transformation from CIELab to XYZ is as follows:
Y = Yn · f⁻¹((L + 16)/116)
X = Xn · f⁻¹((L + 16)/116 + a/500)
Z = Zn · f⁻¹((L + 16)/116 − b/200)

where f⁻¹ denotes the inverse of the function f defined above.
the XYZ to RGB color space inverse transform is as follows:
[R G B]ᵀ = M⁻¹ · [X Y Z]ᵀ (M⁻¹ being the inverse of the RGB-to-XYZ conversion matrix M used above)
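A minimal NumPy sketch of this inverse transformation is given below. As with the forward conversion, the standard sRGB/D65 matrix inverse is assumed here because the patent's exact coefficients appear only in the original figure; the result is linear RGB in [0, 1], with gamma correction again omitted.

```python
import numpy as np

XN, YN, ZN = 95.047, 100.0, 108.883

# Inverse of the standard sRGB (linear) XYZ matrix assumed in the earlier sketch.
XYZ2RGB = np.array([[ 3.2406, -1.5372, -0.4986],
                    [-0.9689,  1.8758,  0.0415],
                    [ 0.0557, -0.2040,  1.0570]]) / 100.0

def _f_inv(t):
    # inverse of the CIELab companding function f
    delta = 6.0 / 29.0
    return np.where(t > delta, t ** 3, 3 * delta ** 2 * (t - 4.0 / 29.0))

def lab_to_rgb(lab):
    """lab: float array of shape (..., 3); returns linear RGB clipped to [0, 1]."""
    L, a, b = lab[..., 0], lab[..., 1], lab[..., 2]
    fy = (L + 16.0) / 116.0
    x = XN * _f_inv(fy + a / 500.0)    # CIELab -> XYZ
    y = YN * _f_inv(fy)
    z = ZN * _f_inv(fy - b / 200.0)
    rgb = np.stack([x, y, z], axis=-1) @ XYZ2RGB.T   # XYZ -> RGB
    return np.clip(rgb, 0.0, 1.0)
```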
in summary, compared with the traditional neural network (such as a BP neural network) color cast correction algorithm, the local connection and weight sharing of the method reduce the complexity of convolutional network training, different image features can be extracted by different convolutional kernels, the instantaneity of an offline training color cast correction model is improved, and the robustness of the network and over-fitting prevention are facilitated. In addition, the improved Convolutional Neural network LeNet-L (LetNet-L connected Neural Networks) is adopted, namely, a full connection layer L-K is added based on LeNet-5, more real color information can be reserved, the loss of characteristic information when the full connection layer extracts the color cast image characteristics can be reduced to a certain extent, and the corrected image is closer to a real color image. According to the method and the device, a more appropriate color space is selected, and compared with the nonuniformity and the device dependence of an RGB color space, the color difference consistent with the actual perception difference is calculated by adopting the CIELab color space, so that the sense of reality of the image color is improved.
The above embodiments are merely illustrative of the technical concept and features of the present invention, and the purpose thereof is to enable those skilled in the art to understand the content of the present invention and implement the present invention, and not to limit the scope of the present invention, and all equivalent changes or modifications made according to the spirit of the present invention should be covered in the scope of the present invention.

Claims (9)

1. A rock slag image color cast correction method based on a deep convolutional neural network is characterized by comprising the following steps:
the method comprises the steps of obtaining a plurality of standard images and corresponding color cast images, wherein the standard images represent rock slag images acquired under a standard illumination light source, and the color cast images represent rock slag images acquired under non-standard illumination;
converting the standard image and the color cast image from an RGB color space to a CIELab color space;
performing convolutional neural network training based on CIELab color values of the standard image and the color cast image to establish a color cast correction model;
and performing color cast correction on the new color cast rock slag image by adopting the color cast correction model to output a corrected rock slag image.
2. The method for rock sediment image color cast correction based on the deep convolutional neural network of claim 1, wherein the converting the standard image and the color cast image from an RGB color space to a CIELab color space comprises:
converting the standard image and the color cast image from an RGB color space to an XYZ space using the following equation,
[X Y Z]ᵀ = M · [R G B]ᵀ (M being the 3 × 3 RGB-to-XYZ conversion matrix)
wherein, R, G, B represent RGB tristimulus values of the image, X, Y, Z represent the value calculated after the image is converted from RGB color space to XYZ space;
the standard image and the color cast image are then converted from XYZ space to CIELab color space using the following equation,
L = 116·f(Y/Yn) − 16
a = 500·[f(X/Xn) − f(Y/Yn)]
b = 200·[f(Y/Yn) − f(Z/Zn)]
where L, a, b represent the CIELab color values of the image, Xn, Yn, Zn represent the tristimulus values of the preset reference white point, and the function f is defined as follows:
f(t) = t^(1/3),                     t > (6/29)^3
f(t) = (1/3)·(29/6)^2·t + 4/29,     otherwise
3. The rock slag image color cast correction method based on the deep convolutional neural network of claim 2, wherein Xn, Yn, Zn are 95.047, 100.0, and 108.883, respectively.
4. The rock slag image color cast correction method based on the deep convolutional neural network as claimed in claim 1, wherein the convolutional neural network training is performed based on the CIELab color values of the standard image and the color cast image to establish a color cast correction model, and the method comprises the following steps: and taking the CIELab color value of the color cast image as the input of a convolutional neural network, taking the CIELab color value of the corresponding standard image as the output of the convolutional neural network, and performing convolutional neural network training to obtain a color correction coefficient and a weight of the convolutional neural network so as to establish a color cast correction model.
5. The rock slag image color cast correction method based on the deep convolutional neural network as claimed in claim 4, characterized in that, when the convolutional neural network training is performed, an improved deep convolutional neural network LeNet-L network structure is adopted, and the improved deep convolutional neural network LeNet-L network structure is as follows: the network structure is based on a LeNet-5 network structure, a full connection layer is added, and the network structure comprises 9 layers in total from an input layer to an output layer, wherein the network structure comprises the input layer, 2 convolutional layers, 2 pooling layers, 3 full connection layers and the output layer.
6. The rock slag image color cast correction method based on the deep convolutional neural network as claimed in claim 5, characterized in that when the convolutional neural network training is performed, for the shot rock slag image, firstly, a part of area is randomly selected from the image as a training sample, characteristics are learned from the part of sample, and then the learned characteristics are used as a filter to perform convolution operation with the original whole image, so that activation values of different characteristics of any position of the original image are obtained;
the convolution operation formula of the convolution layer is as follows:
x_j^l = f( Σ_{i ∈ M_j} x_i^(l−1) * N_ij^l + b_j^l )

wherein l denotes the index of the convolutional layer, x_j^l denotes the j-th feature map of layer l, f is the activation function, M_j denotes the selection of feature maps of layer l−1 feeding the j-th output map, N denotes the convolution kernel (5 × 5 convolution kernels are selected for filtering), and b_j^l is the bias corresponding to the j-th feature map of layer l; the activation function is used to judge whether the output of each neuron reaches a threshold, a downstream neuron being activated only when the weighted sum of the signals transmitted from the preceding dendrites exceeds a preset threshold.
7. The rock slag image color cast correction method based on the deep convolutional neural network as claimed in claim 6, wherein the activation function f is RELU activation function as shown in the following formula:
f(x)=max(0,x)
the activation function is an unsaturated nonlinear function, wherein x is any neural unit in the characteristic diagram, when an input signal x is less than 0, the output of the activation function is 0, and when x is more than 0, the output of the activation function is equal to the input.
8. The rock slag image color cast correction method based on the deep convolutional neural network as claimed in claim 7, wherein in the sub-sampling process, the sub-sampling formula is as follows:
x_j^l = f( k_j^l · down(x_j^(l−1)) + b_j^l )
wherein k is a weight coefficient and the down function is a pooling function.
9. The rock slag image color cast correction method based on the deep convolutional neural network as claimed in claim 1, wherein the performing color cast correction on the new color cast rock slag image by using the color cast correction model to output a corrected rock slag image comprises:
inputting a new color cast rock slag image;
converting the new color cast rock slag image from an RGB color space to a CIELab color space;
performing color cast correction on the CIELab color value of the new color cast rock slag image by adopting the color cast correction model so as to output a correction result;
transforming the correction result to an RGB color space to obtain a corrected rock slag image.
CN202211640068.7A (filed 2022-12-20): Rock slag image color cast correction method based on deep convolutional neural network. Status: Pending. Publication: CN115861119A.


Publications (1)

Publication Number Publication Date
CN115861119A true CN115861119A (en) 2023-03-28

Family

ID=85674433



Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116559119A (en) * 2023-05-11 2023-08-08 东北林业大学 Deep learning-based wood dyeing color difference detection method, system and medium
CN116559119B (en) * 2023-05-11 2024-01-26 东北林业大学 Deep learning-based wood dyeing color difference detection method, system and medium
CN116721038A (en) * 2023-08-07 2023-09-08 荣耀终端有限公司 Color correction method, electronic device, and storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination