CN110675382A - Aluminum electrolysis superheat degree identification method based on CNN-LapseLM - Google Patents
Aluminum electrolysis superheat degree identification method based on CNN-LapseLM
- Publication number
- CN110675382A (application number CN201910902794.3A)
- Authority
- CN
- China
- Prior art keywords
- features
- image
- matrix
- cnn
- lapselm
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T7/0004—Industrial image inspection
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06N3/045—Combinations of networks
- G06N3/048—Activation functions
- G06N3/08—Learning methods
- G06T2207/10004—Still image; Photographic image
- G06T2207/10024—Color image
- G06T2207/20081—Training; Learning
- G06T2207/20084—Artificial neural networks [ANN]
- G06T2207/30108—Industrial image inspection
- G06T2207/30136—Metal
Abstract
The invention discloses an aluminum electrolysis superheat degree identification method based on CNN feature fusion and a semi-supervised Laplacian extreme learning machine, comprising the following steps: step 1, collecting real-time production data of aluminum electrolysis and carrying out normalization and standardization processing on the collected data; step 2, extracting the depth features of the fire eye image in the aluminum electrolysis industrial process with a convolutional neural network (CNN); step 3, fusing the depth features of the fire eye image extracted in step 2 with the other features of the fire eye image; and step 4, using a semi-supervised extreme learning machine constructed with Laplacian regularization (LapsELM) as a classifier to judge the current superheat state of the electrolytic cell from the fire eye image.
Description
Technical Field
The invention relates to the field of industrial control, in particular to an aluminum electrolysis superheat degree identification method based on CNN feature fusion and a semi-supervised Laplace extreme learning machine (CNN-LapseLM).
Background
In the aluminum electrolysis industrial process, the degree of superheat is one of the important indexes for evaluating the performance of an electrolytic cell, and it can reflect the cell's current working condition. However, accurately measuring the superheat degree remains an unresolved challenge. The traditional manual measurement method is easily influenced by various factors, such as manual reading errors and the precision of the measuring instrument. Moreover, the aluminum electrolysis industrial site is an environment of high temperature, high humidity and high concentrations of corrosive gas; this severe environment further degrades the precision of manual measurement and damages the measuring instruments.
In recent years, with the development of deep learning, convolutional neural networks have come to be widely used in fields such as object detection, image recognition and speech recognition. A convolutional neural network can autonomously extract the depth features of an image; however, because depth features cannot comprehensively reflect all the information contained in an image, recognition accuracy based on depth features alone is low. The Extreme Learning Machine (ELM), proposed by Huang Guang-Bin, is a single-hidden-layer feedforward neural network with a typical three-layer structure. Compared with traditional methods, its training process is short: the input weights and the biases of the hidden layer are set randomly, and a suitable output weight is obtained by solving a least-squares problem. Many research results show that extreme learning machines are more effective on classification problems than other conventional classifiers such as Support Vector Machines (SVM), Random Forests and Decision Trees.
To overcome the low accuracy and severe instrument damage of traditional manual superheat measurement, the method extracts features from the fire hole images collected on the aluminum electrolysis industrial site and judges the superheat state from those features, which both saves time and improves the accuracy of the superheat judgment. However, most of the acquired fire hole images are unlabeled, and labeling all of them manually not only wastes manpower and material resources but also introduces human error. Therefore, following the manifold assumption and the smoothness assumption of semi-supervised learning, a Laplacian regularization term is introduced to construct a Laplacian extreme learning machine (LapsELM), a model that can fully utilize all of the acquired fire eye images.
Disclosure of Invention
The patent aims to provide a new superheat degree identification method (CNN-LapseLM) based on CNN feature fusion and semi-supervised Laplace extreme learning machine in order to fully utilize unlabeled fire hole images in the aluminum electrolysis industrial process. The method mainly comprises the steps of extracting the depth characteristic of a fire hole image through CNN, fusing the texture characteristic, the color characteristic and the gray level characteristic of the image, fully extracting image information, and predicting the state of the superheat degree of an electrolytic bath by using a LapseLM model.
The invention aims to solve the technical problems in the prior art. Therefore, the invention discloses an aluminum electrolysis superheat degree identification method based on CNN feature fusion and a semi-supervised Laplace extreme learning machine, which comprises the following steps:
step 1, collecting real-time production data of aluminum electrolysis, and carrying out normalization processing and standardization processing on the collected data;
step 2, extracting the depth characteristics of the fire eye image in the aluminum electrolysis industrial process by using a Convolutional Neural Network (CNN);
step 3, fusing the depth features of the fire hole image extracted in the step 2 with other features of the fire hole image;
and step 4, using a semi-supervised extreme learning machine constructed with Laplacian regularization (LapsELM) as a classifier to judge the current superheat state of the electrolytic cell from the fire eye image.
Still further, the other features include: color features, grayscale features, texture features.
Still further, the method using Laplacian regularization further comprises: given a labeled data set X_l = {(x_i, y_i)}_{i=1}^{l} and an unlabeled data set X_u = {x_i}_{i=l+1}^{l+u}, obtaining the target regularization function:

    Γ_reg = (1/2) Σ_{i,j} u_ij ||f(x_i) - f(x_j)||²    (1)

where u_ij represents the pairwise similarity between two samples x_i and x_j; u_ij is either calculated with the Gaussian kernel, u_ij = exp(-||x_i - x_j||² / (2σ²)), or fixed to 1.

Equation (1) is further converted to the following matrix form:

    Γ_reg = tr(F^T L F)    (2)

where tr(·) represents the trace of a matrix, L = D - U is the graph Laplacian matrix, D is a diagonal matrix whose diagonal elements are D_ii = Σ_j u_ij, and F = [f(x_1), …, f(x_{l+u})]^T stacks the predicted output values f(x) of the samples.
Still further, the extreme learning machine further comprises:

the mapping function:

    f(x) = Σ_{i=1}^{n_h} β_i g(a_i · x + b_i)    (3)

and the constraint equation:

    gβ = Y    (4)

where g(·) is a continuous activation function, a is the input weight from the input layer to the hidden layer, b is the bias of the hidden layer, and β is the output weight from the hidden layer to the output layer.
Further, the activation function is usually one of a gaussian activation function, a sigmoid activation function or a Tanh activation function.
Further, the objective function is a second-order function with the expression:

    Γ_ELM = (1/2)||β||² + (C/2)||Y - gβ||²    (5)

where g is the activation (hidden-layer output) matrix. In formula (5) the first term is a regularization term for preventing overfitting, the second term is an error function term, and C is called the penalty coefficient; the output weight is obtained by differentiating the objective function (5) and finding its minimum.

Differentiating formula (5) gives:

    ∇Γ_ELM = β - C g^T (Y - gβ) = 0    (6)

There are two cases for the output weight: when the number of training samples is larger than the number of hidden-layer neuron nodes, i.e. the hidden-layer matrix has full column rank, β = (I_{n_h}/C + g^T g)^{-1} g^T Y; when the number of training samples is less than the number of hidden-layer neuron nodes, i.e. the hidden-layer matrix has full row rank, β = g^T (I_N/C + g g^T)^{-1} Y.
Further, with LapsELM used for superheat classification, equation (5) can be rewritten in matrix form by adding the Laplacian regularization constraint to the objective function:

    Γ = (1/2)||β||² + (1/2)||C̃^{1/2}(Ỹ - gβ)||² + (λ/2) tr(β^T g^T L g β)    (7)

where Ỹ is an enhanced training target whose first l rows equal Y_l and whose remaining rows are set to 0; C̃ is an (l+u)×(l+u) diagonal matrix whose diagonal elements in the first l rows are [C̃]_ii = C_i and whose remaining diagonal elements are 0. By setting the gradient of equation (7) to 0, the output weight of the CNN-LapsELM model can be calculated:

    β = (I_{n_h} + g^T C̃ g + λ g^T L g)^{-1} g^T C̃ Ỹ    (8)

Here g is an (l+u)×n_h matrix; when the number of labeled samples is less than the number of hidden-layer nodes, the output weight expression of the CNN-LapsELM model is:

    β = g^T (I_{l+u} + C̃ g g^T + λ L g g^T)^{-1} C̃ Ỹ    (9)

where I_{l+u} is the identity matrix of dimension (l+u)×(l+u), and g̃ and Ỹ are enhancement matrices equal to the first l rows of the g and Y matrices respectively, with the remaining u rows set to 0. λ is a balance parameter: when it is set to 0, the output weight expression of the CNN-LapsELM model reduces to the output weight of the traditional ELM model. The balance parameter is selected, according to the error on the verification data set, as the most suitable value from the exponential sequence {e^x | x = -7, -6, …, 2, 3}.
Still further, the step 1 further comprises: and preprocessing the image of the fire hole, removing noise points on the original image of the fire hole by using a self-adaptive mean filtering algorithm, and extracting the edge of the area of the fire hole from the whole image.
Still further, the preprocessing of the fire hole image further includes: dividing the fire hole image into a number of non-overlapping sub-regions and calculating three apparent features of the image: texture features, color features and gray-level features, wherein the texture features comprise the entropy and energy of the image; the color features mainly comprise the average value, the standard deviation and the pixel peak value of the color histogram of the fire eye image; and the gray-level features are the average gray value of the fire eye image, the corner feature and the histogram gray peak value. Furthermore, a CNN is used to extract the depth features while the color and gray features are extracted from the fire hole image, the superheat degree of the fire hole image is classified by LapsELM, and the fused features form a fused feature matrix:

    M_fusion = [M_deep, M_texture, M_color, M_gray]

where M_deep represents the depth features extracted by the CNN, M_color represents the color feature matrix, M_texture represents the texture feature matrix, and M_gray represents the gray-level matrix extracted from the fire eye image.
Drawings
The invention will be further understood from the following description in conjunction with the accompanying drawings. The components in the figures are not necessarily to scale, emphasis instead being placed upon illustrating the principles of the embodiments. In the drawings, like reference numerals designate corresponding parts throughout the different views.
FIG. 1 is a schematic diagram of a flare hole image in the process of processing the flare hole image, wherein (a) is an original flare hole image, (b) is a subregion segmentation image, and (c) is a grayed flare hole image;
FIG. 2 is a flow chart of a CNN-LapseLM algorithm in the aluminum electrolysis superheat degree identification method based on CNN feature fusion and semi-supervised Laplace extreme learning machine of the present invention;
FIG. 3 is a graph comparing the performance of marked and unmarked images of a flare in accordance with the present invention, wherein M_1 represents the color feature, M_2 represents the texture feature, and M_3 represents the grayscale feature;
FIG. 4 is a comparison histogram of evaluation indicators for different algorithms according to the present invention.
Detailed Description
In order to make the content and purpose of the invention clearer, a detailed description of a specific implementation of the aluminum electrolysis superheat degree identification method based on CNN feature fusion and the semi-supervised Laplacian extreme learning machine is given below.
Example one
1. Description and preprocessing of data sets
In the aluminum electrolysis industrial process, fire hole images are collected by operators using industrial camera equipment. Owing to the industrial environment and the physical equipment, the acquired fire hole images contain some noise and interference, so preprocessing the fire hole image is crucial. Here, an adaptive mean filtering algorithm is used to remove noise points on the original fire hole image and eliminate the influence of noise on superheat state classification. In addition, because not every part of an acquired image belongs to the fire hole, and in order to avoid interference from other parts on the recognition result, improve the processing efficiency of the fire hole images and reduce the memory burden when the CNN extracts features, the fire hole image is preprocessed and the edge of the fire hole region is extracted from the whole image.
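As an illustration, the denoising step above can be sketched with a plain (non-adaptive) mean filter; the patent does not spell out the adaptive variant here, so the 3×3 kernel and the simple averaging rule below are assumptions:

```python
import numpy as np

def mean_filter(img, k=3):
    """k x k mean filter (a stand-in for the adaptive mean filter
    described above; the adaptive rule itself is not specified here)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

noisy = np.zeros((8, 8))
noisy[4, 4] = 100.0           # an isolated noise spike
smooth = mean_filter(noisy)
# the spike is spread over its 3x3 neighbourhood, so the peak drops
assert smooth.max() < noisy.max()
```

A real deployment would use an adaptive window (e.g. sized by local variance) rather than this fixed kernel.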
The experiment used 1200 images, 200 of which were labeled data and the remainder unlabeled data. All tagged data is tagged by experienced experts or professional operators to ensure the correctness of the tags. Randomly selecting 100 labeled data and 1000 unlabeled fire eye images as training data sets, using the remaining 100 labeled data as test data sets, and performing a CNN-LapseLM experiment by adopting a four-fold cross-validation method.
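The data split just described can be sketched as follows; the index layout and random seed are illustrative, not taken from the patent:

```python
import numpy as np

rng = np.random.default_rng(0)

labeled = rng.permutation(200)     # indices of the 200 labeled images
unlabeled = np.arange(200, 1200)   # indices of the 1000 unlabeled images

train_labeled = labeled[:100]      # 100 labeled images for training
test_labeled = labeled[100:]       # 100 labeled images held out for testing
train_set = np.concatenate([train_labeled, unlabeled])
assert len(train_set) == 1100 and len(test_labeled) == 100

# four-fold cross-validation over the labeled training part
folds = np.array_split(train_labeled, 4)
assert all(len(f) == 25 for f in folds)
```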
2. Feature calculation
Through preprocessing, the fire hole image yields a clearer, enhanced gray-level image, which reduces the calculation burden of feature extraction during model training. The fire hole image is then divided into a number of non-overlapping sub-regions, which facilitates the extraction of image features, and three apparent features of the image are calculated: texture features, color features and gray-level features. The texture features comprise the entropy and energy of the image; the color features mainly comprise the average value, the standard deviation and the pixel peak value of the color histogram of the fire eye image; the gray-level features are the average gray value of the fire eye image, the corner feature and the histogram gray peak value. The specific details are shown in Table 1.
The CNN is used to extract the depth features of the image; its input is the normalized 48×48 fire hole image, and a 256-dimensional depth feature matrix is finally extracted from the fire hole image.
TABLE 1 characteristic types for flare eye image extraction
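The 48×48 input to 256-dimensional output mapping above can be illustrated with a single random, untrained convolution-and-pooling stage; the 16 filters, 5×5 kernels and 11×11 pooling window below are assumptions chosen only so that the output dimension comes out to 256 (16 × 4 × 4), not the patent's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(img, kernels):
    """Valid 2-D convolution of one channel with a bank of kernels, then ReLU."""
    kh, kw = kernels.shape[1:]
    H, W = img.shape
    out = np.empty((kernels.shape[0], H - kh + 1, W - kw + 1))
    for f, k in enumerate(kernels):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[f, i, j] = (img[i:i + kh, j:j + kw] * k).sum()
    return np.maximum(out, 0.0)

def max_pool(fmap, p):
    F, H, W = fmap.shape
    return fmap.reshape(F, H // p, p, W // p, p).max(axis=(2, 4))

image = rng.random((48, 48))               # normalised 48x48 fire-hole image
kernels = rng.standard_normal((16, 5, 5))  # 16 random 5x5 filters (untrained)
# 48 -> 44 after valid conv, 44 -> 4 after 11x11 max pooling
features = max_pool(conv2d(image, kernels), 11).ravel()
assert features.shape == (256,)            # the 256-dim depth feature vector
```

A trained CNN would learn these filters end to end; this sketch only shows how the spatial dimensions reduce to a 256-dimensional vector.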
3. Feature fusion
The main idea of the superheat degree identification method based on the CNN-LapseLM is to extract depth features by using the CNN, extract color features and gray features from a fire hole image and then classify the superheat degree by using the LapseLM. The fused features form a fused feature matrix:
    M_fusion = [M_deep, M_texture, M_color, M_gray]

where M_deep represents the depth features extracted by the CNN, M_color represents the color feature matrix, M_texture represents the texture feature matrix, and M_gray represents the gray-level matrix extracted from the fire eye image.
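The fusion itself is plain column-wise concatenation of the per-image feature vectors; the apparent-feature dimensions below (2 texture, 3 color, 3 gray) follow Table 1's feature list, while the sample count is illustrative:

```python
import numpy as np

n = 5                             # five fire-hole images
M_deep = np.random.rand(n, 256)   # CNN depth features
M_texture = np.random.rand(n, 2)  # entropy, energy
M_color = np.random.rand(n, 3)    # histogram mean, std, pixel peak
M_gray = np.random.rand(n, 3)     # mean grey value, corner, histogram peak

# M_fusion = [M_deep, M_texture, M_color, M_gray]
M_fusion = np.hstack([M_deep, M_texture, M_color, M_gray])
assert M_fusion.shape == (n, 264)
```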
Example two
1. Laplace regularization:
In general, a model trained on a small labeled training sample set cannot reach the expected recognition accuracy. To solve the problem of few labeled training samples and improve model performance, a semi-supervised learning method is adopted. Given a labeled data set X_l = {(x_i, y_i)}_{i=1}^{l} and an unlabeled data set X_u = {x_i}_{i=l+1}^{l+u}, the Laplacian-based semi-supervised learning method can fully extract the geometric distribution information contained in all the available data. Semi-supervised learning methods rest on the following two basic assumptions:

(1) All labeled data X_l and unlabeled data X_u are drawn from the same marginal distribution.

(2) If two sample points x_1 and x_2 are very close to each other, the corresponding conditional probabilities P(y|x_1) and P(y|x_2) should be very similar.
Under these two assumptions, the target regularization function can be derived:

    Γ_reg = (1/2) Σ_{i,j} u_ij ||f(x_i) - f(x_j)||²    (1)

where u_ij represents the pairwise similarity between two samples x_i and x_j. In general, u_ij is calculated from the Gaussian kernel function, u_ij = exp(-||x_i - x_j||² / (2σ²)), or directly fixed to 1. According to the related research, (1) can be converted into the following matrix form:

    Γ_reg = tr(F^T L F)    (2)

where tr(·) represents the trace of a matrix, L = D - U is the graph Laplacian matrix, D is a diagonal matrix whose diagonal elements are D_ii = Σ_j u_ij, and F = [f(x_1), …, f(x_{l+u})]^T stacks the predicted output values f(x) of the samples.
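A small sketch of the similarity matrix U and the graph Laplacian L = D - U defined above (σ is a free bandwidth parameter):

```python
import numpy as np

def graph_laplacian(X, sigma=1.0):
    """Gaussian-kernel similarity matrix U and graph Laplacian L = D - U."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    U = np.exp(-sq / (2 * sigma ** 2))   # u_ij = exp(-||x_i - x_j||^2 / 2s^2)
    D = np.diag(U.sum(axis=1))           # D_ii = sum_j u_ij
    return D - U

X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]])
L = graph_laplacian(X)
assert np.allclose(L.sum(axis=1), 0)     # rows of a Laplacian sum to zero
assert np.allclose(L, L.T)               # and it is symmetric
```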
2. Extreme learning machine
The extreme learning machine, proposed by Huang Guang-Bin, is an efficient method for training single-hidden-layer feedforward neural networks. In general, its mapping function is:

    f(x) = Σ_{i=1}^{n_h} β_i g(a_i · x + b_i)    (3)

and its constraint equation is:

    gβ = Y    (4)

Its universal approximation capability is guaranteed by the continuous activation function g(·).
In general, the training process of the extreme learning machine includes two stages. The first stage is to give a continuous activation function g(·), the input weights a from the input layer to the hidden layer, and the bias b of the hidden layer; the most commonly used activation functions are the Gaussian, sigmoid and Tanh activation functions. The second stage is to solve for the output weights β from the hidden layer to the output layer. In ELM theory, the objective function is usually a second-order function with the expression:

    Γ_ELM = (1/2)||β||² + (C/2)||Y - gβ||²    (5)

where g is the activation (hidden-layer output) matrix. In equation (5), the first term is the regularization term, whose main effect is to prevent overfitting; the second term is the error function term; and C is called the penalty coefficient. The output weight can be obtained by differentiating the objective function (5) to find its minimum. Differentiating equation (5) gives:

    ∇Γ_ELM = β - C g^T (Y - gβ) = 0    (6)

There are two cases for the output weights: when the number of training samples is larger than the number of hidden-layer neuron nodes, i.e. the hidden-layer matrix has full column rank, β = (I_{n_h}/C + g^T g)^{-1} g^T Y; when the number of training samples is less than the number of hidden-layer neuron nodes, i.e. the hidden-layer matrix has full row rank, β = g^T (I_N/C + g g^T)^{-1} Y.
3.CNN-LapsELM
The idea of the aluminum electrolysis superheat degree identification method based on convolutional neural network feature fusion and the semi-supervised Laplacian extreme learning machine is to extract the depth features of the image with the CNN, fuse them with the other apparent features, and then classify the superheat degree with LapsELM. By substituting the Laplacian regularization constraint into the objective function, equation (5) can be rewritten in matrix form:

    Γ = (1/2)||β||² + (1/2)||C̃^{1/2}(Ỹ - gβ)||² + (λ/2) tr(β^T g^T L g β)    (7)

where Ỹ is an enhanced training target whose first l rows equal Y_l and whose remaining rows are set to 0; C̃ is an (l+u)×(l+u) diagonal matrix whose diagonal elements in the first l rows are [C̃]_ii = C_i and whose remaining diagonal elements are 0. By setting the gradient of equation (7) to 0, the output weight of the CNN-LapsELM model can be calculated:

    β = (I_{n_h} + g^T C̃ g + λ g^T L g)^{-1} g^T C̃ Ỹ    (8)

Here g is an (l+u)×n_h matrix; when the number of labeled samples is less than the number of hidden-layer nodes, the output weight expression of the CNN-LapsELM model is:

    β = g^T (I_{l+u} + C̃ g g^T + λ L g g^T)^{-1} C̃ Ỹ    (9)

where I_{l+u} is the identity matrix of dimension (l+u)×(l+u), and g̃ and Ỹ are enhancement matrices equal to the first l rows of the matrices g and Y respectively, with the remaining u rows set to 0. λ is called the balance parameter; when it is set to 0, the output weight expression of the CNN-LapsELM model reduces to the output weight of the traditional ELM model. The balance parameter is selected, according to the error on the validation data set, as the most suitable value from the exponential sequence {e^x | x = -7, -6, …, 2, 3}.
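A sketch of the LapsELM output-weight computation for the (l+u) > n_h case, with the enhanced target and diagonal penalty built as described above; all sizes are illustrative, and the final check only verifies the stated λ = 0 reduction to the ordinary regularized ELM:

```python
import numpy as np

rng = np.random.default_rng(1)

def lapselm_beta(G, Y_l, L, C=10.0, lam=0.1):
    """Semi-supervised LapsELM output weights, (l+u) > n_hidden case:
    beta = (I + G^T C~ G + lam * G^T L G)^-1 G^T C~ Y~."""
    n, nh = G.shape
    l = Y_l.shape[0]
    Y_aug = np.zeros((n, Y_l.shape[1]))
    Y_aug[:l] = Y_l                        # unlabeled rows stay 0
    Ct = np.zeros((n, n))
    Ct[np.arange(l), np.arange(l)] = C     # penalty only on labeled rows
    A = np.eye(nh) + G.T @ Ct @ G + lam * G.T @ L @ G
    return np.linalg.solve(A, G.T @ Ct @ Y_aug)

# sanity check: with lam = 0, LapsELM reduces to the supervised ELM
G = rng.standard_normal((8, 4))            # 5 labeled + 3 unlabeled samples
Y_l = rng.standard_normal((5, 1))
L = np.eye(8)                              # placeholder graph Laplacian
beta_lap = lapselm_beta(G, Y_l, L, lam=0.0)
Gl = G[:5]
beta_elm = np.linalg.solve(np.eye(4) + 10.0 * Gl.T @ Gl, 10.0 * Gl.T @ Y_l)
assert np.allclose(beta_lap, beta_elm)
```

With λ > 0 and a real Laplacian built from the fused features, the extra term pulls predictions on similar (labeled or unlabeled) images toward each other, which is how the unlabeled fire eye images contribute.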
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Although the invention has been described above with reference to various embodiments, it should be understood that many changes and modifications may be made without departing from the scope of the invention. It is therefore intended that the foregoing detailed description be regarded as illustrative rather than limiting, and that it be understood that it is the following claims, including all equivalents, that are intended to define the spirit and scope of this invention. The above examples are to be construed as merely illustrative and not limitative of the remainder of the disclosure. After reading the description of the invention, the skilled person can make various changes or modifications to the invention, and these equivalent changes and modifications also fall into the scope of the invention defined by the claims.
Claims (10)
1. An aluminum electrolysis superheat degree identification method based on CNN feature fusion and semi-supervised Laplace extreme learning machine is characterized by comprising the following steps:
step 1, collecting real-time production data of aluminum electrolysis, and carrying out normalization processing and standardization processing on the collected data;
step 2, extracting the depth characteristics of the fire eye image in the aluminum electrolysis industrial process by using a Convolutional Neural Network (CNN);
step 3, fusing the depth features of the fire hole image extracted in the step 2 with other features of the fire hole image;
and step 4, using a semi-supervised extreme learning machine constructed with Laplacian regularization (LapsELM) as a classifier to judge the current superheat state of the electrolytic cell from the fire eye image.
2. The method of claim 1, wherein the other features comprise: color features, grayscale features, texture features.
3. The method of claim 1, wherein the method using Laplacian regularization further comprises: given a labeled data set X_l = {(x_i, y_i)}_{i=1}^{l} and an unlabeled data set X_u = {x_i}_{i=l+1}^{l+u}, obtaining a target regularization function:

    Γ_reg = (1/2) Σ_{i,j} u_ij ||f(x_i) - f(x_j)||²    (1)

wherein u_ij represents the pairwise similarity between two samples x_i and x_j; u_ij is calculated with the Gaussian kernel, u_ij = exp(-||x_i - x_j||² / (2σ²)), or fixed to 1;

equation (1) is further converted to the following matrix form:

    Γ_reg = tr(F^T L F)    (2)
4. The method of claim 1, wherein the extreme learning machine further comprises:

the mapping function:

    f(x) = Σ_{i=1}^{n_h} β_i g(a_i · x + b_i)    (3)

and the constraint equation:

    gβ = Y    (4)

wherein g(·) is a continuous activation function, a is the input weight from the input layer to the hidden layer, b is the bias of the hidden layer, and β is the output weight from the hidden layer to the output layer.
5. The method of claim 4, wherein the activation function is one of a Gaussian activation function, a sigmoid activation function, or a Tanh activation function.
6. The method of claim 4, wherein the objective function is a second-order function expressed as:

    Γ_ELM = (1/2)||β||² + (C/2)||Y - gβ||²    (5)

wherein g is the activation function matrix; in formula (5), the first term is a regularization term for preventing overfitting, the second term is an error function term, and C is called the penalty coefficient; the output weight is obtained by differentiating the objective function (5) and finding its minimum;

differentiating formula (5) gives:

    ∇Γ_ELM = β - C g^T (Y - gβ) = 0    (6)

there are two cases for the output weight: when the number of training samples is larger than the number of hidden-layer neuron nodes, i.e. the hidden-layer matrix has full column rank, β = (I_{n_h}/C + g^T g)^{-1} g^T Y; when the number of training samples is less than the number of hidden-layer neuron nodes, i.e. the hidden-layer matrix has full row rank, β = g^T (I_N/C + g g^T)^{-1} Y.
7. The method of claim 1, wherein the degree of superheat is classified using LapseLM by adding the Laplacian regularization constraint to the objective function, equation (5) being rewritten in matrix form:

$$\Gamma = \frac{1}{2}\|\beta\|^2 + \frac{1}{2}(\tilde{Y} - g\beta)^{T} \tilde{C} (\tilde{Y} - g\beta) + \frac{\lambda}{2} \mathrm{Tr}\!\left(\beta^{T} g^{T} L g \beta\right) \qquad (7)$$

wherein $\tilde{Y}$ is the enhanced training target, whose first $l$ rows equal $Y_l$ and whose remaining rows are set to 0; $\tilde{C}$ is an $(l+u) \times (l+u)$ diagonal matrix whose first $l$ diagonal elements are $[\tilde{C}]_{ii} = C_i$ and whose remaining diagonal elements are 0; by setting the gradient of equation (7) to 0, the output weight of the CNN-LapseLM model is calculated as

$$\beta = \left(I_{n_h} + g^{T}\tilde{C}g + \lambda\, g^{T} L g\right)^{-1} g^{T}\tilde{C}\tilde{Y};$$

$g$ is an $n \times n_h$ matrix with $n = l + u$; when the number of labeled sample data is less than the number of hidden layer nodes, the output weight expression of the CNN-LapseLM model is:

$$\beta = g^{T}\left(I_{l+u} + \tilde{C}\, g g^{T} + \lambda L\, g g^{T}\right)^{-1} \tilde{C}\tilde{Y} \qquad (8)$$

wherein $I_{l+u}$ is an identity matrix of dimension $(l+u) \times (l+u)$, and $\tilde{C}$ and $\tilde{Y}$ are the enhancement matrices, equal to the $C$ and $Y$ matrices in their first $l$ rows with the remaining $u$ rows set to 0; $\lambda$ is a balance parameter: when it is set to 0, the output weight expression of the CNN-LapseLM model reduces to the output weight of the traditional ELM model; the balance parameter is selected from the exponential sequence $\{e^{x} \mid x = -7, -6, \ldots, 2, 3\}$, the most suitable value being chosen according to the error on a validation data set.
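The claim-7 output weight (labeled samples fewer than hidden nodes) can be sketched as below. This is a minimal sketch under the assumption that the claim's formula matches the standard semi-supervised ELM solution; all dimensions, the per-sample penalty `Ci`, and the balance parameter value are illustrative placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)
l, u, d, nh = 8, 20, 4, 30            # labeled / unlabeled samples, inputs, hidden nodes
n = l + u
lam, Ci = np.e ** -3, 1.0             # balance parameter (from the e^x grid) and penalty

X = rng.normal(size=(n, d))           # labeled rows first, then unlabeled rows
A, b = rng.normal(size=(d, nh)), rng.normal(size=nh)
G = np.tanh(X @ A + b)                # hidden output matrix g, n x nh

# Graph Laplacian L = D - U from Gaussian similarities over ALL n samples
sq = ((X[:, None] - X[None]) ** 2).sum(-1)
U = np.exp(-sq / 2)
L = np.diag(U.sum(1)) - U

# Enhanced target: first l rows carry one-hot labels, remaining u rows are 0
Yl = np.eye(3)[rng.integers(0, 3, l)]
Yt = np.vstack([Yl, np.zeros((u, 3))])
Ct = np.diag([Ci] * l + [0.0] * u)    # penalty only on labeled rows

# beta = g^T (I_{l+u} + Ct g g^T + lam L g g^T)^{-1} Ct Yt
beta = G.T @ np.linalg.solve(np.eye(n) + Ct @ G @ G.T + lam * L @ G @ G.T, Ct @ Yt)
scores = G @ beta                     # class scores for every sample, labeled or not
```

With λ = 0 the Laplacian term drops out and the solve reduces to the ordinary ELM ridge solution on the labeled rows.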
8. The method of claim 1, wherein step 1 further comprises: preprocessing the fire hole image, removing noise points from the original fire hole image by using an adaptive mean filtering algorithm, and extracting the edge of the fire hole from the whole image.
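The patent does not give the filter's details, so the following is a hypothetical sketch of one common adaptive local mean filter (pixel pulled toward its window mean in proportion to how much of the local variance is attributable to an assumed noise variance); the function name, window size, and `noise_var` are all illustrative, not the claimed algorithm:

```python
import numpy as np

def adaptive_mean_filter(img, k=3, noise_var=25.0):
    # Hypothetical adaptive mean filter: where the local variance is close to
    # the assumed noise variance the pixel is replaced by the local mean
    # (flat, noisy region); where local variance is large (an edge) the
    # original pixel is mostly kept, preserving the fire hole contour.
    pad = k // 2
    padded = np.pad(np.asarray(img, dtype=float), pad, mode="edge")
    out = np.empty(img.shape, dtype=float)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            win = padded[i:i + k, j:j + k]
            local_mean, local_var = win.mean(), win.var()
            ratio = min(1.0, noise_var / (local_var + 1e-9))
            out[i, j] = img[i, j] - ratio * (img[i, j] - local_mean)
    return out
```

A constant region passes through unchanged, while isolated noise pixels are averaged away, which is the behavior the edge-extraction step that follows relies on.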
9. The method of claim 8, wherein the preprocessing of the fire hole image further comprises: dividing the fire hole image into a plurality of non-overlapping sub-regions and calculating three apparent features of the image: texture features, color features and gray level features, wherein the texture features comprise the entropy and energy of the image; the color features mainly comprise the average value, the standard deviation and the pixel peak value of the color histogram of the fire hole image; and the gray features comprise the average gray value of the fire hole image, the corner feature and the histogram gray peak value.
10. The method of any one of claims 1-9, wherein step 3 further comprises: extracting depth features by using the CNN, extracting color features, gray features and texture features from the fire hole image, classifying the superheat degree of the fire hole image by using LapseLM, and forming a fusion feature matrix from the fused features:

$$M_{fusion} = [M_{deep}, M_{texture}, M_{color}, M_{gray}]$$

wherein $M_{deep}$ represents the depth features extracted by the CNN, $M_{texture}$ represents the texture feature matrix, $M_{color}$ represents the color feature matrix, and $M_{gray}$ represents the gray feature matrix extracted from the fire hole image.
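The fusion in claim 10 is a row-wise concatenation of the per-image feature blocks. In this sketch the block widths (a 128-dimensional CNN feature, plus the handcrafted features named in claim 9) are illustrative placeholders; only the concatenation itself is what the claim specifies:

```python
import numpy as np

n = 10                                   # number of fire hole images
M_deep = np.random.rand(n, 128)          # CNN depth features (placeholder width)
M_texture = np.random.rand(n, 2)         # entropy, energy
M_color = np.random.rand(n, 3)           # histogram mean, std, pixel peak
M_gray = np.random.rand(n, 3)            # mean gray, corner feature, gray peak

# M_fusion = [M_deep, M_texture, M_color, M_gray]: one fused row per image
M_fusion = np.hstack([M_deep, M_texture, M_color, M_gray])
```

Each row of `M_fusion` is then one sample fed to the LapseLM classifier, so the deep and handcrafted features are weighted jointly by the learned output weights.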
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910902794.3A CN110675382A (en) | 2019-09-24 | 2019-09-24 | Aluminum electrolysis superheat degree identification method based on CNN-LapseLM |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110675382A (en) | 2020-01-10 |
Family
ID=69077390
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910902794.3A Pending CN110675382A (en) | 2019-09-24 | 2019-09-24 | Aluminum electrolysis superheat degree identification method based on CNN-LapseLM |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110675382A (en) |
Cited By (4)

Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---|
CN111429005A (en) * | 2020-03-24 | 2020-07-17 | Huainan Normal University | Teaching assessment method based on feedback from a small number of students |
CN111429005B (en) * | 2020-03-24 | 2023-06-02 | Huainan Normal University | Teaching assessment method based on feedback from a small number of students |
CN114959797A (en) * | 2022-07-04 | 2022-08-30 | Guangdong Polytechnic Normal University | Aluminum electrolysis cell condition diagnosis method based on data amplification and SSKELM |
CN118015338A (en) * | 2024-01-12 | 2024-05-10 | Central South University | Physical-knowledge-embedded aluminum electrolysis superheat degree identification method and system |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100246980A1 (en) * | 2009-03-31 | 2010-09-30 | General Electric Company | System and method for automatic landmark labeling with minimal supervision |
CN106815576A (en) * | 2017-01-20 | 2017-06-09 | Ocean University of China | Target tracking method based on spatio-temporal confidence map and semi-supervised extreme learning machine
CN107423762A (en) * | 2017-07-26 | 2017-12-01 | Jiangnan University | Semi-supervised fingerprint localization algorithm based on manifold regularization
CN109598709A (en) * | 2018-11-29 | 2019-04-09 | Northeastern University | Breast auxiliary diagnosis system and method based on fused depth features
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110070141B (en) | Network intrusion detection method | |
CN106203523B (en) | Hyperspectral image classification method based on gradient-boosted decision trees and semi-supervised algorithm fusion | |
CN104504366A (en) | System and method for smiling face recognition based on optical flow features | |
CN108629326A (en) | Action behavior recognition method and device for a target body | |
CN110675382A (en) | Aluminum electrolysis superheat degree identification method based on CNN-LapseLM | |
JP6897749B2 (en) | Learning methods, learning systems, and learning programs | |
CN112200121A (en) | Hyperspectral unknown target detection method based on EVM and deep learning | |
Carrara et al. | On the robustness to adversarial examples of neural ode image classifiers | |
CN109344720B (en) | Emotional state detection method based on self-adaptive feature selection | |
CN113837266A (en) | Software defect prediction method based on feature extraction and Stacking ensemble learning | |
CN112465821A (en) | Multi-scale pest image detection method based on boundary key point perception | |
CN113313678A (en) | Automatic sperm morphology analysis method based on multi-scale feature fusion | |
CN117772641A (en) | Material enhancement feature recognition picking method of color selector | |
CN117253192A (en) | Intelligent system and method for silkworm breeding | |
CN115953584B (en) | End-to-end target detection method and system with learning sparsity | |
CN115588124B (en) | Fine granularity classification denoising training method based on soft label cross entropy tracking | |
CN116935125A (en) | Noise data set target detection method realized through weak supervision | |
CN114495220A (en) | Target identity recognition method, device and storage medium | |
CN111860441B (en) | Video target identification method based on unbiased depth migration learning | |
Patel et al. | Enhanced CNN for Fruit Disease Detection and Grading Classification Using SSDAE-SVM for Postharvest Fruits | |
Happold | Structured forest edge detectors for improved eyelid and Iris segmentation | |
CN111723719A (en) | Video target detection method, system and device based on category external memory | |
Jayashree et al. | Plant Leaf Disease Detection Using Resnet-50 Based on Deep Learning | |
CN111191575A (en) | Naked flame detection method and system based on flame jumping modeling | |
Paterega et al. | Imbalanced data: a comparative analysis of classification enhancements using augmented data |
Legal Events

Date | Code | Title | Description
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20200110 |