CN113570129A - Method for predicting strip steel pickling concentration and computer readable storage medium - Google Patents


Publication number
CN113570129A
Authority
CN
China
Legal status
Pending
Application number
CN202110817426.6A
Other languages
Chinese (zh)
Inventor
黎友华
陈建良
刘鑫
何可
杨辉
刘毅敏
高炎
杨永立
罗万钊
钟实
Current Assignee
Wuhan Iron and Steel Co Ltd
Original Assignee
Wuhan Iron and Steel Co Ltd
Application filed by Wuhan Iron and Steel Co Ltd
Priority to CN202110817426.6A
Publication of CN113570129A

Classifications

    • G06Q10/04 Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • G06N3/044 Recurrent networks, e.g. Hopfield networks
    • G06N3/045 Combinations of networks
    • G06N3/048 Activation functions
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biomedical Technology (AREA)
  • Business, Economics & Management (AREA)
  • Human Resources & Organizations (AREA)
  • Economics (AREA)
  • Strategic Management (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Game Theory and Decision Science (AREA)
  • Development Economics (AREA)
  • Marketing (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Feedback Control In General (AREA)

Abstract

The invention discloses a method for predicting strip steel pickling concentration and a computer readable storage medium. The method comprises: acquiring data to be analyzed of the cold-rolled strip steel pickling solution at the current moment, the data to be analyzed comprising solution temperature, solution density and solution conductivity; and calling a pickling concentration prediction model to perform concentration prediction on the data to be analyzed, obtaining a predicted pickling concentration for the cold-rolled strip steel pickling solution. The method and device address the poor measurement performance and low accuracy of existing acid concentration measurement approaches.

Description

Method for predicting strip steel pickling concentration and computer readable storage medium
Technical Field
The invention relates to the technical field of acid concentration prediction, in particular to a method for predicting strip steel pickling concentration and a computer readable storage medium.
Background
The steel industry is a pillar of the national economy, but its rapid growth and the push toward industrialization have brought problems such as high energy consumption, excess capacity and an unbalanced industrial structure. As technology advances, new products are developed and practical demands grow, the market also places ever higher requirements on the surface quality of steel; in production, pickling is used to remove scale and rust from the steel surface. To ensure a good pickling effect, the acid concentration must be kept within a tolerance range throughout the pickling process.
Currently, acid concentration is measured in two main ways: online and offline. In practice, both approaches are found to suffer from poor measurement performance and limited accuracy. A better acid concentration measurement scheme is therefore needed.
Disclosure of Invention
The embodiments of the present application address the poor measurement performance, low accuracy and related problems of prior-art acid concentration measurement by providing a scheme for predicting the strip steel pickling concentration.
In one aspect, the present application provides a method for predicting a strip steel pickling concentration, including:
acquiring data to be analyzed of the cold-rolled strip steel pickling solution at the current moment, wherein the data to be analyzed comprises solution temperature, solution density and solution conductivity;
calling a pickling concentration prediction model to carry out concentration prediction on the data to be analyzed to obtain a pickling concentration prediction value of the cold-rolled strip steel pickling solution;
the acid washing concentration prediction model is a model which is trained on a neural network model to be trained, including an elastic network regularization term and a gated circulation unit GRU network in advance.
Optionally, before the invoking of the acid washing concentration prediction model to perform the concentration prediction on the data to be analyzed, the method further includes:
acquiring historical data of the cold-rolled strip steel pickling solution at n different moments, wherein n is a positive integer;
and training the neural network model to be trained, which includes an elastic net regularization term and a gated recurrent unit (GRU) network, with the historical data to obtain the pickling concentration prediction model.
Optionally, the gated recurrent unit (GRU) network is deployed in a hidden layer of the neural network model to be trained, and the GRU network is configured to:
calculate first memory information of the hidden layer at time t from the historical data at time t, the memory retention information output by the GRU network at time (t−1) and a preset first weight parameter, where t is a positive integer not exceeding n;
calculate second memory information of the hidden layer at time t from the historical data at time t, the memory retention information output by the GRU network at time (t−1) and a preset second weight parameter;
calculate current memory information of the hidden layer at time t from the second memory information, the memory retention information output by the GRU network at time (t−1) and a preset third weight parameter; and
calculate the memory retention information output by the GRU network at time t from the first memory information, the current memory information and the memory retention information output by the GRU network at time (t−1).
Optionally, the first memory information at time t is:
z_t = σ(W_z · [x_t, h_{t−1}])
where z_t is the first memory information at time t, x_t is the historical data at time t, h_{t−1} is the memory retention information output by the GRU network at time (t−1), W_z is the first weight parameter, and σ is an activation function.
Optionally, the second memory information at time t is:
r_t = σ(W_r · [x_t, h_{t−1}])
where r_t is the second memory information at time t, x_t is the historical data at time t, h_{t−1} is the memory retention information output by the GRU network at time (t−1), W_r is the second weight parameter, and σ is an activation function.
Optionally, the current memory information at time t is:
h̃_t = tanh(W · [x_t, r_t ⊙ h_{t−1}])
where h̃_t is the current memory information at time t, r_t is the second memory information at time t, x_t is the historical data at time t, h_{t−1} is the memory retention information output by the GRU network at time (t−1), W is the third weight parameter, ⊙ denotes elementwise multiplication, and tanh is the hyperbolic tangent activation function.
Optionally, the memory retention information output by the GRU network at time t is:
h_t = (1 − z_t) ⊙ h_{t−1} + z_t ⊙ h̃_t
where h_t is the memory retention information output by the GRU network at time t, z_t is the first memory information at time t, h_{t−1} is the memory retention information at time (t−1) output by the GRU network, and h̃_t is the current memory information at time t.
Optionally, the neural network model to be trained further includes an output layer configured to:
calculate the predicted pickling concentration at time t output by the neural network model to be trained from the memory retention information at time t output by the hidden layer.
Optionally, the predicted pickling concentration at time t is:
y_t = σ(w_o · h_t)
where y_t is the predicted pickling concentration at time t output by the neural network to be trained, w_o is a preset fourth weight parameter, and h_t is the memory retention information at time t output by the hidden layer.
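As an illustrative sketch of this output-layer computation in NumPy (the four-unit hidden state and all numeric values are invented for illustration, not taken from the patent):

```python
import numpy as np

def sigmoid(x):
    """Sigmoid activation σ(x) = 1 / (1 + e^(−x))."""
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical sizes: a 4-unit hidden state and a single output neuron.
h_t = np.array([0.2, -0.1, 0.5, 0.3])   # memory retention info from the hidden layer
w_o = np.array([0.4, 0.1, -0.2, 0.3])   # "fourth weight parameter" (assumed shape)

# y_t = σ(w_o · h_t): the fully connected layer reduces h_t to one prediction.
y_t = sigmoid(w_o @ h_t)
```

Because σ maps into (0, 1), this form implicitly assumes the target concentration is normalized into that interval.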
Optionally, the elastic net regularization term is used to regularize the weight parameters in the neural network model to be trained through an objective optimization function:
J = (1/m) Σ_{j=1}^{m} (y_{tj} − ŷ_{tj})² + λ_1‖w_1‖_1 + λ_2‖w_2‖_2²
where y_{tj} is the real pickling concentration at time t, ŷ_{tj} is the predicted pickling concentration at time t, m is the number of GRUs in the neural network model to be trained, λ_1 and λ_2 are preset regularization adjustment parameters, and w_1 and w_2 are weight parameters in the neural network model to be trained.
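Assuming the standard elastic-net form (squared prediction error plus an L1 and an L2 penalty, which matches the symbols defined here but is a reconstruction, not the patent's verbatim formula), the objective can be sketched as:

```python
import numpy as np

def elastic_net_objective(y_true, y_pred, w1, w2, lam1=0.01, lam2=0.01):
    """Squared prediction error plus L1 and L2 penalties on the weights.

    The exact grouping of w1/w2 is not spelled out in the text; here the L1
    term is applied to w1 and the L2 term to w2, both flattened (an assumption).
    """
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    m = len(y_true)
    mse = np.sum((y_true - y_pred) ** 2) / m        # data-fit term
    l1 = lam1 * np.sum(np.abs(w1))                  # lasso-style penalty
    l2 = lam2 * np.sum(np.asarray(w2) ** 2)        # ridge-style penalty
    return mse + l1 + l2
```

Combining both penalties is what distinguishes the elastic net from pure L1 or L2 regularization: the L1 part encourages sparse weights while the L2 part keeps the solution stable when inputs are correlated.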
Optionally, the pickling concentration prediction model is obtained by updating the weight parameters in the neural network model to be trained with a loss function:
Loss_MAE = (1/n) Σ_{i=1}^{n} |y_i − y_{real,i}|
where Loss_MAE is the loss function, y_i is the predicted pickling concentration output by the neural network model to be trained, and y_{real,i} is the real pickling concentration in the historical data.
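The MAE loss itself is straightforward; a minimal sketch:

```python
import numpy as np

def mae_loss(y_pred, y_real):
    """Mean absolute error between predicted and measured concentrations."""
    y_pred = np.asarray(y_pred, dtype=float)
    y_real = np.asarray(y_real, dtype=float)
    return np.mean(np.abs(y_pred - y_real))
```

Compared with squared-error losses, MAE penalizes each error linearly, so a few outlying concentration readings do not dominate the gradient.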
Optionally, the obtaining the historical data of the cold-rolled steel strip pickling solution at n different times includes:
acquiring initial data of the cold-rolled strip steel pickling solution at n different moments;
and carrying out normalization processing on the n initial data at different moments to obtain n historical data at different moments.
Optionally, the historical data is:
X_t,norm = (X_t − X_min) / (X_max − X_min)
where X_t,norm is the historical data at time t, X_t is the initial data at time t, X_max is the largest of the initial data at the n moments, and X_min is the smallest of the initial data at the n moments.
In another aspect, the present application provides a device for predicting strip steel pickling concentration through an embodiment of the present application, where the device includes an obtaining module and a predicting module, where:
the acquisition module is used for acquiring data to be analyzed of the cold-rolled strip steel pickling solution at the current moment, wherein the data to be analyzed comprises solution temperature, solution density and solution conductivity;
the prediction module is used for calling a pickling concentration prediction model to carry out concentration prediction on the data to be analyzed to obtain a pickling concentration prediction value of the cold-rolled strip steel pickling solution;
the acid washing concentration prediction model is a model which is trained on a neural network model to be trained, including an elastic network regularization term and a gated circulation unit GRU network in advance.
In another aspect, the present application provides, through an embodiment, a terminal device including a processor, a memory, a communication interface and a bus; the processor, the memory and the communication interface are connected through the bus and communicate with each other; the memory stores executable program code; and the processor, by reading the executable program code stored in the memory, runs a program corresponding to that code so as to perform the method for predicting strip steel pickling concentration provided above.
In another aspect, the present application provides a computer-readable storage medium through an embodiment of the present application, including computer instructions, which, when executed on a terminal device, cause the terminal device to perform a method for predicting a strip pickling concentration as provided above.
One or more technical solutions provided in the embodiments of the present application have at least the following technical effects or advantages: the data to be analyzed of the cold-rolled strip steel pickling solution at the current moment are acquired, and a pickling concentration prediction model is then called directly to predict the concentration from these data, yielding a predicted pickling concentration for the solution. Because the prediction model is a neural network model with high prediction accuracy, the operational steps of concentration prediction are reduced while accuracy is preserved, the performance and convenience of acid concentration prediction are improved, and automatic, online and convenient prediction of the acid concentration is realized.
Drawings
To illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below show some embodiments of the invention; those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic flow chart of a method for predicting the pickling concentration of strip steel provided by an embodiment of the application.
Fig. 2 is a schematic structural diagram of a pickling concentration prediction model (or a neural network model to be trained) provided in an embodiment of the present application.
Fig. 3 is a schematic diagram of the internal computing structure of a gated recurrent unit (GRU) network according to an embodiment of the present application.
Fig. 4 is a schematic diagram of an acid concentration prediction effect provided in an embodiment of the present application.
Fig. 5 is a schematic diagram of an acid concentration prediction error provided in an embodiment of the present application.
FIG. 6 is a schematic structural diagram of a device for predicting strip steel pickling concentration according to an embodiment of the present application.
Fig. 7 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
Detailed Description
The applicant has also found in the course of the present application that acid concentration is currently measured mainly online or offline. Offline measurement requires manual sampling and titration to determine the acid concentration, which is then used to control it. This offline method lags in both measurement and control, i.e. its measurement performance is poor; it greatly hinders the automation of a steel plant and cannot guarantee the pickling quality of the strip steel.
Online measurement can be divided into direct measurement and soft measurement. Direct measurement requires costly equipment and later maintenance, which works against large-scale use in a steel plant. Soft measurement, by contrast, needs only relevant data and computing techniques to predict the acid concentration: process parameters are monitored during pickling, and a statistical prediction model between these parameters and the acid concentration is established to predict the concentration.
The embodiment of the application provides a method for predicting the pickling concentration of the strip steel, and solves the technical problems of poor acid concentration measurement performance, low precision and the like in the prior art.
To solve these technical problems, the general idea of the embodiments of the present application is as follows: acquire data to be analyzed of the cold-rolled strip steel pickling solution at the current moment, the data comprising solution temperature, solution density and solution conductivity; call a pickling concentration prediction model to perform concentration prediction on the data to be analyzed, obtaining a predicted pickling concentration for the cold-rolled strip steel pickling solution; wherein the pickling concentration prediction model is obtained by pre-training a neural network model to be trained that includes an elastic net regularization term and a gated recurrent unit (GRU) network.
In order to better understand the technical solution, the technical solution will be described in detail with reference to the drawings and the specific embodiments.
First, it should be stated that the term "and/or" herein merely describes an association between objects and indicates that three relationships may exist; for example, A and/or B may mean: A alone, both A and B, or B alone. In addition, the character "/" herein generally indicates an "or" relationship between the preceding and following objects.
Fig. 1 is a schematic flow chart of a method for predicting the pickling concentration of strip steel according to an embodiment of the present disclosure. The method shown in Fig. 1 comprises the following steps:
s101, obtaining data to be analyzed of the cold-rolled strip steel pickling solution at the current moment, wherein the data to be analyzed comprise solution temperature, solution density and solution conductivity.
The method can obtain corresponding data to be analyzed in the cold-rolled steel strip pickling solution, wherein the data to be analyzed comprises but is not limited to the solution temperature T, the solution density D and the solution conductivity C of the cold-rolled steel strip pickling solution, or other parameter data and the like.
Optionally, the data to be analyzed is solution data after normalization, that is, the solution data of the cold-rolled strip steel pickling solution at the current moment can be obtained first, and then normalization processing is performed on the solution data, so that the normalized data to be analyzed is obtained. The data normalization process will be described in detail below, and will not be described here.
S102, calling a pickling concentration prediction model to carry out concentration prediction on the data to be analyzed to obtain a pickling concentration prediction value of the cold-rolled strip steel pickling solution. The acid washing concentration prediction model is a model which is trained on a neural network model to be trained, including an elastic network regularization term and a gated circulation unit GRU network.
In an embodiment, historical data of the cold-rolled strip steel pickling solution at n different moments can be acquired, and a neural network to be trained, which includes an elastic net regularization term and a gated recurrent unit (GRU) network, is then trained and optimized with the historical data at the n different moments to obtain the pickling concentration prediction model.
In one embodiment, the present application may first obtain initial data (also referred to as source data) of the cold-rolled strip steel pickling solution at n different moments, the initial data including, but not limited to, parameters such as solution temperature T, solution density D and conductivity C detected by on-site sensors; for example, the solution temperature is detected by a temperature sensor. The acquired initial data at the n different moments can then be normalized to obtain historical data at n different moments. Taking the initial data at the n-th moment as an example, it can be expressed as X_n = [T_n, D_n, C_n], where T_n is the solution temperature, D_n the solution density and C_n the solution conductivity at that moment.
Specifically, the model is built on the Keras library, which provides a standard GRU network for neural network model training. Because the model expects its input in a normalized format, the data must be standardized before training; the aim is to map the data range into the interval [0, 1]. In the present application, the data are normalized by min–max scaling (MinMaxScaler), calculated as shown in formula (1):
X_t,norm = (X_t − X_min) / (X_max − X_min)    (1)
where X is the input sequence, X_t is the initial data at time t, X_max is the largest initial data among the n moments (i.e. the maximum of the data set), X_min is the smallest initial data among the n moments (i.e. the minimum of the data set), and X_t,norm is the historical data at time t (i.e. the normalized historical data).
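As a minimal sketch of formula (1) in pure NumPy (the three example readings are invented; libraries such as scikit-learn provide an equivalent MinMaxScaler):

```python
import numpy as np

def min_max_normalize(X):
    """Map each column of X (e.g. temperature, density, conductivity) into [0, 1]."""
    X = np.asarray(X, dtype=float)
    x_min = X.min(axis=0)          # column-wise minimum over the n moments
    x_max = X.max(axis=0)          # column-wise maximum over the n moments
    return (X - x_min) / (x_max - x_min)

# Hypothetical readings at three moments: [temperature, density, conductivity]
raw = np.array([[60.0, 1.10, 700.0],
                [70.0, 1.15, 750.0],
                [80.0, 1.20, 800.0]])
norm = min_max_normalize(raw)
```

Normalizing per column matters here because temperature, density and conductivity live on very different scales; without it, the largest-magnitude feature would dominate training.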
In an optional embodiment, the normalized historical data at the n different moments serve as the sample set for model construction. The sample set is then divided into a training set and a test set according to a preset ratio; for example, all historical data are split 9:1 into training and test sets. The historical data in the training set are used to train the model, while those in the test set are used to check what the model has learned and to evaluate, from the prediction results, whether the model parameters need further iterative correction.
In an embodiment, Fig. 2 shows the internal structure of the model designed in the present application for building or training the neural network model to be trained. The model shown in Fig. 2 may specifically be the neural network model to be trained or the pickling concentration prediction model; it comprises three levels: an Input Layer 201, a Hidden Layer 202 and an Output Layer 203. Wherein:
the input layer 201 controls the data format of the input model, and the hidden layer 202 includes a GRU network, in which a plurality of GRU unit structures are deployed, 3 of which are shown as examples, but are not limited thereto, and which can be specifically adjusted and set according to actual requirements. The hidden layer 202 is the core of the overall prediction structure. The output layer 203 is a fully connected layer to obtain a predicted value for the next state or time.
The input layer 201 of the model receives sensor values acquired during historical production, such as solution temperature T, solution density D and solution conductivity C. After being indexed by time step, these data form the sample set X for model learning, while the acid concentration H (e.g. of hydrochloric or sulfuric acid) serves as the target value of the whole GRU prediction network.
The GRU network in the hidden layer 202 learns from the sample set of the input layer: from the current input X_i = [T_i, D_i, C_i] and the inputs of its previous N states X_{i−N}, X_{i−N+1}, …, X_{i−1}, it predicts the acid concentration of the next state (or states) Ĥ_{i+1}. The mean absolute error (MAE) between the predicted acid concentration Ĥ_{i+1} and the collected actual acid concentration H_{i+1} then serves as the loss function Loss_MAE during learning. The adaptive moment estimation (Adam) algorithm optimizes the model parameters in each iteration until the loss function converges. During parameter optimization, an elastic net regularization term (also referred to simply as an elastic net) may additionally be deployed in the hidden layer 202; that is, the elastic net is introduced on top of the three hidden layers shown in the figure, and the weight parameters in the GRU network are regularized to improve its generalization performance. Fig. 3 shows the internal computing structure of a GRU network.
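The training setup just described can be sketched with the Keras API. This is a non-authoritative sketch: the three GRU layers, the Dense(1) output, the 128-unit layer size, the time window of 3 and the Adam/MAE combination follow the surrounding text, while approximating the elastic net term with a per-layer `l1_l2` kernel regularizer is an assumption.

```python
# Guarded import so the sketch degrades gracefully where TensorFlow is absent.
try:
    from tensorflow import keras
    from tensorflow.keras import layers, regularizers
    HAVE_TF = True
except ImportError:
    HAVE_TF = False

def build_model(window=3, n_features=3, units=128, lam1=1e-4, lam2=1e-4):
    """Three stacked GRU layers with an elastic-net-style weight penalty."""
    reg = regularizers.l1_l2(l1=lam1, l2=lam2)      # combined L1 + L2 penalty
    model = keras.Sequential([
        keras.Input(shape=(window, n_features)),     # [T, D, C] per time step
        layers.GRU(units, return_sequences=True, kernel_regularizer=reg),
        layers.GRU(units, return_sequences=True, kernel_regularizer=reg),
        layers.GRU(units, kernel_regularizer=reg),
        layers.Dense(1),                             # predicted acid concentration
    ])
    model.compile(optimizer="adam", loss="mae")      # Adam + MAE loss, as in the text
    return model
```

The `kernel_regularizer` adds the penalty to the training loss automatically, so the MAE objective and the elastic-net term are minimized jointly.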
In particular implementations, the GRU network referred to herein is a variant of the long short-term memory (LSTM) neural network. A GRU consists of an update gate z_t and a reset gate r_t. In its input–output structure, each GRU has a current input x_t and the hidden state h_{t−1} passed from the previous node (which can be understood as the memory information retained in the hidden layer at the previous moment); this hidden state carries information about the previous nodes. Combining x_t and h_{t−1}, the GRU produces the output y_t of the current hidden node and a hidden state h_t passed to the next node. With reference to Fig. 3, the GRU unit structure operates as follows:
1. At time step t (i.e. at time t), the update gate is computed first to determine which memory information from the previous hidden state to keep. In other words, the first memory information of the hidden layer at time t can be calculated through the update gate, specifically from the historical data at time t, the memory retention information output by the GRU network at time (t−1), and the preset first weight parameter, as in the following formula:
z_t = σ(W_z · [x_t, h_{t−1}])    (2)
where z_t is the first memory information at time t, x_t is the historical data at time t, h_{t−1} is the memory retention information output by the GRU network at time (t−1), W_z is the first weight parameter, and σ is the sigmoid activation function. After a linear transformation, x_t and h_{t−1} are combined and multiplied by the weight matrix (also called the first weight parameter) W_z. The update gate sums the two parts of information and passes them through the sigmoid activation function, compressing the result to between 0 and 1.
2. The reset gate r_t mainly determines which information in the hidden layer at the previous moment to forget. Specifically, the second memory information of the hidden layer at time t is calculated through the reset gate from the historical data at time t, the memory retention information output by the GRU network at time (t−1), and the preset second weight parameter, as shown in formula (3):
r_t = σ(W_r · [x_t, h_{t−1}])    (3)
where r_t is the second memory information at time t, x_t is the historical data at time t, h_{t−1} is the memory retention information output by the GRU network at time (t−1), W_r is the second weight parameter, and σ is the activation function. Here, too, x_t and h_{t−1} are linearly transformed and multiplied by the weight matrix (also called the second weight parameter) W_r. The reset gate sums the two parts of information and passes them through the sigmoid activation function to output the corresponding activation values.
3. The reset gate also stores relevant past information to determine the current memory content. Specifically, the current memory information of the hidden layer at time t can be calculated from the second memory information, the memory retention information output by the GRU network at time (t−1), and the preset third weight parameter, as shown in formula (4):
h̃_t = tanh(W · [x_t, r_t ⊙ h_{t−1}])    (4)
where h̃_t is the current memory information at time t, r_t is the second memory information at time t, x_t is the historical data at time t, h_{t−1} is the memory retention information output by the GRU network at time (t−1), W is the third weight parameter, ⊙ denotes elementwise multiplication, and tanh is the hyperbolic tangent activation function. The elementwise product of r_t and h_{t−1} is computed, combined with x_t through a linear transformation, and multiplied by the weight matrix W. Because the reset gate yields a vector of values between 0 and 1, it determines how far the gate opens: elements gated by 0 are forgotten, which decides which previous information is retained and which is forgotten. The two parts of information are then summed and passed through the hyperbolic tangent activation function.
4. The GRU also determines the information h_t retained by the hidden layer at the current moment. Specifically, the memory retention information at time t output by the GRU network may be calculated from the first memory information, the current memory information, and the memory retention information at time (t-1) output by the GRU network, as shown in the following formula (5):
h_t = (1 - z_t) ⊙ h_{t-1} + z_t ⊙ h̃_t    Formula (5)

where h_t is the memory retention information output by the GRU network at time t, z_t is the first memory information at time t, h_{t-1} is the memory retention information output by the GRU network at time (t-1), and h̃_t is the current memory information at time t. In this application, (1 - z_t) is the activation result of the update gate, which likewise controls the inflow of information in a gated manner; the product of (1 - z_t) and h_{t-1} represents the information retained in the final memory from the previous step (i.e. the previous moment), and together with the information retained from the current memory it constitutes the content information h_t output by the gated recurrent unit GRU.
It should be noted that the sigmoid activation function and the tanh activation function referred to above in the present application are defined as shown in the following equations (6) and (7), respectively:
σ(x) = 1 / (1 + e^(−x))    Formula (6)

tanh(x) = (e^x − e^(−x)) / (e^x + e^(−x))    Formula (7)
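Putting the pieces together, one forward step of the GRU cell described above (reset gate r_t, update gate z_t, current memory h̃_t, and memory retention h_t) can be sketched in NumPy. Bias terms are omitted, as in the formulas; all function and parameter names here are our own illustration, not the application's implementation:

```python
import numpy as np

def sigmoid(x):
    """Formula (6): sigma(x) = 1 / (1 + e^(-x))."""
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x_t, h_prev, W_z, W_r, W):
    """One GRU forward step.

    x_t    -- historical data at time t
    h_prev -- memory retention information h_{t-1}
    W_z, W_r, W -- the first, second, and third weight parameters,
                   each acting on the concatenation [h_{t-1}, x_t]
    """
    concat = np.concatenate([h_prev, x_t])
    z_t = sigmoid(W_z @ concat)            # first memory information (update gate)
    r_t = sigmoid(W_r @ concat)            # second memory information (reset gate)
    # current memory information, formula (4)
    h_tilde = np.tanh(W @ np.concatenate([r_t * h_prev, x_t]))
    # memory retention information, formula (5)
    return (1.0 - z_t) * h_prev + z_t * h_tilde
```

Stacking such steps over the input time window and feeding the final h_t to the output layer reproduces the hidden-layer computation described in this application.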
in practical applications, the number of GRUs (or the number of hidden layers corresponding to the GRUs) deployed in the GRU network is not limited, and fig. 2 only illustrates 3 GRUs as an example. The number of neural units per hidden layer is not limited, and may be 128, for example. In the application, the problem that the number of samples may cause overfitting is considered, so an elastic network (regularization network) is introduced on the basis of a hidden layer, regularization processing is performed on weight parameters in a GRU network, the size of a training set time window can be 3, the output dimensionality of a full connection layer is set to be Dense (1), and a target optimization function corresponding to an elastic network regularization term is shown in the following formula (8):
J = Σ_{j=1}^{m} (y_tj − ŷ_tj)² + λ1·Σ|w1| + λ2·Σ(w2)²    Formula (8)

where y_tj = (y_t1, y_t2, ..., y_tj) is the real value of the pickling concentration at time t, ŷ_tj is the predicted value of the pickling concentration at time t, m is the number of GRUs included in the neural network model to be trained (3 in the figure), λ1 and λ2 are preset regularization adjustment parameters used to balance the original loss function against the regularization terms, and w1 and w2 are weight parameters of the GRU network in the neural network model to be trained. Different combinations of the regularization adjustment parameters λ1 and λ2 yield different regularization terms: when λ1 = 0 and λ2 = 0, the model is an ordinary GRU model; when λ1 ≠ 0 and λ2 = 0, it is an L1-norm regularization network; when λ1 = 0 and λ2 ≠ 0, it is an L2-norm regularization network; and when λ1 ≠ 0 and λ2 ≠ 0, the regularization term is the elastic network regularization term.
Preferably, the present application introduces elastic network regularization, i.e., a linear combination of the L1 norm and the L2 norm is taken as the regularization term, and the regularization parameters are set to λ1 = 0.02 and λ2 = 0.01.
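As an illustration only (the function name and the use of a squared L2 term are our assumptions; the application does not publish its implementation), the elastic network regularization term with λ1 = 0.02 and λ2 = 0.01 can be computed as:

```python
import numpy as np

def elastic_net_penalty(weights, lam1=0.02, lam2=0.01):
    """Elastic network regularization term: lam1 * L1 norm + lam2 * squared L2 norm.

    lam1 = lam2 = 0  -> ordinary GRU (no penalty)
    lam2 = 0         -> L1-norm regularization network
    lam1 = 0         -> L2-norm regularization network
    """
    w = np.concatenate([np.ravel(m) for m in weights])
    return lam1 * np.sum(np.abs(w)) + lam2 * np.sum(w ** 2)
```

Adding this penalty to the original loss before each weight update is what drives small weights toward zero (the L1 part) while keeping large weights bounded (the L2 part).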
In an embodiment, the model output layer is specifically configured to calculate, according to the memory retention information at the time t output by the hidden layer, the predicted acid washing concentration value at the time t output by the neural network model to be trained. The prediction calculation involved in the output layer is shown in the following equation (9):
y_t = σ(w_o h_t)    Formula (9)
where y_t is the predicted value of the pickling concentration at time t output by the neural network to be trained, w_o is a preset fourth weight parameter, and h_t is the memory retention information at time t output by the hidden layer.
After the neural network model to be trained and its model parameters are constructed, the training set is input into the neural network model to be trained for training; after the iterations finish, the trained neural network model (the pickling concentration prediction model) is saved.
Optionally, during model training, the Adam optimization algorithm may be selected to optimize the parameters of the neural network model to be trained. Adam is a first-order optimization algorithm that can replace the traditional stochastic gradient descent procedure, iteratively updating the weight parameters of the neural network on the training-set data so that the output value of the loss function approaches its optimum. Preferably, the number of training iterations may be set to 200, or another custom value; the batch of training data selected in each iteration may be 16, or another custom value; the application is not limited in this respect.
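For reference, a single Adam parameter update can be sketched as follows. This is a generic illustration of the algorithm with its conventional default hyperparameters, not the application's code; all names are ours:

```python
import numpy as np

def adam_update(w, grad, state, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam step. `state` holds (m, v, t): first/second moment estimates and step count."""
    m, v, t = state
    t += 1
    m = beta1 * m + (1 - beta1) * grad            # biased first moment
    v = beta2 * v + (1 - beta2) * grad ** 2       # biased second moment
    m_hat = m / (1 - beta1 ** t)                  # bias correction
    v_hat = v / (1 - beta2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)   # parameter update
    return w, (m, v, t)
```

In the training procedure above, such an update would be applied once per batch of 16 samples across the 200 iterations.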
The reliability of the trained neural network model may be further verified with a test set. Specifically, an error threshold may be set, the test set is input into the trained neural network model, and the model's prediction result is evaluated by the computed mean absolute error (i.e. the loss function). If the prediction result meets the error requirement (i.e. the model has converged), the trained neural network model is used directly as the final pickling concentration prediction model; otherwise, the trained neural network model is taken as the neural network model to be trained, and training continues with the acquired training set.
Specifically, the test set is input into the trained neural network model to obtain the corresponding acid concentration result values. The prediction formula of the model output layer is shown in formula (9), and the trained neural network model is used to predict the sample points in the test set. FIG. 4 shows the hydrochloric acid concentration prediction effect. As can be seen from FIG. 4, the predicted values of the hydrochloric acid concentration fit the actual values well.
The acid concentration results output by the model are then inverse-normalized to obtain the acid concentration predicted values, which are compared with the true acid concentration values to obtain the error results. Please refer to fig. 5, which shows the absolute error of the hydrochloric acid concentration prediction. The broken lines shown in fig. 5 mark the allowable error range, within which the hydrochloric acid concentration prediction error falls.
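For concreteness, the min-max normalization applied to the input data and the inverse normalization applied to the model output can be sketched as follows. The function names are ours; the formula matches the historical-data normalization given later in this application:

```python
import numpy as np

def min_max_normalize(x):
    """Min-max normalization: X_norm = (X - X_min) / (X_max - X_min)."""
    x = np.asarray(x, dtype=float)
    x_min, x_max = x.min(), x.max()
    return (x - x_min) / (x_max - x_min), x_min, x_max

def inverse_normalize(x_norm, x_min, x_max):
    """Undo the normalization to recover values on the original concentration scale."""
    return np.asarray(x_norm) * (x_max - x_min) + x_min
```

The saved x_min and x_max from the training data are what make the model's normalized outputs convertible back into physical acid concentrations.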
Compared with the prior art, the method for predicting strip steel pickling concentration based on the regularized GRU neural network has the following beneficial effects: it realizes online real-time measurement of the acid concentration and solves the hysteresis problem of the traditional offline measurement process, with measurable variables, a controllable model, and strong real-time performance; deep data features of the variables, including nonlinear features and historical trend features, are mined through deep learning; by combining computer technology with the computing power of the GRU neural network, and taking the error-range requirement into account, the proposed model achieves high prediction accuracy; acid concentration prediction with the model likewise shows good prediction performance; and the method is of great significance for improving strip steel product quality and reducing pickling cost.
Based on the same inventive concept, another embodiment of the present application provides a device for predicting strip steel pickling concentration and a terminal device. Please refer to fig. 6, which is a schematic structural diagram of a device for predicting strip steel pickling concentration according to an embodiment of the present application. The device shown in fig. 6 includes: an acquisition module 601 and a prediction module 602, wherein:
the obtaining module 601 is configured to obtain data to be analyzed of the cold-rolled strip steel pickling solution at the current moment, where the data to be analyzed includes solution temperature, solution density, and solution conductivity;
the prediction module 602 is configured to invoke a pickling concentration prediction model to perform concentration prediction on the data to be analyzed, so as to obtain a pickling concentration prediction value of the cold-rolled strip steel pickling solution;
the acid washing concentration prediction model is a model which is trained on a neural network model to be trained, including an elastic network regularization term and a gated circulation unit GRU network in advance.
Optionally, the apparatus further comprises a training module 603, wherein,
the obtaining module 601 is further configured to obtain historical data of the cold-rolled strip steel pickling solution at n different times, where n is a positive integer;
the training module 603 is configured to train the neural network model to be trained, which includes an elastic network regularization term and a gated recurrent unit (GRU) network, using the historical data, so as to obtain the pickling concentration prediction model.
Optionally, the gated recurrent unit (GRU) network is deployed in a hidden layer of the neural network model to be trained, and the GRU network is configured to:
calculating first memory information of the hidden layer at the time t according to historical data at the time t, memory reservation information output by the GRU network at the time (t-1) and a preset first weight parameter, wherein t is a positive integer not exceeding n;
calculating second memory information of the hidden layer at the t moment according to the historical data at the t moment, the memory reservation information output by the GRU network at the (t-1) moment and a preset second weight parameter;
calculating the current memory information of the hidden layer at the time t according to the second memory information, the memory reservation information output by the GRU network at the time (t-1) and a preset third weight parameter;
and calculating the memory retention information at the t moment output by the GRU network according to the first memory information, the current memory information and the memory retention information at the (t-1) moment output by the GRU network.
Optionally, the neural network model to be trained further includes an output layer, configured to:
and calculating the predicted acid washing concentration value at the t moment output by the neural network model to be trained according to the memory retention information at the t moment output by the hidden layer.
Optionally, the predicted value of the pickling concentration at the time t is:
y_t = σ(w_o h_t)

wherein y_t is the predicted value of the pickling concentration at time t output by the neural network to be trained, w_o is a preset fourth weight parameter, and h_t is the memory retention information at time t output by the hidden layer.
Optionally, the elastic network regularization term is used to perform regularization processing on the weight parameters in the neural network model to be trained by using an objective optimization function, where the objective optimization function is:
J = Σ_{j=1}^{m} (y_tj − ŷ_tj)² + λ1·Σ|w1| + λ2·Σ(w2)²

wherein y_tj is the real value of the pickling concentration at time t, ŷ_tj is the predicted value of the pickling concentration at time t, m is the number of GRUs in the neural network model to be trained, λ1 and λ2 are preset regularization adjustment parameters, and w1 and w2 are weight parameters in the neural network model to be trained.
Optionally, the acid washing concentration prediction model is obtained by updating the weight parameters in the neural network model to be trained by using a loss function, where the loss function is:
Loss_MAE = (1/n) · Σ_{i=1}^{n} |y_i − y_real|

wherein Loss_MAE is the loss function, y_i is the predicted value of the pickling concentration output by the neural network model to be trained, and y_real is the true value of the pickling concentration in the historical data.
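The mean absolute error above is straightforward to compute; a minimal sketch (the function name is ours):

```python
import numpy as np

def mae_loss(y_pred, y_real):
    """Mean absolute error between predicted and true pickling concentrations."""
    return float(np.mean(np.abs(np.asarray(y_pred) - np.asarray(y_real))))
```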
Optionally, the obtaining module 601 is specifically configured to:
acquiring initial data of the cold-rolled strip steel pickling solution at n different moments;
and carrying out normalization processing on the n initial data at different moments to obtain n historical data at different moments.
Optionally, the historical data is:
X_tnorm = (X_t − X_min) / (X_max − X_min)

wherein X_tnorm is the historical data at time t, X_t is the initial data at time t, X_max is the largest of the initial data at the n times, and X_min is the smallest of the initial data at the n times.
Please refer to fig. 7, which is a schematic structural diagram of a terminal device according to an embodiment of the present application. The terminal device shown in fig. 7 includes: at least one processor 701, a communication interface 702, a user interface 703 and a memory 704, where the processor 701, the communication interface 702, the user interface 703 and the memory 704 may be connected by a bus or by other means; the embodiment of the present invention takes connection by the bus 705 as an example, wherein:
processor 701 may be a general-purpose processor, such as a Central Processing Unit (CPU).
The communication interface 702 may be a wired interface (e.g., an Ethernet interface) or a wireless interface (e.g., a cellular network interface or a wireless local area network interface) for communicating with other terminals or websites. The user interface 703 may specifically be a touch panel, including a touch screen, for detecting operation instructions on the touch panel; the user interface 703 may also be a physical button or a mouse. The user interface 703 may further be a display screen for outputting and displaying images or data.
The Memory 704 may include Volatile Memory (Volatile Memory), such as Random Access Memory (RAM); the Memory may also include a Non-Volatile Memory (Non-Volatile Memory), such as a Read-Only Memory (ROM), a Flash Memory (Flash Memory), a Hard Disk (Hard Disk Drive, HDD), or a Solid-State Drive (SSD); the memory 704 may also comprise a combination of the above types of memory. The memory 704 is used for storing a set of program codes, and the processor 701 is used for calling the program codes stored in the memory 704 to execute the following operations:
acquiring data to be analyzed of the cold-rolled strip steel pickling solution at the current moment, wherein the data to be analyzed comprises solution temperature, solution density and solution conductivity;
calling a pickling concentration prediction model to carry out concentration prediction on the data to be analyzed to obtain a pickling concentration prediction value of the cold-rolled strip steel pickling solution;
the acid washing concentration prediction model is a model which is trained on a neural network model to be trained, including an elastic network regularization term and a gated circulation unit GRU network in advance.
Optionally, before the calling the acid washing concentration prediction model to perform the concentration prediction on the data to be analyzed, the processor 701 is further configured to:
acquiring historical data of the cold-rolled strip steel pickling solution at n different moments, wherein n is a positive integer;
and training the neural network model to be trained including an elastic network regularization item and a gated circulation unit GRU network by using the historical data so as to obtain the pickling concentration prediction model.
Optionally, the gated recurrent unit (GRU) network is deployed in a hidden layer of the neural network model to be trained, and the GRU network is configured to:
calculating first memory information of the hidden layer at the time t according to historical data at the time t, memory reservation information output by the GRU network at the time (t-1) and a preset first weight parameter, wherein t is a positive integer not exceeding n;
calculating second memory information of the hidden layer at the t moment according to the historical data at the t moment, the memory reservation information output by the GRU network at the (t-1) moment and a preset second weight parameter;
calculating the current memory information of the hidden layer at the time t according to the second memory information, the memory reservation information output by the GRU network at the time (t-1) and a preset third weight parameter;
and calculating the memory retention information at the t moment output by the GRU network according to the first memory information, the current memory information and the memory retention information at the (t-1) moment output by the GRU network.
Optionally, the neural network model to be trained further includes an output layer, configured to:
and calculating the predicted acid washing concentration value at the t moment output by the neural network model to be trained according to the memory retention information at the t moment output by the hidden layer.
Optionally, the predicted value of the pickling concentration at the time t is:
y_t = σ(w_o h_t)

wherein y_t is the predicted value of the pickling concentration at time t output by the neural network to be trained, w_o is a preset fourth weight parameter, and h_t is the memory retention information at time t output by the hidden layer.
Optionally, the elastic network regularization term is used to perform regularization processing on the weight parameters in the neural network model to be trained by using an objective optimization function, where the objective optimization function is:
J = Σ_{j=1}^{m} (y_tj − ŷ_tj)² + λ1·Σ|w1| + λ2·Σ(w2)²

wherein y_tj is the real value of the pickling concentration at time t, ŷ_tj is the predicted value of the pickling concentration at time t, m is the number of GRUs in the neural network model to be trained, λ1 and λ2 are preset regularization adjustment parameters, and w1 and w2 are weight parameters in the neural network model to be trained.
Optionally, the acid washing concentration prediction model is obtained by updating the weight parameters in the neural network model to be trained by using a loss function, where the loss function is:
Loss_MAE = (1/n) · Σ_{i=1}^{n} |y_i − y_real|

wherein Loss_MAE is the loss function, y_i is the predicted value of the pickling concentration output by the neural network model to be trained, and y_real is the true value of the pickling concentration in the historical data.
Optionally, the obtaining the historical data of the cold-rolled steel strip pickling solution at n different times includes:
acquiring initial data of the cold-rolled strip steel pickling solution at n different moments;
and carrying out normalization processing on the n initial data at different moments to obtain n historical data at different moments.
Optionally, the historical data is:
X_tnorm = (X_t − X_min) / (X_max − X_min)

wherein X_tnorm is the historical data at time t, X_t is the initial data at time t, X_max is the largest of the initial data at the n times, and X_min is the smallest of the initial data at the n times.
By implementing the above, the data to be analyzed of the cold-rolled strip steel pickling solution at the current moment are obtained, and the pickling concentration prediction model is then directly invoked to perform concentration prediction on the data to be analyzed, yielding the pickling concentration predicted value of the cold-rolled strip steel pickling solution. Since the pickling concentration prediction model is a neural network model with high prediction accuracy, the operation flow of concentration prediction can be shortened while the accuracy of acid concentration prediction is ensured, improving both the performance and the convenience of acid concentration prediction and realizing automatic, online, and convenient prediction of the acid concentration.
An embodiment of the present invention further provides a computer storage medium, where the computer storage medium may store a program, and the program includes, when executed, some or all of the steps of the method described in the above method embodiment.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (10)

1. A method for predicting the pickling concentration of strip steel is characterized by comprising the following steps:
acquiring data to be analyzed of the cold-rolled strip steel pickling solution at the current moment, wherein the data to be analyzed comprises solution temperature, solution density and solution conductivity;
calling a pickling concentration prediction model to carry out concentration prediction on the data to be analyzed to obtain a pickling concentration prediction value of the cold-rolled strip steel pickling solution;
the acid washing concentration prediction model is a model which is trained on a neural network model to be trained in advance, and the neural network model to be trained comprises an elastic network regularization term and a gated circulation unit GRU network.
2. The method of claim 1, wherein prior to invoking the acid wash concentration prediction model to make a concentration prediction of the data to be analyzed, the method further comprises:
acquiring historical data of the cold-rolled strip steel pickling solution at n different moments, wherein n is a positive integer;
and training the neural network model to be trained by utilizing the historical data so as to obtain the pickling concentration prediction model.
3. The method of claim 2, wherein the gated recurrent unit (GRU) network is deployed in a hidden layer of the neural network model to be trained, the GRU network being configured to:
calculating first memory information of the hidden layer at the time t according to historical data at the time t, memory reservation information output by the GRU network at the time (t-1) and a preset first weight parameter, wherein t is a positive integer not exceeding n;
calculating second memory information of the hidden layer at the t moment according to the historical data at the t moment, the memory reservation information output by the GRU network at the (t-1) moment and a preset second weight parameter;
calculating the current memory information of the hidden layer at the time t according to the second memory information, the memory reservation information output by the GRU network at the time (t-1) and a preset third weight parameter;
and calculating the memory retention information at the t moment output by the GRU network according to the first memory information, the current memory information and the memory retention information at the (t-1) moment output by the GRU network.
4. The method of claim 3, wherein the neural network model to be trained further comprises an output layer for:
and calculating the predicted acid washing concentration value at the t moment output by the neural network model to be trained according to the memory retention information at the t moment output by the hidden layer.
5. The method according to claim 4, wherein the predicted pickling concentration value at the time t is as follows:
y_t = σ(w_o h_t)

wherein y_t is the predicted value of the pickling concentration at time t output by the neural network to be trained, w_o is a preset fourth weight parameter, and h_t is the memory retention information at time t output by the hidden layer.
6. The method according to claim 2, wherein the elastic network regularization term is used for regularizing weight parameters in the neural network model to be trained by using an objective optimization function, where the objective optimization function is:
J = Σ_{j=1}^{m} (y_tj − ŷ_tj)² + λ1·Σ|w1| + λ2·Σ(w2)²

wherein y_tj is the real value of the pickling concentration at time t, ŷ_tj is the predicted value of the pickling concentration at time t, m is the number of GRUs in the neural network model to be trained, λ1 and λ2 are preset regularization adjustment parameters, and w1 and w2 are weight parameters in the neural network model to be trained.
7. The method according to claim 2, wherein the acid pickling concentration prediction model is obtained by updating weight parameters in the neural network model to be trained by using a loss function, and the loss function is as follows:
Loss_MAE = (1/n) · Σ_{i=1}^{n} |y_i − y_real|

wherein Loss_MAE is the loss function, y_i is the predicted value of the pickling concentration output by the neural network model to be trained, and y_real is the true value of the pickling concentration in the historical data.
8. The method of claim 2, wherein the obtaining historical data of the cold-rolled steel strip pickling solution at n different time instants comprises:
acquiring initial data of the cold-rolled strip steel pickling solution at n different moments;
and carrying out normalization processing on the n initial data at different moments to obtain n historical data at different moments.
9. The method of claim 8, wherein the historical data is:
X_tnorm = (X_t − X_min) / (X_max − X_min)

wherein X_tnorm is the historical data at time t, X_t is the initial data at time t, X_max is the largest of the initial data at the n times, and X_min is the smallest of the initial data at the n times.
10. A computer-readable storage medium, comprising computer instructions which, when executed on a terminal device, cause the terminal device to perform the method of predicting strip pickling concentration as claimed in any one of claims 1 to 9.
CN202110817426.6A 2021-07-20 2021-07-20 Method for predicting strip steel pickling concentration and computer readable storage medium Pending CN113570129A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110817426.6A CN113570129A (en) 2021-07-20 2021-07-20 Method for predicting strip steel pickling concentration and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110817426.6A CN113570129A (en) 2021-07-20 2021-07-20 Method for predicting strip steel pickling concentration and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN113570129A true CN113570129A (en) 2021-10-29

Family

ID=78165630

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110817426.6A Pending CN113570129A (en) 2021-07-20 2021-07-20 Method for predicting strip steel pickling concentration and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN113570129A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115821265A (en) * 2022-12-14 2023-03-21 苏州圆格电子有限公司 Method and system for removing copper layer on surface of neodymium iron boron

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104931538A (en) * 2015-06-10 2015-09-23 中冶南方工程技术有限公司 Learning type hydrochloric acid concentration and Fe ion concentration on-line detection system and method
CN110930318A (en) * 2019-10-31 2020-03-27 中山大学 Low-dose CT image repairing and denoising method
CN111207739A (en) * 2018-11-22 2020-05-29 千寻位置网络有限公司 Pedestrian walking zero-speed detection method and device based on GRU neural network
CN111581888A (en) * 2020-05-18 2020-08-25 中车永济电机有限公司 Construction method of residual service life prediction model of wind turbine bearing
CN111968039A (en) * 2019-05-20 2020-11-20 北京航空航天大学 Day and night universal image processing method, device and equipment based on silicon sensor camera
CN112015719A (en) * 2020-08-27 2020-12-01 河海大学 Regularization and adaptive genetic algorithm-based hydrological prediction model construction method
CN112241608A (en) * 2020-10-13 2021-01-19 国网湖北省电力有限公司电力科学研究院 Lithium battery life prediction method based on LSTM network and transfer learning
CN112242959A (en) * 2019-07-16 2021-01-19 中国移动通信集团浙江有限公司 Micro-service current-limiting control method, device, equipment and computer storage medium
CN112417153A (en) * 2020-11-20 2021-02-26 虎博网络技术(上海)有限公司 Text classification method and device, terminal equipment and readable storage medium
CN112906291A (en) * 2021-01-25 2021-06-04 武汉纺织大学 Neural network-based modeling method and device

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115821265A (en) * 2022-12-14 2023-03-21 苏州圆格电子有限公司 Method and system for removing copper layer on surface of neodymium iron boron
CN115821265B (en) * 2022-12-14 2023-12-19 苏州圆格电子有限公司 Method and system applied to removing copper layer on surface of neodymium iron boron

Similar Documents

Publication Publication Date Title
CN109978228B (en) PM2.5 concentration prediction method, device and medium
CN110223517B (en) Short-term traffic flow prediction method based on space-time correlation
CN109816095B (en) Network flow prediction method based on improved gated cyclic neural network
CN110909926A (en) TCN-LSTM-based solar photovoltaic power generation prediction method
CN111563706A (en) Multivariable logistics freight volume prediction method based on LSTM network
Ren et al. MCTAN: A novel multichannel temporal attention-based network for industrial health indicator prediction
CN110135635B (en) Regional power saturated load prediction method and system
CN110751318A (en) IPSO-LSTM-based ultra-short-term power load prediction method
CN111030889B (en) Network traffic prediction method based on GRU model
CN113449919B (en) Power consumption prediction method and system based on feature and trend perception
CN113128671B (en) Service demand dynamic prediction method and system based on multi-mode machine learning
CN112947300A (en) Virtual measuring method, system, medium and equipment for processing quality
CN111027672A (en) Time sequence prediction method based on interactive multi-scale recurrent neural network
CN111784061A (en) Training method, device and equipment for power grid engineering cost prediction model
CN116307215A (en) Load prediction method, device, equipment and storage medium of power system
CN114548591A (en) Time sequence data prediction method and system based on hybrid deep learning model and Stacking
CN112803398A (en) Load prediction method and system based on empirical mode decomposition and deep neural network
CN114694379B (en) Traffic flow prediction method and system based on self-adaptive dynamic graph convolution
CN113868938A (en) Short-term load probability density prediction method, device and system based on quantile regression
CN113570129A (en) Method for predicting strip steel pickling concentration and computer readable storage medium
CN107704944B (en) Construction method of stock market fluctuation interval prediction model based on information theory learning
CN113393034A (en) Electric quantity prediction method of online self-adaptive OSELM-GARCH model
CN113151842A (en) Method and device for determining conversion efficiency of wind-solar complementary water electrolysis hydrogen production
CN116542701A (en) Carbon price prediction method and system based on CNN-LSTM combination model
CN116665798A (en) Air pollution trend early warning method and related device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination