CN110516735B - Natural gas pipeline event classification method based on LSTM network and Adam algorithm - Google Patents

Natural gas pipeline event classification method based on LSTM network and Adam algorithm

Info

Publication number
CN110516735B
CN110516735B (application CN201910788690.4A)
Authority
CN
China
Prior art keywords
network
lstm
natural gas
gas pipeline
layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910788690.4A
Other languages
Chinese (zh)
Other versions
CN110516735A (en)
Inventor
安阳
王筱岑
曲志刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin University of Science and Technology
Original Assignee
Tianjin University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin University of Science and Technology filed Critical Tianjin University of Science and Technology
Priority to CN201910788690.4A priority Critical patent/CN110516735B/en
Publication of CN110516735A publication Critical patent/CN110516735A/en
Application granted
Publication of CN110516735B publication Critical patent/CN110516735B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
      • G06 - COMPUTING; CALCULATING OR COUNTING
        • G06F - ELECTRIC DIGITAL DATA PROCESSING
          • G06F 18/00 - Pattern recognition
            • G06F 18/20 - Analysing
              • G06F 18/24 - Classification techniques
                • G06F 18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
        • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
          • G06N 3/00 - Computing arrangements based on biological models
            • G06N 3/02 - Neural networks
              • G06N 3/04 - Architecture, e.g. interconnection topology
                • G06N 3/044 - Recurrent networks, e.g. Hopfield networks
                • G06N 3/045 - Combinations of networks
                • G06N 3/048 - Activation functions
              • G06N 3/08 - Learning methods
        • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
          • G06Q 10/00 - Administration; Management
            • G06Q 10/06 - Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
              • G06Q 10/063 - Operations research, analysis or management
                • G06Q 10/0635 - Risk analysis of enterprise or organisation activities
          • G06Q 50/00 - Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
            • G06Q 50/06 - Energy or water supply

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Resources & Organizations (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Economics (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Tourism & Hospitality (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Marketing (AREA)
  • Water Supply & Treatment (AREA)
  • Quality & Reliability (AREA)
  • Development Economics (AREA)
  • Educational Administration (AREA)
  • Primary Health Care (AREA)
  • Game Theory and Decision Science (AREA)
  • Operations Research (AREA)
  • Public Health (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Examining Or Testing Airtightness (AREA)
  • Pipeline Systems (AREA)

Abstract

The invention relates to a natural gas pipeline event classification method based on an LSTM network and the Adam algorithm, and belongs to the field of online safety monitoring of natural gas pipelines. The process comprises the following steps. First, original samples are extracted, normalized and labeled, and then divided into a training set and a validation set. Second, an LSTM network model is constructed and the normalized samples are input into the network for training and validation. When the training accuracy meets the requirement, the network model is output; otherwise, the network is retrained. Finally, normalized new samples are input into the trained network model, so that classification and identification of natural gas pipeline events such as hydrate blockage, leakage and bends can be realized. The advantages of the invention are that the LSTM network and the Adam algorithm classify pipeline events without feature-extraction preprocessing and the network training speed is high, so that pipeline events can be accurately identified.

Description

Natural gas pipeline event classification method based on LSTM network and Adam algorithm
Technical Field
The invention relates to a natural gas pipeline event classification method based on an LSTM network and the Adam algorithm, and belongs to the field of online safety monitoring of natural gas pipelines.
Background
Because natural gas has the advantages of low pollution, high heating value and the like, its share in the energy consumption structure has increased year by year. As one of the primary modes of natural gas transportation, pipelines have received great attention with respect to safe operation. However, natural gas hydrates form readily in high-pressure, low-temperature environments and affect the normal transport of natural gas. Pipeline leakage caused by corrosion, plugging or third-party construction can also lead to serious safety accidents. Furthermore, pipe bends interfere with acoustic-based pipeline anomaly detection and localization. Thus, classifying and identifying natural gas pipeline events can provide effective guidance for pipeline operators to take corresponding maintenance measures.
At present, research on pipeline abnormal-event detection and localization has been carried out domestically and abroad, and corresponding results have been published. Acoustic-emission-based techniques have been proposed for monitoring the flow regime and crystallization process of gas hydrates; they can detect the agglomeration and formation of crystals from the amplitude of the acoustic signal. For blockage detection, a system based on the pressure-wave propagation method has been designed to detect hydrate blockage in natural gas pipelines, realizing localization of a single hydrate blockage and judgment of the blockage degree. For pipeline leakage, a multi-point leak detection method based on an observer and mixed-integer partial-differential-equation-constrained optimization has been proposed; its discretization greatly reduces the computational cost of solving for the leak position. A leak detection method based on kernel principal component analysis and a support vector machine has also been proposed: feature extraction of the acoustic signal is achieved through kernel principal component analysis, and the leakage level is then recognized by a support vector machine. However, the above methods cannot simultaneously detect and locate different types of pipeline events. For this purpose, an active acoustic excitation technique was proposed and applied to on-line monitoring of hydrate blockage in natural gas pipelines, and was later shown to be suitable for monitoring pipeline leaks as well. In addition, an "energy-pattern" method based on wavelet packet analysis and a chaotic-characteristic analysis method have been proposed for classification and recognition of hydrate blockage and pipeline leakage. To expand the monitoring range of the system while preserving its spatial resolution, a natural gas pipeline safety monitoring method based on acoustic pulse compression has been proposed, which resolves the contradiction between monitoring range and spatial resolution. However, the signal features after matched filtering are not obvious, and traditional feature-extraction and classification methods perform poorly on them.
Therefore, the LSTM network is applied to natural gas pipeline event classification, so that one-dimensional time-series samples can be input directly into the network for training without feature-extraction preprocessing. During network training, the Adam algorithm is adopted to accelerate convergence. The method realizes high-precision classification and identification of hydrate blockage and pipeline leakage, accurately rejects bend interference, and provides effective guidance for pipeline operators to take corresponding maintenance measures.
Disclosure of Invention
The invention aims to provide a natural gas pipeline event classification method based on an LSTM network and the Adam algorithm, which can effectively eliminate bend interference and realize classification and identification of hydrate blockage and pipeline leakage.
The technical scheme of the invention is as follows: a natural gas pipeline event classification method based on an LSTM network and the Adam algorithm comprises the following steps:
1) Reflection signals of different events collected by the natural gas pipeline safety monitoring system are matched-filtered and taken as original samples; the original samples are labeled and divided into a training set and a test set;
2) An LSTM network is constructed, comprising an input layer, LSTM layers, fully connected layers and an output layer, and the network parameters are set, including the number of LSTM layers, the number of LSTM hidden-layer neurons, the number of fully connected layers, the fully connected layer activation functions and the number of iterations;
3) The original samples are normalized and input into the constructed network for training and validation. During training, a single LSTM cell operates as follows:
(1) The input value x^{<t>} of the current time step t is combined with the activation value a^{<t-1>} of the previous time step t-1, which can be expressed as:
x_{con} = [a^{<t-1>}, x^{<t>}]    (1)
(2) The candidate cell state at time step t is:
\tilde{C}^{<t>} = \tanh(W_c x_{con} + b_c)    (2)
(3) Compute the update gate i^{<t>}, the forget gate f^{<t>} and the output gate o^{<t>}:
i^{<t>} = \sigma(W_u x_{con} + b_u)    (3)
f^{<t>} = \sigma(W_f x_{con} + b_f)    (4)
o^{<t>} = \sigma(W_o x_{con} + b_o)    (5)
where W is a weight matrix, b is a bias, and σ denotes the activation function of the update gate, the forget gate and the output gate.
(4) The forget gate f^{<t>} and the update gate i^{<t>} determine whether the cell state C^{<t-1>} of time step t-1 and the candidate cell state \tilde{C}^{<t>} are retained. Thus, the current cell state C^{<t>} is updated as:
C^{<t>} = f^{<t>} \times C^{<t-1>} + i^{<t>} \times \tilde{C}^{<t>}    (6)
where the symbol "×" denotes the Hadamard (element-wise) product.
(5) The activation value at time step t is updated:
a^{<t>} = o^{<t>} \times \tanh(C^{<t>})    (7)
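For illustration only (not part of the claimed method), a minimal NumPy sketch of one LSTM cell step following equations (1)-(7) is given below; the parameter dictionary, variable names and dimensions are assumptions made for the example:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def lstm_cell_step(x_t, a_prev, C_prev, params):
        # x_t: input x^{<t>}, shape (n_x, 1); a_prev: activation a^{<t-1>}, shape (n_a, 1)
        # C_prev: cell state C^{<t-1>}, shape (n_a, 1)
        # params: dict with weights W_c, W_u, W_f, W_o and biases b_c, b_u, b_f, b_o
        x_con = np.concatenate([a_prev, x_t], axis=0)              # eq. (1): combine a^{<t-1>} and x^{<t>}
        C_cand = np.tanh(params["W_c"] @ x_con + params["b_c"])    # eq. (2): candidate cell state
        i_t = sigmoid(params["W_u"] @ x_con + params["b_u"])       # eq. (3): update gate
        f_t = sigmoid(params["W_f"] @ x_con + params["b_f"])       # eq. (4): forget gate
        o_t = sigmoid(params["W_o"] @ x_con + params["b_o"])       # eq. (5): output gate
        C_t = f_t * C_prev + i_t * C_cand                          # eq. (6): element-wise (Hadamard) products
        a_t = o_t * np.tanh(C_t)                                   # eq. (7): new activation value
        return a_t, C_t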
4) Multiple LSTM cells are connected in series over the time steps to form an LSTM layer. The fully connected layers then transform the dimensionality of the LSTM layer output vector and convert it into a probability distribution; the probability distribution of the output layer can be expressed as:
\hat{y}_j = e^{a_j} / \sum_{n=1}^{N} e^{a_n}    (8)
where a is the output vector of the fully connected layer, a_j is the j-th element of the vector, and N is the number of label states.
To train the network, the cross entropy is introduced as the loss function of a single sample, which can be expressed as:
L(\hat{y}, y) = -\sum_{j=1}^{N} y_j \log(\hat{y}_j)    (9)
where y is the true class of the sample, represented as a label encoding vector. For N samples, the total cross entropy Loss can be expressed as:
Loss = (1/N) \sum_{i=1}^{N} L(\hat{y}^{(i)}, y^{(i)})    (10)
the goal of LSTM network training is to adjust the weights W and offsets b by constant iteration using a specified optimization algorithm, so that the cross entropy loss function is minimized.
5) The Adam algorithm is used to adjust the network weights W and bias b so that the cross entropy loss function is minimized. The Adam algorithm performs as follows:
(1) Compute the first-moment estimates V_{dW}, V_{db} and the second-moment estimates S_{dW}, S_{db} of the gradients:
V_{dW} := \beta_1 V_{dW} + (1 - \beta_1) dW    (11)
V_{db} := \beta_1 V_{db} + (1 - \beta_1) db    (12)
S_{dW} := \beta_2 S_{dW} + (1 - \beta_2) dW^2    (13)
S_{db} := \beta_2 S_{db} + (1 - \beta_2) db^2    (14)
where β_1 and β_2 are the exponential decay rates of the first- and second-moment estimates, the matrices V and S on the right-hand side are the moment estimates of the previous iteration, and the symbol ":=" denotes an assignment operation.
(2) Correct the first-moment and second-moment estimates:
V_{dW}^{corrected} = V_{dW} / (1 - \beta_1^k)    (15)
V_{db}^{corrected} = V_{db} / (1 - \beta_1^k)    (16)
S_{dW}^{corrected} = S_{dW} / (1 - \beta_2^k)    (17)
S_{db}^{corrected} = S_{db} / (1 - \beta_2^k)    (18)
where k is the iteration number, and β_1^k and β_2^k are β_1 and β_2 raised to the k-th power.
(3) Update the parameters W and b with the corrected moment estimates; the updated weights W and biases b can be expressed as:
W := W - \alpha V_{dW}^{corrected} / (\sqrt{S_{dW}^{corrected}} + \varepsilon)    (19)
b := b - \alpha V_{db}^{corrected} / (\sqrt{S_{db}^{corrected}} + \varepsilon)    (20)
where α is the learning rate, with a default value of 0.0001, and ε is 10^{-8}.
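For illustration only, equations (11)-(20) can be collected into a single NumPy update step as sketched below; the values of β_1 and β_2 are typical defaults and therefore assumptions, since the text only fixes α = 0.0001 and ε = 10^{-8}:

    import numpy as np

    def adam_update(W, b, dW, db, state, k, alpha=1e-4, beta1=0.9, beta2=0.999, eps=1e-8):
        # state holds the running moment estimates V_dW, V_db, S_dW, S_db; k is the 1-based iteration count
        state["V_dW"] = beta1 * state["V_dW"] + (1 - beta1) * dW         # eq. (11)
        state["V_db"] = beta1 * state["V_db"] + (1 - beta1) * db         # eq. (12)
        state["S_dW"] = beta2 * state["S_dW"] + (1 - beta2) * dW**2      # eq. (13)
        state["S_db"] = beta2 * state["S_db"] + (1 - beta2) * db**2      # eq. (14)
        V_dW_c = state["V_dW"] / (1 - beta1**k)                          # eq. (15): bias correction
        V_db_c = state["V_db"] / (1 - beta1**k)                          # eq. (16)
        S_dW_c = state["S_dW"] / (1 - beta2**k)                          # eq. (17)
        S_db_c = state["S_db"] / (1 - beta2**k)                          # eq. (18)
        W = W - alpha * V_dW_c / (np.sqrt(S_dW_c) + eps)                 # eq. (19): weight update
        b = b - alpha * V_db_c / (np.sqrt(S_db_c) + eps)                 # eq. (20): bias update
        return W, b, state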
6) Judge whether the network accuracy meets the requirement. If it does, output the trained network model; otherwise, retrain the network;
7) Input labeled new samples into the network model whose accuracy meets the requirement, and compare the network output class with the sample labels to test the network model.
When the original samples are extracted, only the main lobe of the matched-filtered reflection signal is taken, and the size of each original sample is 1 × 408.
The normalization method is Z-score normalization.
The labels are One-Hot encoded.
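Purely as an illustration of the preprocessing described above (Z-score normalization and One-Hot labeling), a NumPy sketch could look as follows; the class names and their ordering are assumed for the example:

    import numpy as np

    def z_score(x):
        # Z-score normalization of one raw 1 x 408 main-lobe sample
        return (x - np.mean(x)) / np.std(x)

    def one_hot(label, classes=("blockage", "leak", "bend")):
        # One-Hot label encoding; the class order here is an assumed convention
        v = np.zeros(len(classes))
        v[classes.index(label)] = 1.0
        return v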
The number of LSTM layers, the number of LSTM hidden-layer neurons, the number of fully connected layers and the number of iterations are all adjustable; preferably, the number of LSTM layers is 3, the number of LSTM hidden-layer neurons is 64, the number of fully connected layers is 2, and the number of iterations is 30.
The activation functions of the update gate, the forget gate and the output gate are Sigmoid functions.
The activation function of the intermediate fully connected layer is adjustable and is preferably the ReLU function; the activation function of the top fully connected layer is the Softmax function.
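For illustration only, the preferred configuration (three LSTM layers with 64 hidden neurons each, two fully connected layers, Adam with α = 0.0001) could be sketched in Keras as below; the choice of TensorFlow/Keras and the width of the intermediate fully connected layer are assumptions not fixed by the method:

    import tensorflow as tf
    from tensorflow.keras import layers, models

    model = models.Sequential([
        # each normalized 1 x 408 sample enters as a 408-step sequence with one feature
        layers.LSTM(64, return_sequences=True, input_shape=(408, 1)),
        layers.LSTM(64, return_sequences=True),
        layers.LSTM(64),                         # three LSTM layers, 64 hidden neurons each
        layers.Dense(64, activation="relu"),     # intermediate fully connected layer (ReLU); width assumed
        layers.Dense(3, activation="softmax"),   # top fully connected layer: 3 event classes
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),  # alpha = 0.0001
                  loss="categorical_crossentropy",                         # cross entropy loss
                  metrics=["accuracy"])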
A first advantage of the invention is that high-precision classification and identification of natural gas pipeline events can be realized with the LSTM network without sample feature-extraction preprocessing; a second advantage is that the network training speed can be increased by the Adam algorithm.
Drawings
FIG. 1 is a process flow diagram of the present invention.
Fig. 2 is a diagram of the LSTM network structure of the present invention.
FIG. 3 is a sample graph of pipeline events according to the present invention.
Fig. 4 is a training result diagram of the present invention.
FIG. 5 is a graph of the test results of the present invention.
Detailed Description
The invention is described in further detail below with reference to the attached drawings and detailed description:
Fig. 1 shows the process flow diagram of the invention. The specific process is as follows: A. extract the original samples; B. normalize the original samples and mark the labels; C. construct the LSTM network and set the network parameters; D. perform network training and validation, with 80% of all samples used as the training set and 20% as the validation set; E. judge whether the network accuracy meets the requirement; if not, return to step C; if so, proceed to step F; F. output the network model whose accuracy meets the requirement; G. perform the network test with normalized new samples.
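As an illustrative sketch of steps D and E only, the 80/20 split and the 30 training iterations could be run as follows, assuming the Keras model sketched earlier, normalized samples X and One-Hot labels Y; the use of scikit-learn for the split is also an assumption:

    from sklearn.model_selection import train_test_split

    # X: normalized samples, shape (num_samples, 408, 1); Y: One-Hot labels, shape (num_samples, 3)
    X_train, X_val, Y_train, Y_val = train_test_split(
        X, Y, test_size=0.2, stratify=Y.argmax(axis=1))   # 80% training, 20% validation
    history = model.fit(X_train, Y_train,
                        epochs=30,                        # 30 iterations, as in the preferred setting
                        validation_data=(X_val, Y_val))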
Fig. 2 shows the LSTM network structure, comprising an input layer, three LSTM layers, two fully connected layers and an output layer. The normalized samples can be input directly into the LSTM network input layer. The LSTM layers then capture the long-term correlations of the input sample sequence. An intermediate fully connected layer, whose activation function is the ReLU function, follows the LSTM layers and converts the output dimensionality of the LSTM layers. Finally, a fully connected layer with a Softmax activation function is connected at the top and converts the output of the intermediate fully connected layer into a probability distribution over the pipeline events. The output layer directly outputs the probability distribution produced by the top fully connected layer.
Fig. 3 (a)-(c) show the raw sample signals of hydrate blockage, pipeline leakage and a bend, respectively. Apart from differences in amplitude, the waveforms of the three event types are almost identical. To reduce the difficulty of classifying and identifying pipeline events, the original sample data are first normalized. As shown by the Z-score normalization results in Fig. 3 (d)-(e), the waveforms and amplitudes of the three types of samples then differ obviously, which effectively reduces the difficulty of classifying and identifying pipeline events.
Fig. 4 shows the cross entropy loss and the training accuracy during training. The horizontal axes of Fig. 4 (a) and (b) represent the number of iterations, and the vertical axes represent the cross entropy loss and the training accuracy, respectively. During training, the cross entropy loss converges rapidly and stably, and the training accuracy also improves steadily. When the specified number of iterations is reached, the cross entropy loss is 0.0005 and the training accuracy is 100.00%. To verify the trained parameters, the validation set is input into the trained model, and the accuracy is likewise 100%. The model accuracy therefore meets the requirement.
Fig. 5 shows the network test results. Thirty normalized new samples that did not participate in training or validation are input into the trained network model that meets the accuracy requirement. As shown in Fig. 5, the histogram represents the output probability distribution, which is close to the probability distribution of the new sample labels. Thus, these new samples are correctly classified as hydrate blockage, pipeline leakage and bends with 100% accuracy, demonstrating that the network model can be effectively applied to the classification of natural gas pipeline events.
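For illustration only, the test of step G could be expressed as below, assuming the trained Keras model from the earlier sketches and hypothetical arrays X_new (30 normalized new samples) and Y_new (their One-Hot labels):

    import numpy as np

    probs = model.predict(X_new)                        # per-sample output probability distributions
    pred = probs.argmax(axis=1)                         # predicted event class index
    accuracy = np.mean(pred == Y_new.argmax(axis=1))    # compare with the new-sample labels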

Claims (7)

1. A natural gas pipeline event classification method based on an LSTM network and the Adam algorithm, characterized by comprising the following steps:
1) Reflection signals of different events collected by the natural gas pipeline safety monitoring system are matched-filtered and taken as original samples; the original samples are labeled and divided into a training set and a test set;
2) An LSTM network is constructed, comprising an input layer, LSTM layers, fully connected layers and an output layer; the network parameters are set, including the number of LSTM layers, the number of LSTM hidden-layer neurons, the number of fully connected layers, the fully connected layer activation functions and the number of iterations;
3) The original samples are normalized and input into the constructed network for training and validation; during training, a single LSTM cell operates as follows:
(1) The input value x^{<t>} of the current time step t is combined with the activation value a^{<t-1>} of the previous time step t-1, which can be expressed as:
x_{con} = [a^{<t-1>}, x^{<t>}]    (1)
(2) The candidate cell state at time step t is:
\tilde{C}^{<t>} = \tanh(W_c x_{con} + b_c)    (2)
(3) Compute the update gate i^{<t>}, the forget gate f^{<t>} and the output gate o^{<t>}:
i^{<t>} = \sigma(W_u x_{con} + b_u)    (3)
f^{<t>} = \sigma(W_f x_{con} + b_f)    (4)
o^{<t>} = \sigma(W_o x_{con} + b_o)    (5)
where W is a weight matrix, b is a bias, and σ denotes the activation function of the update gate, the forget gate and the output gate;
(4) The forget gate f^{<t>} and the update gate i^{<t>} determine whether the cell state C^{<t-1>} of time step t-1 and the candidate cell state \tilde{C}^{<t>} are retained; thus, the current cell state C^{<t>} is updated as:
C^{<t>} = f^{<t>} \times C^{<t-1>} + i^{<t>} \times \tilde{C}^{<t>}    (6)
where the symbol "×" denotes the Hadamard (element-wise) product;
(5) The activation value at time step t is updated:
a^{<t>} = o^{<t>} \times \tanh(C^{<t>})    (7)
4) Multiple LSTM cells are connected in series over the time steps to form an LSTM layer; the fully connected layers then transform the dimensionality of the LSTM layer output vector and convert it into a probability distribution; the probability distribution of the output layer can be expressed as:
\hat{y}_j = e^{a_j} / \sum_{n=1}^{N} e^{a_n}    (8)
where a is the output vector of the fully connected layer, a_j is the j-th element of the vector, and N is the number of label states;
to train the network, the cross entropy is introduced as the loss function of a single sample, which can be expressed as:
L(\hat{y}, y) = -\sum_{j=1}^{N} y_j \log(\hat{y}_j)    (9)
where y is the true class of the sample, represented as a label encoding vector; for N samples, the total cross entropy Loss can be expressed as:
Loss = (1/N) \sum_{i=1}^{N} L(\hat{y}^{(i)}, y^{(i)})    (10)
the LSTM network training aims at adjusting the weight W and the bias b by using a specified optimization algorithm through continuous iteration so as to minimize a cross entropy loss function;
5) Adopting an Adam algorithm to adjust a network weight W and a bias b so as to minimize a cross entropy loss function; the Adam algorithm performs as follows:
(1) Compute the first-moment estimates V_{dW}, V_{db} and the second-moment estimates S_{dW}, S_{db} of the gradients:
V_{dW} := \beta_1 V_{dW} + (1 - \beta_1) dW    (11)
V_{db} := \beta_1 V_{db} + (1 - \beta_1) db    (12)
S_{dW} := \beta_2 S_{dW} + (1 - \beta_2) dW^2    (13)
S_{db} := \beta_2 S_{db} + (1 - \beta_2) db^2    (14)
where β_1 and β_2 are the exponential decay rates of the first- and second-moment estimates, the matrices V and S on the right-hand side are the moment estimates of the previous iteration, and the symbol ":=" denotes an assignment operation;
(2) Correct the first-moment and second-moment estimates:
V_{dW}^{corrected} = V_{dW} / (1 - \beta_1^k)    (15)
V_{db}^{corrected} = V_{db} / (1 - \beta_1^k)    (16)
S_{dW}^{corrected} = S_{dW} / (1 - \beta_2^k)    (17)
S_{db}^{corrected} = S_{db} / (1 - \beta_2^k)    (18)
where k is the iteration number, and β_1^k and β_2^k are β_1 and β_2 raised to the k-th power;
(3) Update the parameters W and b with the corrected moment estimates; the updated weights W and biases b can be expressed as:
W := W - \alpha V_{dW}^{corrected} / (\sqrt{S_{dW}^{corrected}} + \varepsilon)    (19)
b := b - \alpha V_{db}^{corrected} / (\sqrt{S_{db}^{corrected}} + \varepsilon)    (20)
where α is the learning rate, with a default value of 0.0001, and ε is 10^{-8};
6) Judge whether the network accuracy meets the requirement; if it does, output the trained network model; otherwise, retrain the network;
7) Input labeled new samples into the network model whose accuracy meets the requirement, and compare the network output class with the sample labels to test the network model.
2. The natural gas pipeline event classification method based on the LSTM network and Adam algorithm as claimed in claim 1, wherein: when the original samples are extracted, only the main lobe of the matched-filtered reflection signal is taken, and the size of each original sample is 1 × 408.
3. The natural gas pipeline event classification method based on the LSTM network and Adam algorithm as claimed in claim 1, wherein: the normalization method is Z-score normalization.
4. The natural gas pipeline event classification method based on the LSTM network and Adam algorithm as claimed in claim 1, wherein: the labels are One-Hot encoded.
5. The natural gas pipeline event classification method based on the LSTM network and Adam algorithm as claimed in claim 1, wherein: the number of LSTM layers, the number of LSTM hidden-layer neurons, the number of fully connected layers and the number of iterations are all adjustable.
6. The natural gas pipeline event classification method based on the LSTM network and Adam algorithm as claimed in claim 1, wherein: the activation functions of the update gate, the forget gate and the output gate are Sigmoid functions.
7. The natural gas pipeline event classification method based on the LSTM network and Adam algorithm as claimed in claim 1, wherein: the activation function of the fully connected layer is adjustable.
CN201910788690.4A 2019-08-27 2019-08-27 Natural gas pipeline event classification method based on LSTM network and Adam algorithm Active CN110516735B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910788690.4A CN110516735B (en) 2019-08-27 2019-08-27 Natural gas pipeline event classification method based on LSTM network and Adam algorithm

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910788690.4A CN110516735B (en) 2019-08-27 2019-08-27 Natural gas pipeline event classification method based on LSTM network and Adam algorithm

Publications (2)

Publication Number Publication Date
CN110516735A CN110516735A (en) 2019-11-29
CN110516735B (en) 2023-05-26

Family

ID=68626837

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910788690.4A Active CN110516735B (en) 2019-08-27 2019-08-27 Natural gas pipeline event classification method based on LSTM network and Adam algorithm

Country Status (1)

Country Link
CN (1) CN110516735B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111257934B (en) * 2020-01-17 2022-03-11 哈尔滨工业大学 Seismic oscillation peak acceleration prediction method based on second-order neuron deep neural network
CN111275677A (en) * 2020-01-17 2020-06-12 哈尔滨工业大学 Ceiling earthquake damage identification method based on convolutional neural network
CN112529062B (en) * 2020-12-04 2021-06-15 齐鲁工业大学 Object classification method based on dexterous hand touch information
CN113141349B (en) * 2021-03-23 2022-07-15 浙江工业大学 HTTPS encrypted flow classification method with self-adaptive fusion of multiple classifiers
CN114918735A (en) * 2022-05-19 2022-08-19 河海大学 PCC-LSTM-based milling cutter wear prediction method
CN117408165B (en) * 2023-12-14 2024-03-15 中国石油大学(华东) Fracturing process complex event intelligent early warning optimization method based on machine learning

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108268860A (en) * 2018-02-09 2018-07-10 重庆科技学院 A kind of gas gathering and transportation station equipment image classification method based on convolutional neural networks
CN108304917A (en) * 2018-01-17 2018-07-20 华南理工大学 A kind of P300 signal detecting methods based on LSTM networks
CN109508655A (en) * 2018-10-28 2019-03-22 北京化工大学 The SAR target identification method of incomplete training set based on twin network
CN109583346A (en) * 2018-11-21 2019-04-05 齐鲁工业大学 EEG feature extraction and classifying identification method based on LSTM-FC
CN109614885A (en) * 2018-11-21 2019-04-12 齐鲁工业大学 A kind of EEG signals Fast Classification recognition methods based on LSTM
CN109738776A (en) * 2019-01-02 2019-05-10 华南理工大学 Fan converter open-circuit fault recognition methods based on LSTM

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108304917A (en) * 2018-01-17 2018-07-20 华南理工大学 A kind of P300 signal detecting methods based on LSTM networks
CN108268860A (en) * 2018-02-09 2018-07-10 重庆科技学院 A kind of gas gathering and transportation station equipment image classification method based on convolutional neural networks
CN109508655A (en) * 2018-10-28 2019-03-22 北京化工大学 The SAR target identification method of incomplete training set based on twin network
CN109583346A (en) * 2018-11-21 2019-04-05 齐鲁工业大学 EEG feature extraction and classifying identification method based on LSTM-FC
CN109614885A (en) * 2018-11-21 2019-04-12 齐鲁工业大学 A kind of EEG signals Fast Classification recognition methods based on LSTM
CN109738776A (en) * 2019-01-02 2019-05-10 华南理工大学 Fan converter open-circuit fault recognition methods based on LSTM

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SCADA system intrusion detection method based on an Adam-optimized GRU neural network (基于Adam优化GRU神经网络的SCADA系统入侵检测方法); 陈土生; 《现代计算机》 (No. 15); 13-19 *

Also Published As

Publication number Publication date
CN110516735A (en) 2019-11-29

Similar Documents

Publication Publication Date Title
CN110516735B (en) Natural gas pipeline event classification method based on LSTM network and Adam algorithm
US11507049B2 (en) Method for detecting abnormity in unsupervised industrial system based on deep transfer learning
CN109308522B (en) GIS fault prediction method based on recurrent neural network
CN106950276B (en) Pipeline defect depth inversion method based on convolutional neural network
CN102122133B (en) Self-adaption wavelet neural network abnormity detection and fault diagnosis classification system and method
CN115758212B (en) Mechanical equipment fault diagnosis method based on parallel network and transfer learning
CN111580151B (en) SSNet model-based earthquake event time-of-arrival identification method
CN111814699B (en) Deep learning earthquake prediction method for SWARM electromagnetic satellite data
CN109886433A (en) The method of intelligent recognition city gas pipeline defect
CN102158486A (en) Method for rapidly detecting network invasion
CN115906949B (en) Petroleum pipeline fault diagnosis method and system, storage medium and petroleum pipeline fault diagnosis equipment
CN111122811A (en) Sewage treatment process fault monitoring method of OICA and RNN fusion model
CN116451567A (en) Leakage assessment and intelligent disposal method for gas negative pressure extraction pipeline
CN105823801B (en) A kind of electronic nose drift compensation method based on deepness belief network feature extraction
CN117809164A (en) Substation equipment fault detection method and system based on multi-mode fusion
CN109632942B (en) Inversion method of pipeline defect size based on ensemble learning
CN116680639A (en) Deep-learning-based anomaly detection method for sensor data of deep-sea submersible
CN116522334A (en) RTL-level hardware Trojan detection method based on graph neural network and storage medium
CN116502163A (en) Vibration monitoring data anomaly detection method based on multi-feature fusion and deep learning
CN114066819B (en) Environmental corrosion severity identification method based on convolutional neural network deep learning
CN113671564B (en) NARX dynamic neural network-based microseism effective event automatic pickup method
CN113008998B (en) Concealed engineering internal defect judgment method based on PCNN
CN113238197B (en) Radar target identification and judgment method based on Bert and BiLSTM
CN113657520A (en) Intrusion detection method based on deep confidence network and long-time and short-time memory network
CN113065602A (en) Method and device for diagnosing valve fault of fracturing pump

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant