CN113536682B - Electric hydraulic steering engine parameter degradation time sequence extrapolation prediction method based on secondary self-coding fusion mechanism - Google Patents
Electric hydraulic steering engine parameter degradation time sequence extrapolation prediction method based on secondary self-coding fusion mechanism
- Publication number
- CN113536682B CN113536682B CN202110824289.9A CN202110824289A CN113536682B CN 113536682 B CN113536682 B CN 113536682B CN 202110824289 A CN202110824289 A CN 202110824289A CN 113536682 B CN113536682 B CN 113536682B
- Authority
- CN
- China
- Prior art keywords
- data
- model
- extrapolation
- time sequence
- training
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F30/00—Computer-aided design [CAD]
- G06F30/20—Design optimisation, verification or simulation
- G06F30/27—Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/047—Probabilistic or stochastic networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2119/00—Details relating to the type or aim of the analysis or the optimisation
- G06F2119/04—Ageing analysis or optimisation against ageing
Abstract
The invention provides an electric hydraulic steering engine parameter degradation time-series extrapolation prediction method based on a secondary self-encoding fusion mechanism, comprising the following steps: acquiring fault prediction data of an electric hydraulic steering engine; comprehensively preprocessing the fault data to obtain a training data set and a test data set; constructing a time-series extrapolation predictor, which comprises a convolutional-neural-network primary self-encoder, an expert-knowledge-based artificial time-domain feature extractor, and an SAE-based secondary self-encoder; the predictor fuses features extracted from the training data set to obtain fusion features, the secondary self-encoder encodes the fusion features a second time, and a mapping relation is then established between the secondarily encoded features and the label data; jointly training the convolutional-neural-network primary self-encoder and the time-series extrapolation predictor to obtain a trained predictor; and predicting from existing data with the trained time-series extrapolation prediction model.
Description
Technical Field
The invention relates to degradation trend prediction of an electric hydraulic steering engine, in particular to parameter degradation time sequence extrapolation prediction of the electric hydraulic steering engine.
Background
The electric hydraulic steering engine system is a complex electromechanical integrated system and a high-precision position servo system, and has an important influence on aircraft attitude control. With the continuous development of science and technology, advanced aircraft widely adopt fully digital servo steering engine systems with high speed, high precision, and a large power-to-weight ratio, and contemporary engineering applications place higher demands on steering engine reliability. Predicting the degradation process of key steering engine parameters is therefore an important aspect of steering engine reliability research: accurately predicting the future time series of key parameters and grasping the trend of their change is of great significance for reasonably arranging maintenance plans, improving flight quality, guaranteeing flight safety, and reducing whole-life-cycle cost. Traditional time-series extrapolation prediction methods generally adopt a decomposition strategy: the time series is decomposed into trend, seasonal, and residual components, each component is predicted separately, and the predictions are finally fused into the extrapolated sequence of the parameter. However, for a complex electromechanical system such as an electric hydraulic steering engine, the degradation process tends to be nonlinear, so its degradation-parameter time series is difficult to decompose effectively by traditional methods, which greatly complicates future time-series prediction of key steering engine parameters.
To solve this problem, an electric hydraulic steering engine parameter degradation time-series extrapolation prediction method based on a secondary self-encoding fusion mechanism for artificial and convolution features is provided. The method combines artificial time-domain features and deep convolution features through a secondary self-encoding mechanism to realize feature fusion, can directly map the time-series dependency and trend of the original parameters into hidden-layer depth features, avoids the sequence-decomposition problem of traditional methods, and provides a more practical approach to the extrapolation prediction of key-parameter degradation time series of the electric hydraulic steering engine.
Disclosure of Invention
In order to solve the problems in the prior art, the invention provides an electric hydraulic steering engine parameter degradation time sequence extrapolation prediction method based on a secondary self-coding fusion mechanism.
According to one aspect of the invention, an electro-hydraulic steering engine parameter degradation time-series extrapolation prediction method based on a secondary self-encoding fusion mechanism is provided, comprising the following steps: acquiring fault prediction data of an electric hydraulic steering engine; comprehensively preprocessing the fault data to obtain a training data set and a test data set; constructing a time-series extrapolation predictor, which comprises a convolutional-neural-network primary self-encoder, an expert-knowledge-based artificial time-domain feature extractor, and an SAE-based secondary self-encoder; the predictor fuses features extracted from the training data set to obtain fusion features, the secondary self-encoder encodes the fusion features a second time, and a mapping relation is then established between the secondarily encoded features and the label data; jointly training the convolutional-neural-network primary self-encoder and the time-series extrapolation predictor to obtain a trained predictor; and predicting from existing data with the trained time-series extrapolation prediction model.
Preferably, the time-series extrapolation prediction model takes the original training data set as input. Artificial feature extraction is first performed by the artificial time-domain feature extractor, and convolution feature extraction is performed on the original training data set by the pre-trained convolutional-neural-network feature extraction model; the convolution features and artificial time-domain features are then fused, and the training data labels S_trainy are taken as the model output, thereby completing the training of the extrapolation predictor model.
Preferably, when predicting from existing data, for model input data of length w the predicted data length is W − w. A data segment of length 2w − W is cut from the end of the existing data and spliced with the predicted data to serve as the input of a new round of prediction; this is repeated until the preset prediction length L_p is reached, at which point prediction ends.
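The rolling extrapolation described above can be sketched as follows (a minimal sketch; `extrapolate` and `predict` are illustrative names, and `predict` is a hypothetical stand-in for the trained extrapolation model, mapping a window of length w to W − w future points):

```python
def extrapolate(predict, history, w, L_p):
    """Roll the predictor forward until L_p points have been produced.

    Each round: the last w points (2w - W old points spliced with the
    W - w just-predicted points) form the next model input.
    """
    data = list(history)
    predicted = []
    while len(predicted) < L_p:
        step = predict(data[-w:])   # model returns W - w new points
        data.extend(step)
        predicted.extend(step)
    return predicted[:L_p]          # trim to the preset prediction length


# Toy predictor (w = 4, W = 6): continues the series by +1, +2
out = extrapolate(lambda win: [win[-1] + 1, win[-1] + 2], [0, 1, 2, 3], 4, 3)
print(out)
```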
Preferably, the verification set data obtained through comprehensive preprocessing are fed into the time-series extrapolation prediction model, and the prediction performance of the model is evaluated with the corresponding prediction indices.
Preferably, the comprehensive preprocessing step includes sliding-window cutting of the key-parameter time-series data X = {x_1, x_2, ..., x_N}. Sliding-window cutting of X generates the corresponding sample data set; when the window width is W and the step length is s, the number of samples generated by cutting is:

sn = ⌊(N − W) / s⌋ + 1

The corresponding data set generated is {S_1, S_2, ..., S_sn}. For each sample S_i,nor in the normalized set {S_1,nor, S_2,nor, ..., S_sn,nor}, the data of length w are taken as training data, and the remaining data of length W − w are taken as the prediction data corresponding to that training data.
Preferably, the construction of the convolutional-neural-network primary self-encoder comprises: converting the training data set S_train = {S_1,nor, S_2,nor, ..., S_sn,nor} into the three-dimensional data format (sn, w, 1), where sn is the number of samples, w is the data length of each sample, and 1 is the number of channels; and inputting the constructed three-dimensional training data set into the primary self-encoder, repeatedly executing forward-propagation and back-propagation iterative calculations so as to continuously adjust the model parameters of the convolution, pooling, and fully connected layers of the constructed primary self-encoding model, thereby completing the pre-training of the model.
Preferably, the primary self-encoding model comprises a plurality of convolution layers, a plurality of pooling layers, and a flattened fully connected layer. The fully connected layer performs feature recognition with the features extracted by the stacked convolution and pooling layers, and softmax regression is used on the fully connected layer. The output of the softmax function is

y_j = exp(z_j) / Σ_{i=1}^{k} exp(z_i)

where k represents the number of output-layer network nodes.
Preferably, the secondary encoder and decoder are further pre-trained in the SAE-based secondary self-encoder with a two-dimensional fusion feature matrix; and taking the two-dimensional fusion feature matrix as the input and output of the stacked secondary self-coding, selecting proper loss functions and iteration times, completing the forward propagation and reverse propagation iterative calculation process, enabling the model to reconstruct the input of the model continuously, and finally extracting the coding layer from the stacked secondary self-coding model after pre-training as an available secondary self-coding model.
Preferably, the depth fusion features are secondarily self-encoded based on the secondary self-encoder model obtained by pre-training, thereby obtaining the secondarily encoded feature set {F′_1, F′_2, ..., F′_sn}.
This summary is provided merely as an introduction to the subject matter that is fully described in the detailed description and the accompanying drawings. The summary should not be considered to describe essential features, nor should it be used to determine the scope of the claims. Furthermore, it is to be understood that both the foregoing summary and the following detailed description are exemplary and explanatory only and are not necessarily restrictive of the claimed subject matter.
Drawings
Various embodiments or examples ("examples") of the present disclosure are disclosed in the following detailed description and drawings. The drawings are not necessarily drawn to scale. In general, the operations of the disclosed methods may be performed in any order, unless otherwise specified in the claims. In the accompanying drawings:
FIG. 1 illustrates a flow chart of a feature extraction method based on artificial feature fusion with convolution features of a deep neural network in accordance with the present invention;
FIG. 1A illustrates a method of temporal extrapolation prediction based on the fusion features output by the method of FIG. 1;
FIG. 2 shows a schematic diagram of a method of obtaining fault prediction data for an electro-hydraulic steering engine in accordance with the present invention;
FIG. 3 shows a block diagram of a one-time self-encoding model based on convolutional neural networks in accordance with the present invention;
FIG. 4 illustrates a flow chart of the operation of the one-time self-encoding and artificial time domain feature extraction shown in FIG. 1;
FIG. 5 shows a schematic diagram of the architecture of the SAE-based secondary self-encoder shown in FIG. 1;
FIG. 6 shows a schematic diagram of feedback angle raw data;
FIG. 7A illustrates a maximum value of an artificial time domain feature obtained based on the flowchart shown in FIG. 4;
FIG. 7B illustrates standard deviations of the artificial time domain features obtained based on the flowchart of FIG. 4;
FIG. 8 is a predicted result obtained by a time-series extrapolation prediction method according to the present invention;
FIG. 9A is full prediction data;
fig. 9B is a partial enlarged view of part of the prediction data.
Detailed Description
Before explaining one or more embodiments of the disclosure in detail, it is to be understood that the embodiments are not limited in their application to the details of construction and to the steps or methods set forth in the following description or illustrated in the drawings.
The disclosed time-series extrapolation prediction method is illustrated in the flowcharts of fig. 1 and 1A: fig. 1 shows the feature extraction method based on fusing artificial features with deep-neural-network convolution features according to the present invention, and fig. 1A shows the time-series extrapolation prediction method based on the fusion features output by the method of fig. 1. The feature extraction method shown in fig. 1 includes the following steps. Step 1: acquire fault prediction data of the electric hydraulic steering engine. Step 2: comprehensively preprocess the fault data. Step 3: perform feature extraction based on a convolutional neural network. Step 4: perform expert-knowledge-based artificial time-domain feature extraction on the training data set. Step 5: feature-stitch the artificial features extracted from empirical knowledge with the high-dimensional hidden-layer features extracted by the CNN feature extraction model. Step 6: perform depth feature fusion based on stacked self-encoders, thereby obtaining the secondary encoding feature values. The secondary encoding feature values obtained in step 6 are sent to the time-series extrapolation prediction model training module shown in fig. 1A, which executes step 7, the training of the time-series extrapolation prediction model, specifically comprising: constructing the time-series extrapolation prediction model; training the extrapolation predictor model; and performing prediction with the trained extrapolation predictor. The method for electric hydraulic steering engine parameter degradation time-series extrapolation prediction based on the secondary self-encoding fusion mechanism is described in detail below with reference to fig. 1 and 1A.
1. Obtaining fault prediction data of an electro-hydraulic steering engine
Because real test conditions and actual service environments are limited, genuine product fault data are often difficult to obtain. Simulation analysis is one of the main means, both domestically and internationally, of addressing this data shortage, and many published results perform fault injection and acquire the corresponding fault data on simulation models. Therefore, to obtain fault prediction data of the electric hydraulic steering engine, the steering engine is structurally modeled with Simulink software and fault simulation is carried out. In operation, performing fault injection with the Simulink model yields data that approximates real fault conditions as closely as possible and enables verification of the fault prediction model.
Aiming at the fault prediction requirement of a steering engine system, firstly, modeling processing is carried out on the steering engine system structure, and then a steering engine control system simulation model is built by using Simulink simulation software and used for generating simulation data. On the basis of a steering engine control system simulation model, selecting a proper steering engine system simulation model fault injection point to perform fault injection and collecting simulation signals for development and verification of a steering engine control system fault prediction model. The method for obtaining the failure prediction data of the electro-hydraulic steering engine is shown in fig. 2.
First, the steering engine control system model is structured according to the steering engine system fault prediction requirements. The structured steering engine system mainly comprises an energy system and a position servo system, as shown in the steering engine system structure analysis module of fig. 2. Its key components mainly include: the power amplifier combination, direct-current motor, electrohydraulic servo valve, hydraulic variable pump, actuating cylinder, operating mechanism, feedback potentiometer, high-pressure safety valve, low-pressure safety valve, oil filter, oil tank, and the like.
Based on the structured steering engine control system model, a simulation model of the steering engine control system is built with Simulink simulation software, and fault injection points are determined on that model. The fault injection points may be selected based on the fault prediction needs and historical fault data, and faults are typically injected on the various components of the steering engine. In this application, for example, the feedback amplification factor may be selected as the fault injection point, with fault injection performed on the feedback potentiometer. Finally, fault simulation and signal acquisition are performed. The acquired data may include various control commands and state signals of the steering engine control system, i.e. time-series data of key parameters of the electric steering engine, for example: control command, unified clock, displacement signal, and feedback angle. The selected key-parameter time-series data form the historical time series of the parameter to be predicted. Because the feedback angle can effectively characterize the health state of the steering engine system, it is selected as the parameter to be predicted in what follows.
2. Comprehensive preprocessing of fault data
The data obtained by the steering engine fault prediction data acquisition unit, such as the feedback angle signal, are sent to the fault data preprocessing unit for comprehensive processing to obtain a training data set and a test data set. Referring to the comprehensive data preprocessing module shown in fig. 1, the processing comprises the following steps.
step 1, sliding window cutting is carried out on key parameter time sequence data, and a sample data set is constructed;
the time sequence data of any key parameter of the electric steering engine acquired by the sensor is X, and X= { X 1 ,x 2 ,...x N Sliding window cuts are made to X to generate corresponding sample data sets. When the window width is W and the step length is s, the number of samples generated by cutting is:
then the corresponding data set is generated as { S ] 1 ,S 2 ,...S sn For { S } 1,nor ,S 2,nor ,...S sn,nor Each sample S in } i,nor Taking the length asAnd taking the data of W as training data, and taking the data of W-W length as prediction data corresponding to the training data.
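The cutting step can be sketched as follows (a minimal sketch; the helper name `sliding_window_cut` is illustrative, not from the patent), with the window count matching sn = ⌊(N − W)/s⌋ + 1:

```python
def sliding_window_cut(x, W, s):
    """Cut series x into windows of width W advancing by step s."""
    N = len(x)
    sn = (N - W) // s + 1            # number of samples generated by cutting
    return [x[i * s : i * s + W] for i in range(sn)]


samples = sliding_window_cut(list(range(10)), W=4, s=2)
print(len(samples), samples[0], samples[-1])
```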
Step 2, carrying out maximum and minimum value normalization processing on the training data set;
To improve data expressiveness and accelerate the convergence of subsequent model training, the training data set must be normalized; the amplitude of the original parameter is scaled by max-min normalization to complete a linear transformation of the data. For a single data sample S_i = {x_1, x_2, ..., x_w}, normalization is implemented by the formula

x_j,nor = (x_j − x_min) / (x_max − x_min)

where x_max and x_min are the maximum and minimum values within the sample, yielding the normalized sample data set {S_1,nor, S_2,nor, ..., S_sn,nor}.
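The max-min normalization of a single sample can be sketched as follows (`min_max_normalize` is an illustrative name; the constant-segment guard is an added assumption, since the patent does not address the x_max = x_min case):

```python
def min_max_normalize(sample):
    """Scale one sample S_i to [0, 1] via (x - x_min) / (x_max - x_min)."""
    lo, hi = min(sample), max(sample)
    if hi == lo:                      # constant segment: avoid division by zero
        return [0.0 for _ in sample]
    return [(x - lo) / (hi - lo) for x in sample]


print(min_max_normalize([2, 4, 6]))
```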
Step 3, constructing a training data set and a test data set;
the data of the first r% are selected from all the data to be used as a training data set, and the rest data are used as a test data set to verify the prediction performance of the model. Generally, r is generally 60 to 80, preferably 70.
3. Feature extraction based on convolutional neural network
The training data set produced by the comprehensive data preprocessing module is fed both to the convolutional-neural-network primary self-encoder and to the expert-knowledge-based artificial time-domain feature extraction module, yielding convolution features and artificial time-domain features respectively. The feature extraction step specifically includes: building a primary self-encoding model based on a convolutional neural network, as shown in fig. 3; pre-training the convolutional primary self-encoder model as shown in fig. 4; and extracting convolution features with the convolutional encoder.
First, a convolutional neural network (CNN)-based primary self-encoding model is constructed and pre-trained with the training data set. Because the two-dimensional convolutional neural network requires three-dimensional input data, the training data set must be constructed accordingly: to meet the input requirement of the two-dimensional primary self-encoding model, the training data set S_train = {S_1,nor, S_2,nor, ..., S_sn,nor} is converted to the data format (sn, w, 1), where sn is the number of samples, w is the data length of each sample, and 1 is the number of channels. The constructed training sample data set is input into the CNN-based primary self-encoding model shown in fig. 3.
A convolutional neural network (Convolutional Neural Network) is a multi-layer supervised-learning neural network whose hidden convolution and pooling layers are the core parts realizing its feature extraction function. The CNN is a neural network specialized for processing data with a grid-like structure; it extracts features from raw data by imitating the biological visual mechanism, and its weight sharing across layers effectively reduces network complexity, avoids the overfitting caused by too little data, and avoids the complexity of data reconstruction in multi-dimensional feature extraction. As shown in fig. 3, the deep convolutional neural network of the present invention comprises a plurality of convolution layers, a plurality of pooling layers, and a flattened fully connected layer.
Convolution layer: the convolution process with nonlinear activation can be described as

y_n^r = ReLU( Σ_m ( w_n^r * x_m^(r−1) ) + b_n^r )

where y_n^r is the output of the nth convolution kernel in the rth convolution layer, x_m^(r−1) is the mth output feature vector of the (r−1)th convolution layer, * represents the convolution operation, w_n^r and b_n^r respectively represent the weight and bias of the nth convolution kernel in the rth convolution layer, and ReLU represents the nonlinear activation function.
Pooling layer: adding a pooling layer reduces the spatial dimension of the convolution features and avoids overfitting. The max-pooling layer is the most common pooling layer; it keeps only the most important part of its input (the highest value) and can be expressed as

p_n^r(j) = max over (j−1)·l < t ≤ j·l of q_n^r(t)

where q_n^r is the feature obtained from the convolution layer, p_n^r is the output of the pooling layer, and l represents the length of the pooling region.
Fully connected layer: with the features extracted by the stacked convolution and pooling layers, the final fully connected layers are used for feature recognition, typically applying softmax regression on the top fully connected layer. The output of the softmax function is defined as

y_j = exp(z_j) / Σ_{i=1}^{k} exp(z_i)

where k represents the number of output-layer network nodes.
The convolution layers use a certain number of convolution kernels to extract different time-domain features of the input data; the pooling layers effectively reduce the size of the parameter matrix and hence the number of parameters in the final fully connected layer, which speeds up calculation and prevents overfitting; finally, the fully connected layer maps the high-dimensional hidden-layer features back to the original input data, thereby training the feature extraction capability of the model.
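The three layer types above can be illustrated numerically in plain Python (a hedged sketch with toy values; the real model operates on the (sn, w, 1) tensors described earlier, and all helper names here are illustrative):

```python
import math

def conv1d_relu(x, kernel, bias=0.0, step=1):
    """Valid 1-D convolution followed by ReLU, as in the convolution-layer formula."""
    f = len(kernel)
    out = []
    for i in range(0, len(x) - f + 1, step):   # output length (w - f)/step + 1
        v = sum(x[i + j] * kernel[j] for j in range(f)) + bias
        out.append(max(0.0, v))                # ReLU nonlinearity
    return out

def max_pool(x, l):
    """Max pooling: keep only the highest value in each region of length l."""
    return [max(x[i : i + l]) for i in range(0, len(x) - l + 1, l)]

def softmax(z):
    """Softmax over k output nodes: exp(z_j) / sum_i exp(z_i)."""
    m = max(z)                                 # subtract max for numerical stability
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]


feat = conv1d_relu([1, 2, 3, 4], [1, 1])       # -> [3.0, 5.0, 7.0]
print(max_pool(feat, 2), softmax([0.0, 0.0]))
```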
Secondly, selecting proper iteration times and a loss function, inputting the constructed three-dimensional training data set into a feature extraction model, and repeatedly executing forward propagation and backward propagation iterative computation processes; in the process, model parameters of the convolution layer, the pooling layer and the full-connection layer are continuously adjusted to finish the pre-training of the model.
The two convolution layers, two pooling layers, and one fully connected layer of the pre-trained model are then taken out, with the weight parameters of the two convolution layers and two pooling layers retained, and assembled into the trained deep-convolutional-neural-network primary self-encoding model.
Finally, based on the pre-trained convolutional-neural-network primary self-encoding model, convolution feature extraction is performed on the training data set {S_1,nor, S_2,nor, ..., S_sn,nor} to obtain the convolution feature set {F_1,CNN, F_2,CNN, ..., F_sn,CNN}.
4. Expert knowledge-based artificial time domain feature extraction of training data sets
As shown in the right-hand diagram of fig. 4, expert-knowledge-based time-domain feature extraction is performed on the cut training data set S_train = {S_{1,nor}, S_{2,nor}, ..., S_{n,nor}}. Specifically, the normalized training data are cut with a sliding window, and for each cut sample different time-domain features are extracted and then normalized.
Sliding-window cutting is performed on the normalized sample data with window length w' and step size 1: a sample S_i = {x_1, x_2, ..., x_w} is cut into w - w' + 1 samples, each of length w', giving {S'_1, S'_2, ..., S'_{w-w'+1}}.
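A minimal sketch of this sliding-window cut (pure Python; step size 1 as in the text):

```python
def sliding_windows(sample, w_prime, step=1):
    """Cut a length-w sample into overlapping windows of length w'.
    With step 1 this yields w - w' + 1 windows, as stated in the text."""
    return [sample[i:i + w_prime]
            for i in range(0, len(sample) - w_prime + 1, step)]

wins = sliding_windows(list(range(10)), 4)  # w = 10, w' = 4 -> 7 windows
```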
For each sample S'_i, eight time-domain features are extracted: maximum value, standard deviation, variance, waveform factor, root mean square, pulse index, margin factor and peak factor. For the window data S'_i the extracted time-domain features are F_i = {f_1, f_2, ..., f_8}; thus for sample S_i the extracted artificial features are {F_1, F_2, ..., F_{w-w'+1}}. The artificial features are normalized using maximum-minimum normalization; for details, refer to step 2 of the integrated data preprocessing.
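The eight features can be sketched as below. The patent does not spell out the exact formulas, so the definitions used here (conventional ones for waveform, pulse, margin and peak factors) are assumptions:

```python
import numpy as np

def time_domain_features(x):
    """Eight common time-domain features of one window; the factor formulas
    are conventional choices, not quoted from the patent."""
    x = np.asarray(x, dtype=float)
    abs_x = np.abs(x)
    rms = np.sqrt(np.mean(x ** 2))
    mean_abs = np.mean(abs_x)
    peak = abs_x.max()
    return {
        "max": x.max(),
        "std": x.std(),
        "var": x.var(),
        "waveform_factor": rms / mean_abs,
        "rms": rms,
        "pulse_index": peak / mean_abs,
        "margin_factor": peak / np.mean(np.sqrt(abs_x)) ** 2,
        "peak_factor": peak / rms,
    }

feats = time_domain_features([3.0, -1.0, 2.0, -2.0])
```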
5. Performing feature stitching on artificial features extracted based on expert knowledge and high-dimensional hidden layer features extracted based on CNN feature extraction model
With continued reference to figs. 1 and 4, after the CNN feature extraction model extracts the high-dimensional hidden-layer features and the artificial time-domain features are extracted, the two are stitched together. The feature matrix extracted by the CNN feature extraction model is M_CNN, with shape ((w - f)/s_f + 1, n_f), where n_f is the number of convolution kernels, s_f is the convolution kernel step size, and f is the convolution kernel size. The artificial time-domain feature matrix extracted by the artificial feature extraction module is M_manual, with shape (w - w' + 1, 8). The two feature matrices M_CNN and M_manual are each flattened and concatenated along the column direction, and the resulting fusion feature has dimension:

n_merge = n_f · ((w - f)/s_f + 1) + 8 · (w - w' + 1)
For each sample S_{i,nor} in the training data set S_train = {S_{1,nor}, S_{2,nor}, ..., S_{n,nor}}, CNN feature extraction and artificial feature extraction are performed and the features are fused. Let the dimension of the fusion feature be n_merge; the training data set can then be reorganized into an (n, n_merge) fusion feature matrix, which serves as the input of the following SAE coding model.
6. Depth feature fusion based on stacked self-encoder (SAE)
Referring to fig. 5, fig. 5 is a schematic diagram of the SAE-based secondary self-encoder shown in fig. 1. Here, the secondary self-encoder is used for performing depth feature fusion based on stacked self-encoders, and specifically comprises the steps of constructing the secondary self-encoder and the decoder, training the secondary encoder and the decoder and performing depth feature fusion by using the stacked secondary self-encoder.
First, a stacked secondary self-encoder and decoder model is constructed, the model structure of which is shown in fig. 5, and the number of encoding layers is the same as the number of decoding layers, so that the model has better secondary encoding capability on depth characteristics.
The secondary self-encoder model is pre-trained using the two-dimensional fusion feature matrix obtained in step 5: the matrix serves as both input and output of the stacked secondary self-encoder model, a suitable loss function and number of iterations are selected, and the forward- and back-propagation iterative computations are carried out so that the model continuously reconstructs its own input. Finally, the coding layers of the pre-trained stacked secondary self-encoder model are extracted as the usable secondary self-encoder model.
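The pre-training loop (input equals output, forward/backward iterations minimizing reconstruction error) can be sketched with a single-hidden-layer autoencoder in NumPy; the matrix sizes, learning rate and tanh activation are illustrative assumptions, not the patent's actual hyperparameters:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 12))   # stand-in for the (n, n_merge) fusion matrix

d_in, d_hid = X.shape[1], 4
W1 = rng.normal(scale=0.1, size=(d_in, d_hid))   # encoding layer
W2 = rng.normal(scale=0.1, size=(d_hid, d_in))   # decoding layer

def forward(X):
    H = np.tanh(X @ W1)          # coded features
    return H, H @ W2             # reconstruction of the input

_, R0 = forward(X)
loss0 = np.mean((R0 - X) ** 2)

lr = 0.05
for _ in range(300):             # forward + backward iterative computation
    H, R = forward(X)
    err = (R - X) / X.size       # d(MSE)/dR up to the factor 2 applied below
    W2 -= lr * 2 * H.T @ err
    W1 -= lr * 2 * X.T @ ((err @ W2.T) * (1 - H ** 2))

H_coded, R = forward(X)
loss1 = np.mean((R - X) ** 2)    # reconstruction error after pre-training
```

After pre-training, only the encoding half (here `W1` with tanh) would be kept as the usable secondary self-encoder.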
Finally, secondary self-coding is performed on the depth fusion features using the pre-trained secondary self-encoder model, yielding the secondary coding feature set {F'_1, F'_2, ..., F'_sn}.
7. Training a predictive model for time-series extrapolation
The block diagram shown in fig. 1 illustrates the steps performed to train the time-series extrapolation prediction model, specifically: constructing the time-series extrapolation prediction model, training the extrapolation predictor model, and performing prediction with the extrapolation predictor.
Step 7.1: and constructing a time sequence extrapolation predictor by utilizing a convolutional neural network primary self-encoder obtained in a characteristic extraction process based on the convolutional neural network and a stacked secondary self-encoder model obtained in a secondary self-encoding process based on SAE, wherein the extrapolation predictor fuses CNN characteristics and artificial characteristics and performs secondary encoding on depth characteristics, and then establishes a mapping relation between the secondary encoding characteristics and tag data so as to complete extrapolation prediction.
Step 7.2: and comprehensively training a CNN convolution feature extractor and a time sequence extrapolation predictor. The time sequence extrapolation prediction model takes original input data as input, firstly carries out artificial feature extraction and utilizes C after pretrainingThe NN feature extraction model performs CNN convolution feature extraction on the original data, performs feature fusion on the CNN convolution feature and the artificial time domain feature, and tags the training data S trainy And outputting the model as a time sequence extrapolation prediction model, thereby completing the training of an extrapolation predictor model.
Step 7.3: the existing data are predicted with the trained time-series extrapolation prediction model. For input data of length w, the predicted data length is W - w; a data segment of length 2w - W is cut from the end of the existing data and spliced with the predicted data to serve as the input of a new round of prediction, iterating continuously until the manually preset prediction length L_p is reached, at which point prediction ends.
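The iterative extrapolation of step 7.3 can be sketched as follows; `predict_fn` is a hypothetical stand-in for the trained extrapolation model, and the toy "repeat the last value" predictor exists only to make the loop runnable:

```python
def extrapolate(history, predict_fn, w, W, L_p):
    """Iteratively extend `history` until L_p new points are produced.
    predict_fn maps a length-w input window to W - w predicted points."""
    out = []
    window = list(history[-w:])
    while len(out) < L_p:
        pred = list(predict_fn(window))            # length W - w
        out.extend(pred)
        # splice the last 2w - W existing points with the prediction:
        # (2w - W) + (W - w) = w, a full new input window
        window = window[-(2 * w - W):] + pred
    return out[:L_p]

# toy run: w = 6, W = 8, predictor repeats the last value twice
pred = extrapolate(list(range(10)), lambda win: [win[-1]] * 2, w=6, W=8, L_p=5)
```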
And 7.4, sending the verification set data obtained by the comprehensive data preprocessing unit into a time sequence extrapolation prediction model, and combining corresponding prediction indexes to complete prediction performance evaluation of the model.
[ feature extraction example based on feature fusion ]
The invention creatively designs a feature extraction method based on the fusion of artificial features and deep-neural-network convolution features, which directly affects the degradation trend prediction and health assessment of the hydraulic actuation system by the extrapolation prediction model. On this basis, the "leakage in the actuator cylinder" fault of a steering engine system is used for illustration, with the feedback angle data collected at the selected measuring point.
The structural model of the electric hydraulic steering engine is shown in fig. 2; the fault to be predicted is set as the "intra-cylinder leakage" fault, and the data are feedback angle time-domain data. After the feedback angle data are obtained, they are preprocessed. In this case the window length is 9000 and the step size is 1; for each window, the first 6000 points are used as the input of the convolutional-neural-network-based time-series extrapolation prediction model and the last 3000 points as the label data of the window, i.e. the prediction data. Of all the normalized feedback angle data, the first 70% are selected as the training data set and the remaining 30% serve as the verification data set for verifying the predictive performance of the model.
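With the concrete sizes given above (window 9000, input 6000, label 3000, chronological 70/30 split), the preprocessing can be sketched as follows; the function itself is an illustrative assumption, not the patent's code:

```python
def make_dataset(series, W=9000, w=6000, step=1, train_frac=0.7):
    """Window the feedback-angle series into (input, label) pairs
    and split them chronologically into training and verification sets."""
    X, Y = [], []
    for i in range(0, len(series) - W + 1, step):
        win = series[i:i + W]
        X.append(win[:w])        # first 6000 points: model input
        Y.append(win[w:])        # last 3000 points: label / prediction data
    n_train = int(len(X) * train_frac)
    return (X[:n_train], Y[:n_train]), (X[n_train:], Y[n_train:])

# toy run with scaled-down sizes
(train_X, train_Y), (val_X, val_Y) = make_dataset(
    list(range(20)), W=10, w=6, step=2, train_frac=0.5)
```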
1. After obtaining training data, feature extraction based on convolutional neural network is carried out
Considering the parameter characteristics of the steering engine feedback angle data, sample features are extracted from the normalized sample data using a convolutional neural network. With continued reference to figs. 1, 3 and 4: the convolutional layers use a number of convolution kernels to extract different time-domain features from the input data; the pooling layers effectively shrink the parameter matrix, reducing the number of parameters in the final fully connected layer, speeding up computation and preventing overfitting; the two convolutional layers map the original data into a high-dimensional implicit space to learn the nonlinear features of the data; the flatten layer and fully connected layer then remap the high-dimensional sample features back to the original input data to learn the key features of the original samples. The module that maps the original data samples to the low-dimensional feature space is selected as the encoder of the model, and the module that reconstructs samples from the screened features is selected as the decoder of the model. The model structure parameters selected by the invention are shown in table 1.
TABLE 1 convolutional neural network based primary self-encoder model parameters
A suitable number of iterations and a loss function are selected, the constructed three-dimensional training data set is fed into the feature extraction model, and the forward- and back-propagation iterative computations are executed repeatedly, continuously adjusting the model parameters of the convolutional, pooling and fully connected layers to complete the pre-training of the model. The two convolutional layers, two pooling layers and one fully connected layer of the pre-trained model are then taken out with their weight parameters retained, and assembled into the CNN feature extraction model.
2. Expert knowledge-based time domain feature extraction of a segmented training dataset
Specifically, with the window length of 3000 and the step length of 3000, the training data of each window is cut, and the data of each sub-window is subjected to artificial feature extraction, and the feature extraction result is shown in fig. 7A and 7B.
3. Performing feature stitching on artificial features extracted based on expert knowledge and high-dimensional hidden layer features extracted based on CNN feature extraction model
4. Depth feature fusion based on stacked self-encoders
The secondary self-encoder model is pre-trained using the two-dimensional fusion feature matrix: the matrix serves as both input and output of the stacked secondary self-encoder model, a suitable loss function and number of iterations are selected, and the forward- and back-propagation iterative computations are carried out so that the model continuously reconstructs its own input. Finally, the coding layer of the pre-trained stacked secondary self-encoder model is extracted as the usable secondary self-coding model.
5. Time series extrapolation prediction model training
A time-series extrapolation predictor is constructed using the pre-trained convolutional neural network one-time self-encoder and the stacked secondary self-encoder model; the extrapolation predictor fuses CNN features and artificial features, secondarily encodes the depth features, and then establishes a mapping relation between the secondary coding features and the label data to complete extrapolation prediction. The existing data are predicted with the trained prediction model, the extrapolated data are spliced with the original data to serve as the input of a new round of prediction, and the iteration continues until the manually preset prediction length is reached, at which point prediction ends. The prediction result is shown in fig. 8, and the comparison of the prediction result with the real label is shown in figs. 9A and 9B.
Although the invention has been described with reference to the embodiments shown in the drawings, equivalent or alternative means may be used without departing from the scope of the claims. The components described and illustrated herein are merely examples of systems/devices and methods that may be used to implement embodiments of the present disclosure and may be replaced with other devices and components without departing from the scope of the claims.
Claims (9)
1. An electric hydraulic steering engine parameter degradation time sequence extrapolation prediction method based on a secondary self-coding fusion mechanism comprises the following steps:
acquiring fault prediction data of an electric hydraulic steering engine;
comprehensively preprocessing the fault data to obtain a training data set and a test data set;
constructing a time sequence extrapolation predictor, wherein:
the time sequence extrapolation predictor comprises a convolutional neural network primary self-encoder, an artificial time domain feature extractor based on expert knowledge and a SAE-based secondary self-encoder;
the time sequence extrapolation predictor fuses the training data set to obtain fusion characteristics, the secondary self-encoder secondarily encodes the fusion characteristics, and then a mapping relation is established between the secondarily encoded characteristics and the tag data;
comprehensively training the convolutional neural network one-time self-encoder and the time sequence extrapolation predictor to obtain a trained time sequence extrapolation predictor; and
and predicting the existing data by using the trained time sequence extrapolation prediction model.
2. The method for predicting the parameter degradation time-series extrapolation of an electric hydraulic steering engine according to claim 1, wherein the time-series extrapolation prediction model takes the original training data set as input, performs artificial feature extraction with the artificial time-domain feature extractor, performs convolution feature extraction on the original training data set using the pre-trained convolutional neural network feature extraction model, fuses the convolution features with the artificial time-domain features, and takes the training data labels S_{train,y} as the model output, thereby completing the training of the extrapolation predictor model.
3. The method for predicting the parameter degradation time-series extrapolation of an electric hydraulic steering engine according to claim 1, wherein, when predicting the existing data, for input data of length w the predicted data length is W - w; a data segment of length 2w - W is cut from the existing data and spliced with the predicted data to serve as the input of a new round of prediction, iterating continuously back and forth until the manually preset prediction length L_p is reached, at which point prediction ends.
4. The method for predicting the time-series extrapolation of the parameter degradation of the electric hydraulic steering engine according to claim 1, wherein verification set data obtained through comprehensive pretreatment is sent into the time-series extrapolation prediction model, and the prediction performance evaluation of the model is completed by combining corresponding prediction indexes.
5. The method for predicting the parameter degradation time-series extrapolation of an electric hydraulic steering engine according to claim 1, wherein the step of comprehensive preprocessing comprises sliding-window cutting of the critical parameter time-series data X = {x_1, x_2, ..., x_N}: sliding-window cutting is performed on X to generate a corresponding sample data set, and when the window width is W and the step size is s, the number of samples generated by cutting is sn = ⌊(N - W)/s⌋ + 1;
the corresponding generated data set is {S_1, S_2, ..., S_sn}; for each sample S_{i,nor} in {S_{1,nor}, S_{2,nor}, ..., S_{sn,nor}}, the data of length w is taken as training data and the data of length W - w as the prediction data corresponding to the training data.
6. The method for extrapolating and predicting parameter degradation timing of an electro-hydraulic steering engine as set forth in claim 1, wherein constructing the convolutional neural network one-time self-encoder comprises: converting the training data set S_train = {S_{1,nor}, S_{2,nor}, ..., S_{sn,nor}} into a three-dimensional data format (sn, w, 1), where sn is the number of samples, w is the data length of each sample, and 1 is the number of channels; and inputting the constructed three-dimensional training data set into the one-time self-encoder to repeatedly execute the forward- and back-propagation iterative computations, continuously adjusting the model parameters of the convolutional, pooling and fully connected layers of the constructed one-time self-coding model to complete the pre-training of the model.
7. The method of claim 6, wherein the one-time self-coding model comprises a plurality of convolutional layers, a plurality of pooling layers, a flatten layer and a fully connected layer; the fully connected layer performs feature recognition using the features extracted by the multi-layer stacked convolutional and pooling layers, a softmax regression is used on the fully connected layer, and the output of the softmax function is

y_i = e^{z_i} / Σ_{j=1}^{k} e^{z_j}, i = 1, 2, ..., k,

where k represents the number of output-layer network nodes.
8. The method for electro-hydraulic steering engine parameter degradation timing extrapolation prediction as set forth in claim 1, wherein the secondary encoder and decoder in the SAE-based secondary self-encoder are further pre-trained with the two-dimensional fusion feature matrix: the matrix is taken as both input and output of the secondary self-coding, a suitable loss function and number of iterations are selected, and the forward- and back-propagation iterative computations are completed so that the model continuously reconstructs its own input; finally, the coding layer of the pre-trained secondary self-coding model is extracted as the usable secondary self-coding model.
9. The method for extrapolating and predicting parameter degradation time sequence of an electric hydraulic steering engine according to claim 8, wherein the depth fusion features are subjected to secondary self-coding based on the secondary self-encoder model obtained by pre-training, thereby obtaining the secondary coding feature set {F'_1, F'_2, ..., F'_sn}.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110824289.9A CN113536682B (en) | 2021-07-21 | 2021-07-21 | Electric hydraulic steering engine parameter degradation time sequence extrapolation prediction method based on secondary self-coding fusion mechanism |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113536682A CN113536682A (en) | 2021-10-22 |
CN113536682B true CN113536682B (en) | 2024-01-23 |
Family
ID=78100684
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110824289.9A Active CN113536682B (en) | 2021-07-21 | 2021-07-21 | Electric hydraulic steering engine parameter degradation time sequence extrapolation prediction method based on secondary self-coding fusion mechanism |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113536682B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114063601A (en) * | 2021-11-12 | 2022-02-18 | 江苏核电有限公司 | Equipment state diagnosis system and method based on artificial intelligence |
CN114399066B (en) * | 2022-01-15 | 2023-04-18 | 中国矿业大学(北京) | Mechanical equipment predictability maintenance system and maintenance method based on weak supervision learning |
CN116776228B (en) * | 2023-08-17 | 2023-10-20 | 合肥工业大学 | Power grid time sequence data decoupling self-supervision pre-training method and system |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109634139A (en) * | 2018-12-10 | 2019-04-16 | 中国航天空气动力技术研究院 | Hypersonic aircraft navigation and control system semi-matter simulating system and method |
CN112257760A (en) * | 2020-09-30 | 2021-01-22 | 北京航空航天大学 | Method for detecting abnormal network communication behavior of host based on time sequence die body |
CN113035280A (en) * | 2021-03-02 | 2021-06-25 | 四川大学 | RBP binding site prediction algorithm based on deep learning |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11531802B2 (en) * | 2019-10-18 | 2022-12-20 | Taiwan Semiconductor Manufacturing Company Ltd. | Layout context-based cell timing characterization |
Also Published As
Publication number | Publication date |
---|---|
CN113536682A (en) | 2021-10-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113536682B (en) | Electric hydraulic steering engine parameter degradation time sequence extrapolation prediction method based on secondary self-coding fusion mechanism | |
CN113536681B (en) | Electric steering engine health assessment method based on time sequence extrapolation prediction | |
CN112149316B (en) | Aero-engine residual life prediction method based on improved CNN model | |
Ke et al. | Short-term electrical load forecasting method based on stacked auto-encoding and GRU neural network | |
CN113536683B (en) | Feature extraction method based on fusion of artificial features and convolution features of deep neural network | |
Liu et al. | A novel deep learning-based encoder-decoder model for remaining useful life prediction | |
CN112838946B (en) | Method for constructing intelligent sensing and early warning model based on communication network faults | |
CN115545321A (en) | On-line prediction method for process quality of silk making workshop | |
CN113836783B (en) | Digital regression model modeling method for main beam temperature-induced deflection monitoring reference value of cable-stayed bridge | |
CN112329172A (en) | Shield tunneling machine cutter head torque prediction method and system based on parallel neural network | |
CN116680105A (en) | Time sequence abnormality detection method based on neighborhood information fusion attention mechanism | |
CN113221458B (en) | Multi-step prediction method and system for shield cutter head torque | |
Wang et al. | Stock market prediction using artificial neural networks based on HLP | |
CN115840893A (en) | Multivariable time series prediction method and device | |
CN115905848A (en) | Chemical process fault diagnosis method and system based on multi-model fusion | |
CN116843057A (en) | Wind power ultra-short-term prediction method based on LSTM-ViT | |
CN116007937A (en) | Intelligent fault diagnosis method and device for mechanical equipment transmission part | |
CN112347531A (en) | Three-dimensional crack propagation path prediction method and system for brittle marble | |
CN117293790A (en) | Short-term power load prediction method considering prediction error uncertainty | |
CN116826727B (en) | Ultra-short-term wind power prediction method and prediction system based on time sequence representation and multistage attention | |
CN118296452A (en) | Industrial equipment fault diagnosis method based on transducer model optimization | |
CN113221450A (en) | Dead reckoning method and system for sparse and uneven time sequence data | |
CN113283642A (en) | Poultry feed detection and formula system | |
CN117852686A (en) | Power load prediction method based on multi-element self-encoder | |
CN112232570A (en) | Forward active total electric quantity prediction method and device and readable storage medium |
Legal Events

Date | Code | Title | Description
---|---|---|---
 | PB01 | Publication | 
 | SE01 | Entry into force of request for substantive examination | 
 | GR01 | Patent grant | 