CN113536682A - Electro-hydraulic steering engine parameter degradation time sequence extrapolation prediction method based on secondary self-coding fusion mechanism - Google Patents

Electro-hydraulic steering engine parameter degradation time sequence extrapolation prediction method based on secondary self-coding fusion mechanism

Info

Publication number
CN113536682A
Authority
CN
China
Prior art keywords
data
model
extrapolation
self
time
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110824289.9A
Other languages
Chinese (zh)
Other versions
CN113536682B (en)
Inventor
马剑
邹新宇
周安
张聪
张统
丁宇
吕琛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beihang University
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University filed Critical Beihang University
Priority to CN202110824289.9A priority Critical patent/CN113536682B/en
Publication of CN113536682A publication Critical patent/CN113536682A/en
Application granted granted Critical
Publication of CN113536682B publication Critical patent/CN113536682B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00Computer-aided design [CAD]
    • G06F30/20Design optimisation, verification or simulation
    • G06F30/27Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/047Probabilistic or stochastic networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2119/00Details relating to the type or aim of the analysis or the optimisation
    • G06F2119/04Ageing analysis or optimisation against ageing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Evolutionary Biology (AREA)
  • Medical Informatics (AREA)
  • Computer Hardware Design (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention provides a method for predicting parameter degradation time series extrapolation of an electro-hydraulic steering engine based on a secondary self-coding fusion mechanism, which comprises the following steps: acquiring fault prediction data of the electro-hydraulic steering engine; comprehensively preprocessing the fault data to obtain a training data set and a test data set; constructing a time sequence extrapolation predictor: the time sequence extrapolation predictor comprises a convolutional neural network primary self-encoder, an expert knowledge-based artificial time domain feature extractor and an SAE-based secondary self-encoder; the time sequence extrapolation predictor fuses the training data set to obtain fusion characteristics, the secondary self-encoder performs secondary encoding on the fusion characteristics, and then a mapping relation is established between the secondary encoding characteristics and label data; comprehensively training the convolutional neural network primary self-encoder and the time sequence extrapolation predictor to obtain a trained time sequence extrapolation predictor; and predicting the existing data by using the trained time sequence extrapolation prediction model.

Description

Electro-hydraulic steering engine parameter degradation time sequence extrapolation prediction method based on secondary self-coding fusion mechanism
Technical Field
The invention relates to the prediction of degradation trend of an electro-hydraulic steering engine, in particular to the extrapolation prediction of the parameter degradation time sequence of the electro-hydraulic steering engine.
Background
The electro-hydraulic steering engine system is a complex electromechanical integrated system and a high-precision position servo system, and it has an important influence on the attitude control of an aircraft. With the continuous development of science and technology, advanced aircraft widely adopt all-digital servo steering engine systems with high speed, high precision and a large power-to-weight ratio, and current engineering applications place higher requirements on the reliability of the steering engine. Predicting the degradation process of key steering engine parameters is an important aspect of steering engine reliability research: accurately predicting the future time series of key parameters and mastering the laws of their variation trends is of great significance for reasonably arranging maintenance plans, improving flight quality, guaranteeing flight safety, and reducing life-cycle cost. The traditional time-series extrapolation prediction method usually adopts a time-series decomposition strategy, predicting by decomposing the time series into a trend term, a seasonal term, a residual term and the like, and finally obtaining the extrapolated time series of the parameter by fusing the prediction results. However, for a complex electromechanical system such as the electro-hydraulic steering engine, the degradation process is often nonlinear, so the time series of its degradation parameters is often difficult to decompose effectively with the traditional method, which greatly complicates the prediction of the future time series of key steering engine parameters.
In order to solve the problem, an electro-hydraulic steering engine parameter degradation time sequence extrapolation prediction method based on an artificial feature and convolution feature secondary self-coding fusion mechanism is provided. The method combines the artificial time domain characteristics and the convolution depth characteristics, realizes characteristic fusion through a secondary self-coding mechanism, can directly map the time sequence dependency relationship and the variation trend of the original parameters into the hidden layer depth characteristics, avoids the problem of sequence decomposition in the traditional method, and provides a more practical method for the extrapolation prediction problem of the key parameter degradation time sequence of the electro-hydraulic steering engine.
Disclosure of Invention
In order to solve the problems in the prior art, the invention provides an electro-hydraulic steering engine parameter degradation time sequence extrapolation prediction method based on a secondary self-coding fusion mechanism.
According to one aspect of the invention, an electro-hydraulic steering engine parameter degradation time series extrapolation prediction method based on a secondary self-coding fusion mechanism is provided, and the method comprises the following steps: acquiring fault prediction data of the electro-hydraulic steering engine; comprehensively preprocessing the fault data to obtain a training data set and a test data set; constructing a time sequence extrapolation predictor: the time sequence extrapolation predictor comprises a convolutional neural network primary self-encoder, an expert knowledge-based artificial time domain feature extractor and an SAE-based secondary self-encoder; the time sequence extrapolation predictor fuses the training data set to obtain fusion characteristics, the secondary self-encoder performs secondary encoding on the fusion characteristics, and then a mapping relation is established between the secondary encoding characteristics and label data; comprehensively training the convolutional neural network primary self-encoder and the time sequence extrapolation predictor to obtain a trained time sequence extrapolation predictor; and predicting the existing data by using the trained time sequence extrapolation prediction model.
Preferably, the time-series extrapolation prediction model takes an original training data set as input, first performs artificial feature extraction with the artificial time-domain feature extractor, performs convolution feature extraction on the original training data set using the pre-trained convolutional neural network feature extraction model, then fuses the convolution features with the artificial time-domain features, and takes the label S_trainy of the training data as the output of the time-series extrapolation prediction model to complete the training of the extrapolation predictor model.
Preferably, when the existing data is predicted, for input data of length w the predicted data length is W − w; a data segment of length 2w − W is intercepted from the existing data and spliced with the predicted data as the input of a new round of prediction, and this is iterated until the manually preset prediction length L_p is reached, at which point the prediction ends.
Preferably, the verification set data obtained through comprehensive preprocessing is sent to the time series extrapolation prediction model, and the prediction performance evaluation of the model is completed by combining with the corresponding prediction index.
Preferably, the comprehensive preprocessing step includes performing sliding-window cutting on the key-parameter time series data X = {x_1, x_2, ..., x_N} to generate a corresponding sample data set. When the window width is W and the step length is s, the number of samples generated by cutting is

    sn = ⌊(N − W) / s⌋ + 1

The corresponding data set generated is {S_1, S_2, ..., S_sn}; after normalization it becomes {S_1,nor, S_2,nor, ..., S_sn,nor}, and for every sample S_i,nor the first w points are taken as training data and the remaining W − w points as the corresponding prediction data.
Preferably, the construction of the convolutional neural network primary self-encoder comprises converting the training data set S_train = {S_1,nor, S_2,nor, ..., S_sn,nor} into a three-dimensional data format (sn, w, 1) and inputting the constructed three-dimensional training data set into the primary self-encoder to repeatedly execute the forward-propagation and back-propagation iterative computation, so that the model parameters of the convolution layers, pooling layers and fully-connected layer of the constructed primary self-encoding model are continuously adjusted to complete the pre-training of the model, where {S_1,nor, S_2,nor, ..., S_sn,nor} is the normalized sample data set, sn is the number of samples, w is the data length of each sample, and 1 is the number of channels.
Preferably, the primary self-coding model comprises a plurality of convolution layers, a plurality of pooling layers and a Flatten fully-connected layer; the fully-connected layer performs feature identification using the features extracted by the multi-layer stack of convolution and pooling layers, softmax regression is used on the fully-connected layer, and the output of the softmax function is

    softmax(z_j) = exp(z_j) / Σ_{i=1}^{k} exp(z_i),

where k represents the number of output-layer network nodes.
Preferably, in the SAE-based secondary self-encoder the secondary encoder and decoder are pre-trained with a two-dimensional fusion feature matrix; the two-dimensional fusion feature matrix is taken as both the input and the output of the stacked secondary self-encoder, a suitable loss function and number of iterations are selected, the forward- and back-propagation iterative computation is completed so that the model continuously reconstructs its own input, and finally the coding layer is extracted from the pre-trained stacked secondary self-coding model as the usable secondary self-coding model.
Preferably, the depth fusion features are secondarily self-encoded based on the secondary self-encoder model obtained through pre-training, so as to obtain the secondary coding feature set {F′_1, F′_2, ..., F′_sn}.
This summary is provided merely as an introduction to the subject matter that is fully described in the detailed description and the accompanying drawings. This summary should not be construed to describe essential features nor should it be used to determine the scope of the claims. Furthermore, it is to be understood that both the foregoing summary of the invention and the following detailed description are exemplary and explanatory only and are not necessarily restrictive of the subject matter claimed.
Drawings
Various embodiments or examples ("examples") of the disclosure are disclosed in the following detailed description and accompanying drawings. The drawings are not necessarily to scale. In general, the operations of the disclosed methods may be performed in any order, unless otherwise specified in the claims. In the drawings:
FIG. 1 is a flow chart of a feature extraction method based on the fusion of artificial features and convolution features of a deep neural network according to the invention;
FIG. 1A illustrates a time-series extrapolation prediction method based on a fused feature output by the method of FIG. 1;
FIG. 2 shows a schematic diagram of a method for obtaining electro-hydraulic steering engine fault prediction data according to the invention;
FIG. 3 illustrates a graph of a primary self-encoding model structure based on a convolutional neural network according to the present invention;
FIG. 4 illustrates a flow chart of the operation of the primary self-encoding and artificial time domain feature extraction shown in FIG. 1;
FIG. 5 is a schematic diagram illustrating the structure of the SAE-based secondary auto-encoder shown in FIG. 1;
FIG. 6 shows a feedback angle raw data diagram;
FIG. 7A illustrates a maximum value of an artificial temporal feature obtained based on the flowchart shown in FIG. 4;
FIG. 7B illustrates a standard deviation of an artificial temporal feature obtained based on the flowchart shown in FIG. 4;
FIG. 8 is a prediction result from a time series extrapolation prediction method according to the present invention;
FIG. 9A is the total prediction data;
fig. 9B is a partially enlarged view of part of the prediction data.
Detailed Description
Before explaining one or more embodiments of the present disclosure in detail, it is to be understood that the embodiments are not limited in their application to the details of construction and to the procedures or methods set forth in the following description or illustrated in the drawings.
The time-series extrapolation prediction method disclosed by the invention is illustrated by the method flow charts shown in fig. 1 and fig. 1A, wherein fig. 1 shows a flow chart of the feature extraction method based on the fusion of artificial features and deep-neural-network convolution features, and fig. 1A shows the time-series extrapolation prediction method based on the fused features output by the method of fig. 1. The feature extraction method shown in fig. 1 includes the following steps. Step 1: acquiring fault prediction data of the electro-hydraulic steering engine. Step 2: comprehensively preprocessing the fault data. Step 3: performing feature extraction based on a convolutional neural network. Step 4: carrying out expert-knowledge-based artificial time-domain feature extraction on the training data set. Step 5: splicing the artificial features extracted from empirical knowledge with the high-dimensional hidden-layer features extracted by the CNN feature extraction model. Step 6: performing depth feature fusion based on the stacked self-encoder to obtain the secondary coding feature values. The secondary coding feature values obtained in step 6 are sent to the time-series extrapolation prediction model training module shown in fig. 1A, and step 7 is executed: training the time-series extrapolation prediction model, which specifically comprises constructing the time-series extrapolation prediction model, training the extrapolation predictor model, and performing prediction with the trained extrapolation predictor. The method for predicting the parameter degradation time series extrapolation of the electro-hydraulic steering engine based on the secondary self-coding fusion mechanism is described in detail below with reference to fig. 1 and fig. 1A.
Firstly, acquiring fault prediction data of an electro-hydraulic steering engine
Limited by practical test conditions and the actual use environment, real fault data of such products are widely difficult to obtain. Simulation analysis is one of the main means, at home and abroad, of addressing this data shortage, and a large body of research performs fault injection based on a simulation model to obtain corresponding fault data. Therefore, in order to obtain fault prediction data of the electro-hydraulic steering engine, structural modeling and fault simulation of the electro-hydraulic steering engine are carried out with Simulink software. In operation, fault injection on the Simulink model yields data as close as possible to the real fault condition and enables verification of the fault prediction model.
Aiming at the fault prediction requirement of the steering engine system, firstly modeling the structure of the steering engine system, and then building a simulation model of the steering engine control system by using Simulink simulation software for generating simulation data. On the basis of the steering engine control system simulation model, a proper steering engine system simulation model fault injection point is selected for fault injection and simulation signals are collected for development and verification of a steering engine control system fault prediction model. The method for acquiring the fault prediction data of the electro-hydraulic steering engine is shown in figure 2.
Firstly, aiming at the fault prediction requirement of the steering engine system, the steering engine control system model is structured. The structured steering engine system mainly comprises an energy system and a position servo system, as shown in the steering engine system structure analysis module in fig. 2; its key components mainly include: the power amplifier assembly, direct-current motor, electro-hydraulic servo valve, hydraulic variable pump, actuator cylinder, operating mechanism, feedback potentiometer, high-pressure safety valve, low-pressure safety valve, oil filter, oil tank and the like.
On the basis of structuring the steering engine control system model, a simulation model of the steering engine control system is built with Simulink simulation software, and fault injection points are determined on the basis of the simulation model. The fault injection points can be selected according to the fault prediction needs and historical fault data, and are generally injected on the individual components of the steering engine. In the present application, for example, the feedback amplification factor may be selected as the fault injection point, and fault injection is performed on the feedback potentiometer. Finally, fault simulation and signal acquisition are carried out. The collected signals can be the various control commands and status signals of the steering engine control system; they constitute the key-parameter time series data of the electric steering engine in this application and include, for example: the control instruction, a unified clock, the displacement signal and the feedback angle. The selected key-parameter time series data form the historical time series of the parameter to be predicted. The feedback angle can effectively characterize the health state of the steering engine system, and is therefore selected as the parameter to be predicted subsequently.
Secondly, comprehensive preprocessing is carried out on fault data
Data obtained by the steering engine fault prediction data acquisition unit, such as feedback angle signals, are sent to a fault data preprocessing unit for comprehensive processing to obtain a training data set and a test data set, specifically referring to a comprehensive data preprocessing module shown in fig. 1, which includes:
step 1, performing sliding window cutting on the key parameter time sequence data, and constructing a sample data set;
the time sequence data of key parameters of any electric steering engine collected by the sensor is X, X ═ X1,x2,...xNAnd performing sliding window cutting on the X to generate a corresponding sample data set. When the window width is W and the step size is s, the number of samples generated by cutting is:
Figure BDA0003173094030000061
then the corresponding data set is generated as S1,S2,...SsnIs to { S }1,nor,S2,nor,...Ssn,norEvery sample S ini,norTaking data with the length of W as training data, and taking data with the length of W-W as prediction data corresponding to the training data.
Step 2, carrying out maximum and minimum value normalization processing on the training data set;
In order to improve the data expression capacity and accelerate convergence of the subsequent model training, the training data set needs to be normalized; the amplitude of the original parameter is scaled by the max-min normalization method to complete a linear transformation of the data. For a single data sample S_i = {x_1, x_2, ..., x_w}, each point is normalized by the formula

    x_j,nor = (x_j − min(S_i)) / (max(S_i) − min(S_i)),

yielding the normalized sample data set {S_1,nor, S_2,nor, ..., S_sn,nor}.
Step 3, constructing a training data set and a testing data set;
and selecting the first r% of data from all the data as a training data set, and using the rest data as a test data set for verifying the prediction performance of the model. Generally, r is generally 60 to 80, preferably 70.
Thirdly, feature extraction based on convolution neural network is carried out
The training data set obtained from the comprehensive data preprocessing module is sent to the convolutional neural network primary self-encoder and to the expert-knowledge-based artificial time-domain feature extraction module, respectively, to obtain the convolution features and the artificial time-domain features. The feature extraction step specifically comprises: constructing a primary self-coding model based on a convolutional neural network, as shown in fig. 3; pre-training the convolutional primary self-encoder model; and extracting convolution features with the convolutional encoder, as shown in fig. 4.
Firstly, a primary self-coding model based on a convolutional neural network (CNN) is constructed and pre-trained using the training data set. Since the convolutional neural network requires three-dimensional input data, the training data set must be constructed accordingly: the training data set S_train = {S_1,nor, S_2,nor, ..., S_sn,nor} is converted into the data format (sn, w, 1), where sn is the number of samples, w is the data length of each sample, and 1 is the number of channels. The constructed training sample data set is then input into the CNN-based primary self-coding model shown in fig. 3.
A convolutional neural network (CNN) is a multi-layer supervised-learning neural network in which the convolution layers and pooling (sub-sampling) layers of the hidden layers are the core parts realizing the feature extraction function. The CNN is specialized for processing data with a grid-like structure; it extracts features from raw data by imitating the biological visual mechanism, and the weight sharing across different CNN layers effectively reduces network complexity, avoids the over-fitting problem caused by too small a data quantity, and avoids the complexity of data reconstruction during multi-dimensional feature extraction. As shown in fig. 3, the deep convolutional neural network of the present invention includes a plurality of convolution layers, a plurality of pooling layers, and a Flatten fully-connected layer.
Convolution layer: the convolution process with nonlinear activation can be described as

    y_n^r = ReLU( Σ_m y_m^(r−1) * w_n^r + b_n^r ),

where y_n^r is the output of the n-th convolution kernel in the r-th convolution layer, y_m^(r−1) is the m-th output feature vector of the (r−1)-th convolution layer, * denotes the convolution operation, w_n^r and b_n^r denote the weight and bias of the n-th convolution kernel in the r-th convolution layer, and ReLU denotes the nonlinear activation function.
Pooling layer: the spatial dimension of the convolution features can be reduced by adding pooling layers, which also helps avoid over-fitting. The max-pooling layer is the most common pooling layer; it keeps only the most important part of its input (the highest value) and can be expressed as

    p_n^r(j) = max_{(j−1)l < t ≤ jl} q_n^r(t),

where q_n^r is the feature obtained from the convolution layer, p_n^r is the output of the pooling layer, and l represents the length of the pooling operation region.
Fully-connected layer: finally, the features extracted by the multi-layer stack of convolution and pooling layers are fed into the fully-connected layer for feature recognition, and softmax regression is generally used on the top fully-connected layer. The output of the softmax function is defined as

    softmax(z_j) = exp(z_j) / Σ_{i=1}^{k} exp(z_i),

where k represents the number of output-layer network nodes.
The convolution layers use a certain number of convolution kernels to extract different characteristics of the input data in the time domain. The pooling layers effectively reduce the size of the parameter matrix, so the number of parameters in the final connection layer is reduced; adding pooling layers therefore speeds up computation and prevents model over-fitting. Finally, the fully-connected layer maps the feature parameters of the high-dimensional hidden layer back to the original input data, so that the feature extraction capability of the model is trained.
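As a concrete illustration of this primary self-encoding structure, the following Keras sketch stacks two convolution layers, two pooling layers, a Flatten layer and fully-connected layers that reconstruct the input window. The filter counts, kernel sizes, strides and latent dimension are assumptions made for illustration (the concrete structural parameters of Table 1 in the worked example are not reproduced here), and a sigmoid reconstruction output is used instead of a softmax output because the windows are min-max normalized to [0, 1].

    from tensorflow.keras import layers, models

    def build_primary_autoencoder(w=6000, latent_dim=64):
        # Encoder: two convolution layers and two pooling layers, then Flatten + Dense code
        inp = layers.Input(shape=(w, 1))
        h = layers.Conv1D(16, 64, strides=8, activation="relu", padding="same")(inp)
        h = layers.MaxPooling1D(2)(h)
        h = layers.Conv1D(32, 32, strides=4, activation="relu", padding="same")(h)
        h = layers.MaxPooling1D(2)(h)
        h = layers.Flatten()(h)
        code = layers.Dense(latent_dim, activation="relu", name="code")(h)

        # Decoder: fully-connected layer that reconstructs the original window
        out = layers.Dense(w, activation="sigmoid")(code)
        out = layers.Reshape((w, 1))(out)

        autoencoder = models.Model(inp, out, name="primary_autoencoder")
        encoder = models.Model(inp, code, name="cnn_feature_extractor")
        autoencoder.compile(optimizer="adam", loss="mse")
        return autoencoder, encoder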
Secondly, selecting proper iteration times and a loss function, inputting the constructed three-dimensional training data set into a feature extraction model, and repeatedly executing forward propagation and backward propagation iterative computation processes; in the process, model parameters of the convolution layer, the pooling layer and the full-connection layer are continuously adjusted to finish the pre-training of the model.
Thirdly, the two convolution layers, two pooling layers and the fully-connected layer of the pre-trained model are taken out, the weight parameters of the two convolution layers and the two pooling layers are retained, and these are assembled into the trained primary self-coding model of the deep convolutional neural network.
Finally, convolution features are extracted from the training data set {S_1,nor, S_2,nor, ..., S_sn,nor} using the pre-trained CNN-based primary self-coding model, yielding the convolution feature set {F_1,CNN, F_2,CNN, ..., F_sn,CNN}.
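Continuing the sketches above (reusing train_x from the preprocessing sketch and build_primary_autoencoder from the model sketch), the pre-training and convolution feature extraction might look as follows; the epoch count and batch size are illustrative assumptions.

    import numpy as np

    # Reshape the normalized training windows into the three-dimensional format (sn, w, 1)
    train_3d = train_x[..., np.newaxis].astype("float32")

    autoencoder, encoder = build_primary_autoencoder(w=train_3d.shape[1])

    # Pre-training: repeated forward/back-propagation that reconstructs the input
    autoencoder.fit(train_3d, train_3d, epochs=50, batch_size=16, verbose=0)

    # Keep the trained layers and extract the convolution feature set {F_1,CNN, ..., F_sn,CNN}
    cnn_features = encoder.predict(train_3d, verbose=0)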
Fourthly, carrying out artificial time domain feature extraction based on expert knowledge on the training data set
As shown in the right-hand diagram of fig. 4, time-domain features based on expert knowledge are extracted for the cut training data set S_train = {S_1,nor, S_2,nor, ..., S_sn,nor}. Specifically, this comprises sliding-window cutting of the normalized data, extraction of the different time-domain features for each cut sub-sample, and normalization of the time-domain features.
Sliding-window cutting is performed on the normalized sample data with window length w′ and step size 1: from a sample S_i = {x_1, x_2, ..., x_w}, w − w′ + 1 sub-samples of length w′ can be cut, namely {S′_1, S′_2, ..., S′_(w−w′+1)}.
For each sub-sample S′_i, eight time-domain features are extracted: the maximum value, standard deviation, variance, form factor, root mean square, pulse index, margin factor and peak factor. For window data S′_i the extracted time-domain feature vector is F_i = {f_1, f_2, ..., f_8}; thus for the sample S_i the extracted artificial features are {F_1, F_2, ..., F_(w−w′+1)}. The artificial features are then normalized using the max-min normalization method (see step 2 of the comprehensive data preprocessing).
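A NumPy sketch of this artificial feature extraction is given below. The formulas used for the form factor, pulse index, margin factor and peak factor follow common signal-processing definitions and are assumptions insofar as the patent does not spell them out.

    import numpy as np

    def time_domain_features(seg):
        # Eight time-domain features of one sub-window
        abs_seg = np.abs(seg)
        rms = np.sqrt(np.mean(seg ** 2))
        mean_abs = np.mean(abs_seg) + 1e-12
        sqrt_mean = np.mean(np.sqrt(abs_seg)) ** 2 + 1e-12
        peak = np.max(abs_seg)
        return np.array([
            np.max(seg),           # maximum value
            np.std(seg),           # standard deviation
            np.var(seg),           # variance
            rms / mean_abs,        # form factor
            rms,                   # root mean square
            peak / mean_abs,       # pulse index
            peak / sqrt_mean,      # margin factor
            peak / (rms + 1e-12),  # peak factor
        ])

    def artificial_features(sample, w_sub, step=1):
        # Slide a window of length w' over the sample and stack the eight features per sub-window
        segs = [sample[i:i + w_sub] for i in range(0, len(sample) - w_sub + 1, step)]
        return np.stack([time_domain_features(s) for s in segs])  # (w - w' + 1, 8) when step = 1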
Fifthly, carrying out feature splicing on the artificial features extracted based on expert knowledge and the high-dimensional hidden layer features extracted based on the CNN feature extraction model
With continued reference to fig. 1 and fig. 4, after the CNN feature extraction module extracts the high-dimensional hidden-layer features and the artificial feature extraction module extracts the artificial time-domain features, the two sets of features are spliced. The feature matrix extracted by the CNN feature extraction model is M_CNN; its shape is determined by the number of convolution kernels n_f, the convolution kernel step size s_f and the convolution kernel size f. The artificial time-domain feature matrix extracted by the artificial feature extraction module is M_manual, with shape (w − w′ + 1, 8). The two feature matrices M_CNN and M_manual are each flattened (Flatten) and spliced along the column direction to obtain the fused feature, whose dimension is the sum of the flattened sizes of M_CNN and M_manual.
for the training data set Strain={S1,nor,S2,nor,...Sn,norEvery sample S ini,norAnd performing CNN feature extraction and artificial feature extraction and performing feature fusion. Let the dimension of the fused feature be nmergeThe training data set may then be reorganized as (n, n)merge) And the two-dimensional fusion feature matrix is used as the input of a subsequent SAE coding model.
Sixthly, depth feature fusion based on stacking self-encoder (SAE) is carried out
Referring to fig. 5, fig. 5 is a schematic structural diagram of the secondary SAE-based encoder shown in fig. 1. In the secondary self-encoder, depth feature fusion based on the stacked self-encoder is performed, and specifically, the depth feature fusion is performed by constructing a secondary self-encoder and a decoder, training the secondary self-encoder and the decoder, and using the stacked secondary self-encoder.
Firstly, a stacked secondary self-encoder and decoder model is constructed; the model structure is shown in fig. 5. The number of coding layers is the same as the number of decoding layers, which gives the model better secondary coding capability for the depth features.
Secondly, the secondary self-coding model is pre-trained with the two-dimensional fusion feature matrix obtained in step (five): the matrix is used as both the input and the output of the stacked secondary self-coding model, a suitable loss function and number of iterations are selected, and the forward- and back-propagation iterative computation is carried out so that the model continuously reconstructs its own input. Finally, the coding layers are extracted from the pre-trained stacked secondary self-coding model as the usable secondary self-coding model.
Finally, secondary self-coding is carried out on the depth fusion features using the pre-trained secondary self-encoder model, so as to obtain the secondary coding feature set {F′_1, F′_2, ..., F′_sn}.
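A Keras sketch of the stacked secondary self-encoder follows: a symmetric encoder/decoder pre-trained to reconstruct the fused feature matrix, after which only the encoding layers are kept. The layer widths, loss function and iteration count are assumptions chosen for illustration.

    from tensorflow.keras import layers, models

    def build_secondary_sae(n_merge, dims=(128, 64, 32)):
        # Stacked autoencoder with equal numbers of encoding and decoding layers
        inp = layers.Input(shape=(n_merge,))
        h = inp
        for d in dims:                   # encoding layers
            h = layers.Dense(d, activation="relu")(h)
        code = h
        for d in reversed(dims[:-1]):    # decoding layers mirroring the encoder
            h = layers.Dense(d, activation="relu")(h)
        out = layers.Dense(n_merge)(h)   # final decoding layer reconstructs the fused feature
        sae = models.Model(inp, out, name="stacked_secondary_sae")
        encoder = models.Model(inp, code, name="secondary_encoder")
        sae.compile(optimizer="adam", loss="mse")
        return sae, encoder

    sae, secondary_encoder = build_secondary_sae(n_merge)
    sae.fit(fused, fused, epochs=100, batch_size=16, verbose=0)    # reconstruct its own input
    secondary_codes = secondary_encoder.predict(fused, verbose=0)  # {F'_1, ..., F'_sn}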
Seventhly, carrying out time sequence extrapolation prediction model training
FIG. 1A is a block diagram illustrating the steps of time-series extrapolation prediction model training, which specifically comprises constructing the time-series extrapolation prediction model, training the extrapolation predictor model, and performing prediction with the trained extrapolation predictor.
Step 7.1: and constructing a time sequence extrapolation predictor by using a convolutional neural network primary self-encoder obtained in the process of extracting the features based on the convolutional neural network and a stacked secondary self-encoder model obtained in the process of secondary self-encoding based on SAE, wherein the extrapolation predictor fuses CNN features and artificial features and secondarily encodes depth features, and then establishes a mapping relation between the secondary encoding features and label data so as to complete extrapolation prediction.
Step 7.2: and comprehensively training a CNN convolution feature extractor and a time sequence extrapolation predictor. The time sequence extrapolation prediction model takes original input data as input, firstly carries out artificial feature extraction, carries out CNN convolution feature extraction on the original data by utilizing a pre-trained CNN feature extraction model, then carries out feature fusion on the CNN convolution feature and artificial time domain feature, and labels S of training datatrainyAnd outputting the time-series extrapolation prediction model to finish the training of the extrapolation predictor model.
Step 7.3: the existing data is predicted with the trained time-series extrapolation prediction model. For input data of length w, a data segment of length 2w − W is cut from the existing data and spliced with the predicted data of length W − w; the spliced segment is used as the input of a new round of prediction, and this is iterated until the manually preset prediction length L_p is reached, at which point the prediction ends.
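The rolling prediction of Step 7.3 can be sketched as below, again reusing the earlier names: each round pushes the last w points through the feature pipeline and the predictor head, splices the retained 2w − W existing points with the W − w newly predicted points, and repeats until the preset prediction length L_p is reached. Per-window normalization and denormalization are omitted for brevity, and the wiring inside predict_one_window is an assumption about how the pieces fit together.

    import numpy as np

    def predict_one_window(window_w):
        # window_w: the last w observed/predicted points; returns the next W - w points
        x3d = window_w[np.newaxis, :, np.newaxis].astype("float32")
        cnn_f = encoder.predict(x3d, verbose=0)
        man_f = artificial_features(window_w, w_sub=3000, step=3000).reshape(1, -1)
        code = secondary_encoder.predict(np.concatenate([cnn_f, man_f], axis=1), verbose=0)
        return predictor_head.predict(code, verbose=0)[0]

    def extrapolate(history, w=6000, W=9000, L_p=30000):
        # Iterate until the manually preset prediction length L_p is reached
        series = list(history[-w:])
        predicted = []
        while len(predicted) < L_p:
            new_points = predict_one_window(np.asarray(series[-w:]))  # W - w new points
            predicted.extend(new_points.tolist())
            series = series[-(2 * w - W):] + new_points.tolist()      # splice into the next w-point input
        return np.array(predicted[:L_p])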
Step 7.4: the verification set data obtained from the comprehensive data preprocessing unit is fed into the time-series extrapolation prediction model, and the prediction performance of the model is evaluated with the corresponding prediction indices.
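Step 7.4 does not name the prediction indices; RMSE and MAE are used in the short sketch below purely as illustrative choices.

    import numpy as np

    def rmse(y_true, y_pred):
        return float(np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)))

    def mae(y_true, y_pred):
        return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))

    # Example: compare the extrapolated sequence against the held-out verification data
    # scores = {"RMSE": rmse(verification_series, predicted), "MAE": mae(verification_series, predicted)}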
[ feature extraction example based on feature fusion ]
One important contribution of the invention is the innovative design of a feature extraction method based on the fusion of artificial features and deep-neural-network convolution features, which directly affects the degradation trend prediction and health evaluation of the hydraulic actuation system by the extrapolation prediction model. For illustration, an "actuator cylinder internal leakage" fault of the steering engine system is used, and the feedback angle data collected at the measuring point of the fault injection point is selected.
The structural model of the electro-hydraulic steering engine is shown in fig. 2, the fault to be predicted is set as an actuator cylinder internal leakage fault, and the data is feedback angle time-domain data. After the feedback angle data is obtained, it is preprocessed. In this example the window length is 9000 and the step length is 1; for the data of each window, the first 6000 points are used as the input of the convolutional-neural-network-based sequence extrapolation prediction model, and the last 3000 points are used as the label data of the window, namely the prediction data. Of all the normalized feedback angle data, the first 70% is selected as the training data set and the remaining 30% as the verification data set for verifying the prediction performance of the model.
1. After the training data are obtained, feature extraction based on the convolutional neural network is carried out
Taking the parameter characteristics of the steering engine feedback angle data into consideration, sample feature extraction is performed on the normalized sample data with a convolutional neural network. With continued reference to fig. 1, 3 and 4, the convolution layers use a certain number of convolution kernels to extract different characteristics of the input data in the time domain, and the pooling layers effectively reduce the size of the parameter matrix, so that the number of parameters in the final connection layer is reduced; adding pooling layers accelerates computation and prevents model over-fitting. Two convolution layers map the original data to a high-dimensional hidden space to learn the nonlinear characteristics of the data, and the flattening layer and fully-connected layers then remap the high-dimensional sample features to the original input data to learn the key characteristics of the original samples. The module that maps the original data samples to the low-dimensional feature space is selected as the encoder of the model, and the module that reconstructs the samples from the screened features is selected as the decoder of the model. The model structure parameters selected by the present invention are shown in Table 1.
TABLE 1 convolutional neural network-based Primary autoencoder model parameters
A suitable number of iterations and loss function are selected, and the constructed three-dimensional training data set is input into the feature extraction model to repeatedly execute the forward- and back-propagation iterative computation, continuously adjusting the model parameters of the convolution, pooling and fully-connected layers to complete the pre-training of the model. The two convolution layers, the two pooling layers and the one fully-connected layer of the pre-trained model are then taken out, their weight parameters are retained, and they are assembled into the CNN feature extraction model.
2. Expert knowledge based time domain feature extraction for a segmented training data set
Specifically, the training data of each window is cut with a sub-window length of 3000 and a step length of 3000, artificial features are extracted from the data of each sub-window, and the feature extraction results are shown in fig. 7A and 7B.
3. Performing feature splicing on artificial features extracted based on expert knowledge and high-dimensional hidden layer features extracted based on CNN feature extraction model
4. Performing depth feature fusion based on stacked self-encoders
The two-dimensional fusion feature matrix is used to pre-train the secondary self-encoder model: the matrix serves as both the input and the output of the stacked secondary self-encoder, a suitable loss function and number of iterations are selected, and the forward- and back-propagation iterative computation is completed so that the model continuously reconstructs its own input. Finally, the coding layers of the pre-trained stacked secondary self-encoder are extracted as the usable secondary self-encoder model.
5. Time series extrapolation prediction model training
A time-series extrapolation predictor is constructed from the convolutional neural network primary self-encoder and the pre-trained stacked secondary self-encoder model; it fuses the CNN features with the artificial features, secondarily encodes the depth features, and establishes a mapping relation between the secondary coding features and the label data to complete extrapolation prediction. The time-series extrapolation prediction model takes the original input data as input: artificial features are extracted first, CNN features are extracted from the original data with the pre-trained CNN extraction model, the CNN features and the artificial features are then fused, and the training data label is used as the model output to complete model training. The existing data is then predicted with the trained prediction model; the extrapolated data is spliced with the original data as the input of a new prediction, and this is iterated until the manually preset prediction length is reached, at which point the prediction ends. The prediction results are compared with the real labels in fig. 8, 9A and 9B.
Although the present invention has been described with reference to the embodiments shown in the drawings, equivalent or alternative means may be used without departing from the scope of the claims. The components described and illustrated herein are merely examples of systems/devices and methods that may be used to implement embodiments of the present disclosure and may be substituted for other devices and components without departing from the scope of the claims.

Claims (9)

1. A method for predicting parameter degradation time series extrapolation of an electro-hydraulic steering engine based on a secondary self-coding fusion mechanism comprises the following steps:
acquiring fault prediction data of the electro-hydraulic steering engine;
comprehensively preprocessing the fault data to obtain a training data set and a test data set;
constructing a time-series extrapolation predictor, which is characterized in that:
the time sequence extrapolation predictor comprises a convolutional neural network primary self-encoder, an expert knowledge-based artificial time domain feature extractor and an SAE-based secondary self-encoder;
the time sequence extrapolation predictor fuses the training data set to obtain fusion characteristics, the secondary self-encoder performs secondary encoding on the fusion characteristics, and then a mapping relation is established between the secondary encoding characteristics and label data;
comprehensively training the convolutional neural network primary self-encoder and the time sequence extrapolation predictor to obtain a trained time sequence extrapolation predictor; and
and predicting the existing data by using the trained time sequence extrapolation prediction model.
2. The method for predicting the time-series extrapolation of the parameter degradation of the electro-hydraulic steering engine according to claim 1, wherein the time-series extrapolation prediction model takes an original training data set as input, first performs artificial feature extraction with the artificial time-domain feature extractor, performs convolution feature extraction on the original training data set using the pre-trained convolutional neural network feature extraction model, then fuses the convolution features with the artificial time-domain features, and takes the label S_trainy of the training data as the output of the time-series extrapolation prediction model to complete the training of the extrapolation predictor model.
3. The method for predicting the time-series extrapolation of the parameter degradation of the electro-hydraulic steering engine according to claim 1, wherein, when the existing data is predicted, for input data of length w the predicted data length is W − w; a data segment of length 2w − W in the existing data is intercepted and spliced with the predicted data as the input of a new round of prediction, and this is iterated until the manually preset prediction length L_p is reached, at which point the prediction ends.
4. The method for predicting the time-series extrapolation of the parameter degradation of the electro-hydraulic steering engine according to claim 1, wherein verification set data obtained through comprehensive pretreatment are sent to the time-series extrapolation prediction model, and the prediction performance evaluation of the model is completed by combining corresponding prediction indexes.
5. The electro-hydraulic steering engine parameter degradation time series extrapolation prediction method according to claim 1, wherein the comprehensive preprocessing step comprises performing sliding-window cutting on the key-parameter time series data X = {x_1, x_2, ..., x_N} to generate a corresponding sample data set, wherein, when the window width is W and the step length is s, the number of samples generated by cutting is

    sn = ⌊(N − W) / s⌋ + 1,

the corresponding data set generated is {S_1, S_2, ..., S_sn}, which after normalization becomes {S_1,nor, S_2,nor, ..., S_sn,nor}, and for every sample S_i,nor the first w points are taken as training data and the remaining W − w points as the corresponding prediction data.
6. The electro-hydraulic steering engine parameter degradation time series extrapolation prediction method according to claim 1, wherein the construction of the convolutional neural network primary self-encoder comprises converting the training data set S_train = {S_1,nor, S_2,nor, ..., S_sn,nor} into a three-dimensional data format (sn, w, 1) and inputting the constructed three-dimensional training data set into the primary self-encoder to repeatedly execute the forward-propagation and back-propagation iterative computation, so that the model parameters of the convolution layers, pooling layers and fully-connected layer of the constructed primary self-encoding model are continuously adjusted to complete the pre-training of the model, wherein {S_1,nor, S_2,nor, ..., S_sn,nor} is the normalized sample data set, sn is the number of samples, w is the data length of each sample, and 1 is the number of channels.
7. The electro-hydraulic steering engine parameter degradation time-series extrapolation prediction method according to claim 6, wherein the primary self-coding model comprises a plurality of convolution layers, a plurality of pooling layers and a Flatten fully-connected layer, the fully-connected layer performs feature identification using the features extracted by the multi-layer stack of convolution and pooling layers, softmax regression is used on the fully-connected layer, and the output of the softmax function is

    softmax(z_j) = exp(z_j) / Σ_{i=1}^{k} exp(z_i),

where k represents the number of output-layer network nodes.
8. The electro-hydraulic steering engine parameter degradation time series extrapolation prediction method according to claim 1, characterized in that, in the SAE-based secondary self-encoder, the secondary encoder and decoder are pre-trained with a two-dimensional fusion feature matrix; the two-dimensional fusion feature matrix is taken as both the input and the output of the stacked secondary self-encoder, a suitable loss function and number of iterations are selected, the forward- and back-propagation iterative computation is completed so that the model continuously reconstructs its own input, and finally the coding layer is extracted from the pre-trained stacked secondary self-coding model as the usable secondary self-coding model.
9. The electro-hydraulic steering engine parameter degradation time series extrapolation prediction method according to claim 8, wherein the depth fusion features are secondarily self-encoded based on the secondary self-encoder model obtained through pre-training, so as to obtain the secondary coding feature set {F′_1, F′_2, ..., F′_sn}.
CN202110824289.9A 2021-07-21 2021-07-21 Electric hydraulic steering engine parameter degradation time sequence extrapolation prediction method based on secondary self-coding fusion mechanism Active CN113536682B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110824289.9A CN113536682B (en) 2021-07-21 2021-07-21 Electric hydraulic steering engine parameter degradation time sequence extrapolation prediction method based on secondary self-coding fusion mechanism

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110824289.9A CN113536682B (en) 2021-07-21 2021-07-21 Electric hydraulic steering engine parameter degradation time sequence extrapolation prediction method based on secondary self-coding fusion mechanism

Publications (2)

Publication Number Publication Date
CN113536682A true CN113536682A (en) 2021-10-22
CN113536682B CN113536682B (en) 2024-01-23

Family

ID=78100684

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110824289.9A Active CN113536682B (en) 2021-07-21 2021-07-21 Electric hydraulic steering engine parameter degradation time sequence extrapolation prediction method based on secondary self-coding fusion mechanism

Country Status (1)

Country Link
CN (1) CN113536682B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114063601A (en) * 2021-11-12 2022-02-18 江苏核电有限公司 Equipment state diagnosis system and method based on artificial intelligence
CN114399066A (en) * 2022-01-15 2022-04-26 中国矿业大学(北京) Mechanical equipment predictability maintenance system and maintenance method based on weak supervision learning
CN116776228A (en) * 2023-08-17 2023-09-19 合肥工业大学 Power grid time sequence data decoupling self-supervision pre-training method and system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109634139A (en) * 2018-12-10 2019-04-16 中国航天空气动力技术研究院 Hypersonic aircraft navigation and control system semi-matter simulating system and method
CN112257760A (en) * 2020-09-30 2021-01-22 北京航空航天大学 Method for detecting abnormal network communication behavior of host based on time sequence die body
US20210117603A1 (en) * 2019-10-18 2021-04-22 Taiwan Semiconductor Manufacturing Company Ltd. Layout context-based cell timing characterization
CN113035280A (en) * 2021-03-02 2021-06-25 四川大学 RBP binding site prediction algorithm based on deep learning

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109634139A (en) * 2018-12-10 2019-04-16 中国航天空气动力技术研究院 Hypersonic aircraft navigation and control system semi-matter simulating system and method
US20210117603A1 (en) * 2019-10-18 2021-04-22 Taiwan Semiconductor Manufacturing Company Ltd. Layout context-based cell timing characterization
CN112257760A (en) * 2020-09-30 2021-01-22 北京航空航天大学 Method for detecting abnormal network communication behavior of host based on time sequence die body
CN113035280A (en) * 2021-03-02 2021-06-25 四川大学 RBP binding site prediction algorithm based on deep learning

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114063601A (en) * 2021-11-12 2022-02-18 江苏核电有限公司 Equipment state diagnosis system and method based on artificial intelligence
CN114399066A (en) * 2022-01-15 2022-04-26 中国矿业大学(北京) Mechanical equipment predictability maintenance system and maintenance method based on weak supervision learning
CN116776228A (en) * 2023-08-17 2023-09-19 合肥工业大学 Power grid time sequence data decoupling self-supervision pre-training method and system
CN116776228B (en) * 2023-08-17 2023-10-20 合肥工业大学 Power grid time sequence data decoupling self-supervision pre-training method and system

Also Published As

Publication number Publication date
CN113536682B (en) 2024-01-23

Similar Documents

Publication Publication Date Title
CN113536682B (en) Electric hydraulic steering engine parameter degradation time sequence extrapolation prediction method based on secondary self-coding fusion mechanism
CN113536681B (en) Electric steering engine health assessment method based on time sequence extrapolation prediction
CN112149316B (en) Aero-engine residual life prediction method based on improved CNN model
CN109376413B (en) Online neural network fault diagnosis method based on data driving
CN113536683B (en) Feature extraction method based on fusion of artificial features and convolution features of deep neural network
CN108363896B (en) Fault diagnosis method for hydraulic cylinder
CN111160620B (en) Short-term wind power prediction method based on end-to-end memory network
CN107609634A (en) A kind of convolutional neural networks training method based on the very fast study of enhancing
CN108805195A (en) A kind of motor group method for diagnosing faults based on two-value deep-neural-network
CN113988449A (en) Wind power prediction method based on Transformer model
CN115146700B (en) Runoff prediction method based on transform sequence-to-sequence model
CN115840893A (en) Multivariable time series prediction method and device
CN116680105A (en) Time sequence abnormality detection method based on neighborhood information fusion attention mechanism
CN113313198B (en) Cutter wear prediction method based on multi-scale convolution neural network
CN113804997B (en) Voltage sag source positioning method based on bidirectional WaveNet deep learning
CN113431925B (en) Fault prediction method of electro-hydraulic proportional valve
CN113836783A (en) Digital regression model modeling method for main beam temperature-induced deflection monitoring reference value of cable-stayed bridge
CN115905848A (en) Chemical process fault diagnosis method and system based on multi-model fusion
CN112232570A (en) Forward active total electric quantity prediction method and device and readable storage medium
CN106779135A (en) A kind of hybrid power ship bearing power Forecasting Methodology
CN113435235B (en) Equipment state representation extraction method based on recursive fusion encoder
CN114943368A (en) Sea surface wind speed prediction method based on Transformer
CN113221450A (en) Dead reckoning method and system for sparse and uneven time sequence data
CN116826727B (en) Ultra-short-term wind power prediction method and prediction system based on time sequence representation and multistage attention
Zhao Precision local anomaly positioning technology for large complex electromechanical systems

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant