CN118153459A - Solid rocket engine ignition process model correction method, device and equipment - Google Patents
- Publication number
- CN118153459A (Application CN202410569955.2A)
- Authority
- CN
- China
- Prior art keywords
- model
- rocket engine
- solid rocket
- sequence
- training
- Prior art date
- Legal status (an assumption, not a legal conclusion): Granted
Classifications
- G06F30/27 — Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
- G06F18/253 — Fusion techniques of extracted features
- G06N3/0442 — Recurrent networks characterised by memory or gating, e.g. long short-term memory [LSTM] or gated recurrent units [GRU]
- G06N3/045 — Combinations of networks
- G06N3/0464 — Convolutional networks [CNN, ConvNet]
- G06N3/084 — Backpropagation, e.g. using gradient descent
- Y02T10/40 — Engine management systems
Abstract
The invention relates to a method, a device and equipment for correcting a solid rocket engine ignition process model. The acquired pressure sequence data are normalized to obtain a model dataset, which is divided into a training set and a testing set. A pre-training correction model of the solid rocket engine ignition process is then constructed and trained with the training set and a pre-constructed loss function to obtain a trained solid rocket engine ignition process correction model. Finally, the test set is input into the solid rocket engine ignition process correction model to obtain a prediction result for the solid rocket engine ignition process. The correction model provided by the invention has good applicability and generalization capability, effectively improves the precision of solid rocket engine ignition process correction, improves the design efficiency of the solid rocket engine, accelerates development and reduces cost.
Description
Technical Field
The invention relates to the technical field of solid rocket engines, in particular to a method, a device and equipment for correcting a solid rocket engine ignition process model.
Background
The solid rocket engine ignition transient is a critical phase of solid rocket engine operation: the time interval from the moment the igniter starts to ignite the fuel until the fuel combustion flow gradually enters quasi-steady-state flow. The success and quality of ignition during this phase are critical to proper engine operation, and the ignition transient directly affects the performance, efficiency, reliability and safety of the solid rocket engine. The key question of the ignition transient is whether the igniter can effectively ignite the fuel and stably propagate the combustion flow throughout the engine interior. Whether ignition succeeds directly determines whether the solid rocket engine can start smoothly and run normally; once ignition fails or combustion is insufficient, the engine cannot generate enough thrust or provide the required power, and may not function at all.
Thus, the factors affecting the performance, efficiency, reliability and safety of engine operation mainly include the completeness of fuel combustion during ignition, the combustion time, and the stability of the combustion process. If the ignition transient is unstable or combustion is incomplete, engine performance degrades, efficiency drops, and unsafe conditions may even arise. To ensure normal operation of the solid rocket engine, a series of measures must be taken during the ignition transient, such as optimizing the igniter design, improving the fuel combustion quality, and optimizing the combustion chamber structure, so as to ensure a good ignition effect and a stable combustion process.
Disclosure of Invention
Based on the above, it is necessary to provide a method, a device and equipment for correcting a solid rocket engine ignition process model, which can adaptively learn and adjust model parameters by constructing a solid rocket engine ignition process simulation model, and perform model correction on the ignition process transient simulation of the solid rocket engine, so as to improve the design efficiency of the solid rocket engine, accelerate development and reduce cost.
A method for correcting a solid rocket engine ignition process model, the method comprising:
Acquiring pressure intensity sequence data, carrying out normalization processing on the pressure intensity sequence data to obtain a model data set, and dividing the model data set into a training set and a testing set;
Constructing a pre-training correction model of the ignition process of the solid rocket engine;
Training the pre-training correction model through the training set and a pre-constructed loss function to obtain a trained solid rocket engine ignition process correction model. The pre-training correction model performs deep convolution feature extraction on the training set to obtain a deep convolution feature sequence; after activation, the deep convolution feature sequence undergoes feature extraction through positive-sequence and reverse-sequence inputs, which are spliced and fused to obtain a new feature sequence; an attention vector is generated from the new feature sequence through an attention mechanism and regularized; the regularized attention vectors are integrated to generate the final prediction output; meanwhile, the pre-constructed loss function guides and constrains the prediction process during training;
And inputting the test set into the solid rocket engine ignition process correction model to obtain a solid rocket engine ignition process prediction result.
In one embodiment, the pre-training correction model comprises a depth convolution module, a bi-directional LSTM module, an attention module, a regularization module and a full connection layer;
Extracting the depth convolution feature sequence of the input training set through the depth convolution module to obtain a depth convolution feature sequence, and activating the depth convolution feature sequence;
Respectively carrying out positive sequence input and negative sequence input on the activated depth convolution feature sequence through the bidirectional LSTM module, and carrying out splicing and fusion after feature extraction to obtain a new feature sequence;
calculating the new feature sequence through the attention module to obtain an attention vector;
Regularizing the attention vector through the regularization module;
and integrating the regularized attention vectors through the full connection layer to generate a final prediction output.
In one embodiment, the calculation performed by the depth convolution module is expressed as:

$F_j^{(l)} = \sum_i K_{i,j}^{(l)} * X_i^{(l-1)} + b_j^{(l)}$

In the formula, $F_j^{(l)}$ represents the deep convolution feature sequence; $K_{i,j}^{(l)}$ represents the weight of the convolution kernel; $X_i^{(l-1)}$ represents the input sequence data of the previous layer; $b_j^{(l)}$ represents the bias; wherein $l$ is the layer index, $1 \le l \le 5$, and $X^{(0)}$ is the input sequence.
In one embodiment, the depth convolution module comprises a 5-layer convolution structure.
In one embodiment, the calculation performed by the bidirectional LSTM module is expressed as:

$h_t = W_{\overrightarrow{h}}\,\overrightarrow{h_t} + W_{\overleftarrow{h}}\,\overleftarrow{h_t} + b_h$

In the formula, $h_t$ represents the hidden layer state; $\overrightarrow{h_t}$ represents the hidden state of the positive-sequence (forward) input; $\overleftarrow{h_t}$ represents the hidden state of the reverse-sequence input; $W_{\overrightarrow{h}}$ and $W_{\overleftarrow{h}}$ respectively represent the weight parameters of the forward-sequence and reverse-sequence hidden layers; $b_h$ represents the bias parameter of the hidden layer; both directions take the activated feature sequence as input.
In one embodiment, the calculation formula of the attention module is expressed as:

$\alpha_t = \mathrm{softmax}(W_a h_t + b_a)$

In the formula, $\alpha_t$ represents the attention vector; $h_t$ represents the hidden layer state; $W_a$ represents the weights of the input features; $b_a$ represents the bias of the input features.
In one embodiment, the pre-constructed loss function includes the mean absolute error, mean square error, root mean square error, coefficient of determination, and mean absolute percentage error.
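The five evaluation quantities named above are standard regression metrics. As a minimal numpy sketch (the function name is an illustrative assumption, not from the patent), they can be computed as:

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """Compute the five metrics listed in the embodiment above."""
    err = y_true - y_pred
    mae = np.mean(np.abs(err))                       # mean absolute error
    mse = np.mean(err ** 2)                          # mean square error
    rmse = np.sqrt(mse)                              # root mean square error
    ss_res = np.sum(err ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot                       # coefficient of determination
    mape = np.mean(np.abs(err / y_true)) * 100.0     # mean absolute percentage error, %
    return {"MAE": mae, "MSE": mse, "RMSE": rmse, "R2": r2, "MAPE": mape}
```

In practice one of these (typically MSE) would serve as the training loss while the others act as held-out evaluation metrics.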
A solid rocket engine ignition process model correction device, the device comprising:
The data processing module is used for acquiring pressure intensity sequence data, carrying out normalization processing on the pressure intensity sequence data to obtain a model data set, and dividing the model data set into a training set and a testing set;
The model construction module is used for constructing a pre-training correction model of the solid rocket engine ignition process;
The model pre-training module is used for training the pre-training correction model through the training set and a pre-constructed loss function to obtain a trained solid rocket engine ignition process correction model. The pre-training correction model performs deep convolution feature extraction on the training set to obtain a deep convolution feature sequence; after activation, the deep convolution feature sequence undergoes feature extraction through positive-sequence and reverse-sequence inputs, which are spliced and fused to obtain a new feature sequence; an attention vector is generated from the new feature sequence through an attention mechanism and regularized; the regularized attention vectors are integrated to generate the final prediction output; meanwhile, the pre-constructed loss function guides and constrains the prediction process during training;
And the prediction result generation module is used for inputting the test set into the solid rocket engine ignition process correction model to obtain a solid rocket engine ignition process prediction result.
A computer device comprising a memory storing a computer program and a processor implementing the steps of the solid rocket engine ignition process model correction method of any one of the preceding claims when the computer program is executed.
According to the solid rocket engine ignition process model correction method, device and equipment, the acquired pressure sequence data are normalized to obtain a model dataset, which is divided into a training set and a testing set. A pre-training correction model of the solid rocket engine ignition process is then constructed and trained through the training set and a pre-constructed loss function to obtain a trained solid rocket engine ignition process correction model: the pre-training correction model performs deep convolution feature extraction on the training set to obtain a deep convolution feature sequence; after activation, the deep convolution feature sequence undergoes feature extraction through positive-sequence and reverse-sequence inputs, which are spliced and fused to obtain a new feature sequence; an attention vector is generated from the new feature sequence through an attention mechanism and regularized; the regularized attention vectors are integrated to generate the final prediction output; meanwhile, the pre-constructed loss function guides and constrains the prediction process during training. Finally, the test set is input into the solid rocket engine ignition process correction model to obtain a solid rocket engine ignition process prediction result.
On one hand, the constructed solid rocket engine ignition process correction model learns patterns and rules from a large amount of actual pressure sequence data and does not need to rely on an accurate physical model, giving it good applicability and generalization capability. On the other hand, the model combines deep convolution feature extraction with positive-sequence and reverse-sequence inputs, which effectively improves the precision of the solid rocket engine ignition process correction model; uncertain factors in the training process are handled through regularization, further improving model precision, improving the design efficiency of the solid rocket engine, accelerating development and reducing cost. The output prediction result can more accurately predict the pressure change of the solid rocket engine at different time points, thereby providing support for solid rocket engine design.
Drawings
FIG. 1 is a flow diagram of a method for modifying a solid rocket engine ignition process model in one embodiment;
FIG. 2 is a schematic block diagram of a solid rocket engine ignition process correction model in one embodiment;
FIG. 3 is a schematic diagram of the loss function in training of a solid rocket engine ignition process correction model in one embodiment;
FIG. 4 is a graphical representation of predicted and original values of a solid rocket engine ignition process correction model in one embodiment;
FIG. 5 is a schematic structural diagram of a solid rocket engine ignition process model correction device in one embodiment;
FIG. 6 is a schematic diagram of the internal structure of a computer device in one embodiment.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
As is well known, the ignition process of solid rocket engines is often affected by many uncertainties and environmental changes, such as fuel variations and temperature changes. In the prior art, fluid transient simulation is generally adopted to model the ignition process of the solid rocket engine. However, fluid transient simulation of the ignition process suffers from complex parameter modification and long simulation times. Suitable simplifications of the engine ignition transient have therefore been proposed, such as a two-dimensional unsteady computational model; however, such a model is only suitable for describing axisymmetric solid rocket engine ignition transients.
In the course of realizing this scheme, the inventors found that deep learning can learn a more accurate ignition model through training on a large amount of data and correct the existing model. Compared with traditional physical models, this approach better captures nonlinear relations and complex dynamic behaviors. Deep learning methods have already been applied in the field of solid rocket engines, for example to reconstruct the internal structure of the engine from dual viewing angles and to perform real-time defect detection on the propellant grain. Because a deep learning method can adapt to different working conditions and environmental changes by adaptively learning and adjusting model parameters, the inventors propose a deep-learning-based model correction method for transient simulation of the solid rocket engine ignition process. A solid rocket engine ignition process correction model is constructed that fuses an attention mechanism with a long short-term memory network to predict the time-pressure curve of the solid rocket engine. The model is trained on sequence data obtained from transient simulation, the errors between predicted and simulated values are calculated, and finally the prediction curve, simulation curve and error values are output automatically and the trained deep learning model is saved. The output can more accurately predict the pressure change of the solid rocket engine at different time points, thereby providing support for solid rocket engine design. It should be noted that, for convenience of description and reference, the inventors named the constructed solid rocket engine ignition process correction model the ADCBiLSTM model.
Embodiments of the present invention will be described in detail below with reference to the accompanying drawings.
In one embodiment, as shown in fig. 1, a method for correcting a solid rocket engine ignition process model is provided, which comprises the following steps:
Step 202, obtaining pressure sequence data, carrying out normalization processing on the pressure sequence data to obtain a model data set, and dividing the model data set into a training set and a testing set.
Specifically, time-pressure sequence data for different mass flow rates are obtained. During training, the sequence data are read; the data size is 691×9, and the dataset columns are time and pressure 0.3 through pressure 1.0. The data are normalized with a MinMaxScaler function and then divided into a training set, comprising train_x (pressure 0.3-0.5) and train_y (pressure 0.6), and a test set, comprising test_x (pressure 0.7-0.9) and test_y (pressure 1.0). Normalization improves the convergence rate and accuracy of the model.
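The scaling and split described above can be sketched with plain numpy (the formula below is equivalent to sklearn's MinMaxScaler; the column indices and random stand-in data are illustrative assumptions, since the actual 691×9 pressure table is not reproduced in the patent):

```python
import numpy as np

def min_max_scale(data):
    """Scale each column to [0, 1], mirroring MinMaxScaler's default behavior."""
    lo, hi = data.min(axis=0), data.max(axis=0)
    scaled = (data - lo) / (hi - lo)
    return scaled, (lo, hi)   # keep (lo, hi) so predictions can be un-scaled later

# Toy stand-in for the 691x9 time/pressure table described above.
rng = np.random.default_rng(0)
data = rng.uniform(1.0, 10.0, size=(691, 9))
scaled, (lo, hi) = min_max_scale(data)

# Column split following the description (columns assumed to be
# time, pressure 0.3, ..., pressure 1.0 in order):
train_x, train_y = scaled[:, 1:4], scaled[:, 4]   # pressure 0.3-0.5 -> pressure 0.6
test_x, test_y = scaled[:, 5:8], scaled[:, 8]     # pressure 0.7-0.9 -> pressure 1.0
```

Retaining `(lo, hi)` matters: the model's normalized predictions must be inverted back to physical pressure units before comparison with simulation curves.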
Step 204, constructing a pre-training correction model of the solid rocket engine ignition process.
Specifically, the constructed pre-training correction model comprises a depth convolution module, a bidirectional LSTM module, an attention module, a regularization module and a full connection layer, wherein:
And extracting the depth convolution characteristic sequence of the input training set through a depth convolution module to obtain a depth convolution characteristic sequence, wherein the depth convolution characteristic sequence is activated through an activation layer.
And respectively carrying out positive sequence input and reverse sequence input on the activated deep convolution feature sequence through a bidirectional LSTM module, and carrying out splicing and fusion after feature extraction to obtain a new feature sequence.
And calculating the new feature sequence through an attention module to obtain an attention vector.
The attention vector is regularized by a regularization module.
And integrating the regularized attention vectors through the full connection layer to generate a final prediction output.
More specifically, the depth convolution module generally uses a 5-layer convolution structure, and more layers can be set as the situation requires; the convolution layers perform convolution calculations on the input information to obtain its feature representation. Convolution is thus a core component, with excellent feature-extraction capability for data with local features. A deep convolutional neural network layer with a 5-layer convolution structure is established to extract features. The $N$ sequence data are denoted $X$, the convolution kernel is denoted $K$, $l$ denotes the layer in which the convolution kernel is located, $j$ denotes the index of the convolution kernel in the current layer, and $i$ denotes the index of the convolution kernel in the previous layer. Forward propagation through the depth convolution module yields the deep convolution feature sequence $F$.
In one embodiment, the calculation of the depth convolution module may be expressed as:

$F_j^{(l)} = \sum_i K_{i,j}^{(l)} * X_i^{(l-1)} + b_j^{(l)}$

In the formula, $F_j^{(l)}$ represents the deep convolution feature sequence; $K_{i,j}^{(l)}$ represents the weight of the convolution kernel; $X_i^{(l-1)}$ represents the input sequence data of the previous layer; $b_j^{(l)}$ represents the bias; wherein $1 \le l \le 5$ and $X^{(0)}$ is the input sequence.
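A minimal numpy sketch of the 1-D convolutional feature extraction described above (one layer, "valid" padding; as in most deep-learning frameworks the kernel is applied without flipping, i.e. cross-correlation; names are illustrative):

```python
import numpy as np

def conv1d_valid(x, kernels, biases):
    """'Valid' 1-D convolution of a sequence with a bank of kernels.

    x       : (T,) input sequence
    kernels : (n_out, k) one row per convolution kernel (the weights)
    biases  : (n_out,) one bias per output channel
    returns : (T - k + 1, n_out) feature sequence
    """
    k = kernels.shape[1]
    # Slide a length-k window over x; each row is one receptive field.
    windows = np.stack([x[t:t + k] for t in range(len(x) - k + 1)])
    return windows @ kernels.T + biases
```

Stacking five such layers, each consuming the previous layer's output, gives the 5-layer structure the embodiment describes.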
In order to effectively acquire information in more dimensions, a nonlinear mapping needs to be introduced in the modeling process to enhance the expressive capability of the model in a more complex space; this nonlinear mapping is called an activation function.
In one embodiment, the activation layer is computed as:

$R = \sigma(F)$

where $\sigma$ denotes the activation function. Inputting the deep convolution feature sequence $F$ into the activation function yields the activated feature sequence $R$. It will be appreciated that the activation function helps mitigate the vanishing-gradient problem.
In one embodiment, the activated feature sequence $R$ is input into the bidirectional LSTM module to obtain the hidden layer state $h_t$ of each stage:

$h_t = W_{\overrightarrow{h}}\,\overrightarrow{h_t} + W_{\overleftarrow{h}}\,\overleftarrow{h_t} + b_h$

In the formula, $h_t$ represents the hidden layer state; $\overrightarrow{h_t}$ represents the hidden state of the positive-sequence (forward) pass; $\overleftarrow{h_t}$ represents the hidden state of the reverse-sequence pass; $W_{\overrightarrow{h}}$ and $W_{\overleftarrow{h}}$ respectively represent the weight parameters of the forward-sequence and reverse-sequence hidden layers; $b_h$ represents the bias parameter of the hidden layer.
It will be appreciated that a long short-term memory network (LSTM) contains three "gates" that control the discarding and retention of historical information and current-time information: a forget gate, an input gate and an output gate. Specifically, the input sequence data first pass through the forget gate, which, using the long-term memory input $c_{t-1}$ and the short-term memory input $h_{t-1}$ of the previous time step, computes the information to be forgotten, denoted $f_t$. The input gate computes the information to be retained, denoted $i_t$, together with the candidate memory at the current time, denoted $\tilde{c}_t$. Finally, the output gate computes its output value, denoted $o_t$. Combining the forget gate and the input gate yields the new long-term memory cell $c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t$; $c_t$ is passed through the tanh function, and together with the sigmoid output-gate value the hidden layer output is computed as $h_t = o_t \odot \tanh(c_t)$.
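The gate computations just described can be sketched as a single LSTM time step in numpy (a standard LSTM cell under the usual formulation; parameter names are illustrative assumptions):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, p):
    """One LSTM time step with forget, input and output gates."""
    z = np.concatenate([x_t, h_prev])        # current input + short-term memory
    f = sigmoid(p["Wf"] @ z + p["bf"])       # forget gate f_t
    i = sigmoid(p["Wi"] @ z + p["bi"])       # input gate i_t
    g = np.tanh(p["Wg"] @ z + p["bg"])       # candidate memory
    c = f * c_prev + i * g                   # new long-term memory c_t
    o = sigmoid(p["Wo"] @ z + p["bo"])       # output gate o_t
    h = o * np.tanh(c)                       # hidden layer output h_t
    return h, c
```

A bidirectional LSTM simply runs this step over the sequence once forward and once in reverse, then splices the two hidden-state sequences.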
The two-way long-short-term memory network structure (two-way LSTM) can be divided into a forward long-short-term memory network and a backward long-short-term memory network, the input sequences are respectively input into the two networks in a positive sequence and a reverse sequence for feature extraction, and finally, the new feature sequences are obtained by splicing.
In one embodiment, the calculation formula of the attention module is expressed as:

$\alpha_t = \mathrm{softmax}(W_a h_t + b_a)$

In the formula, $\alpha_t$ represents the attention vector; $h_t$ represents the hidden layer state; $W_a$ represents the weights of the input features; $b_a$ represents the bias of the input features.
It will be appreciated that the attention mechanism allows the model to focus on different parts of the input data when predicting or generating output, it can directly obtain global and local connections, fewer parameters are set, and the model is less complex.
The output $h_t$ obtained from the bidirectional LSTM module is input into the attention module, and the attention vector $\alpha_t$ is calculated using the softmax function.
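A minimal numpy sketch of this softmax attention over the BiLSTM hidden states (one scalar score per time step; function and variable names are illustrative assumptions):

```python
import numpy as np

def attention_pool(H, w_a, b_a):
    """Softmax attention over hidden states H of shape (T, d)."""
    scores = H @ w_a + b_a               # one scalar score per time step
    e = np.exp(scores - scores.max())    # numerically stable softmax
    alpha = e / e.sum()                  # attention vector, sums to 1
    context = alpha @ H                  # weighted sum of hidden states
    return alpha, context
```

The context vector is what the downstream regularization and fully connected layers would consume; the weights `alpha` show which time steps the model attends to.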
In one embodiment, the regularization module is mainly used for processing uncertainty factors in the training process, reducing the over-fitting problem in the neural network, and further improving the accuracy of ADCBiLSTM models. It does this by randomly discarding (masking) some of the neuron outputs during the training process. Regularization techniques will set the output of certain neurons to 0 with a certain probability during the forward propagation of each training sample, thus "dropping" it. The discarded neurons will not affect subsequent neurons in this forward propagation. This way of integration helps to reduce overfitting and improve the generalization ability of the model.
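The random neuron-dropping described above is standard dropout; a minimal sketch using the common "inverted dropout" convention (rescaling kept units so the expected activation is unchanged, which is an implementation choice, not stated in the patent):

```python
import numpy as np

def dropout(x, p_drop, rng, training=True):
    """Inverted dropout: zero each unit with probability p_drop during training."""
    if not training or p_drop == 0.0:
        return x                                        # inference: identity
    mask = (rng.random(x.shape) >= p_drop).astype(x.dtype)
    return x * mask / (1.0 - p_drop)                    # rescale kept units
```

At inference time the mask is disabled, so the trained network sees undisturbed activations.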
In one embodiment, the fully connected layer acts as a "classifier" for the whole neural network. The fully connected layer integrates the attention vector features so that the features finally seen by the network are global features, and unfolds the calculation result of the ADCBiLSTM model into a one-dimensional vector to generate the final prediction output, facilitating result output.
Step 206, training the pre-training correction model through a training set and a pre-constructed loss function to obtain a trained solid rocket engine ignition process correction model. The pre-training correction model performs deep convolution feature extraction on the training set to obtain a deep convolution feature sequence; after the deep convolution feature sequence is activated, feature extraction is carried out through positive-order input and reverse-order input, and the results are spliced and fused to obtain a new feature sequence; an attention vector is generated from the new feature sequence through an attention mechanism and regularized; the regularized attention vector is integrated to generate the final prediction output; meanwhile, the prediction generation process is guided and constrained by the pre-constructed loss function.
Specifically, the ADCBiLSTM model, when training, first sets initial parameters as follows:
in_channels: the dimension of the input vector; out_channels: the number of channels produced by the convolution; kernel_size: the size of the convolution kernel; stride: the step length; padding: the number of padding layers; input_size: the size of the last dimension of the input tensor; output_size: the size of the output tensor; hidden_size: the size of the hidden state and memory cell; num_layers: the number of stacked LSTM layers; batch_first: whether the input and output tensors use the batch dimension as the first dimension; bidirectional: whether a bidirectional LSTM is used; learning_rate: the learning rate; num_epochs: the number of training rounds; dropout: the probability of discarding neurons.
Training is then started: the training set is input into the ADCBiLSTM model for forward propagation, and the loss function is calculated. A back-propagation algorithm is then used to optimize the parameters of the ADCBiLSTM model. The training process repeats this flow until the number of iterations reaches the originally set number of training rounds.
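The loop just described can be sketched as follows; a small linear model stands in for the ADCBiLSTM model, and the sizes, optimizer and epoch count are illustrative assumptions:

```python
import torch
import torch.nn as nn

# Forward pass, loss, back-propagation, parameter update, repeated for a
# fixed number of epochs — the flow described in the text.
model = nn.Linear(4, 1)                    # stand-in for the ADCBiLSTM model
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
loss_fn = nn.L1Loss()                      # MAE, as chosen in the verification

x = torch.randn(64, 4)                     # placeholder training set
y = torch.randn(64, 1)
for epoch in range(200):                   # "number of training rounds"
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)            # forward propagation + loss
    loss.backward()                        # back-propagation
    optimizer.step()                       # parameter optimization
final_loss = loss.item()
```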
During training, the prediction generation process is guided and constrained by the pre-constructed loss function. In order to measure the model's prediction accuracy from different angles, the pre-constructed loss function comprises the mean absolute error, the mean square error, the root mean square error, the coefficient of determination and the mean absolute percentage error, wherein:
The mean absolute error (MAE) represents the mean of the absolute errors between the predicted and observed values; MAE is a linear score in which all individual differences carry equal weight in the mean. The mean absolute error is expressed as:

$$\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}\left|\hat{y}_i - y_i\right|$$

where $\hat{y}_i$ represents the predicted value of the $i$-th sample and $y_i$ represents the true value of the $i$-th sample.
The mean square error (MSE) is an indicator of the difference between the model's predicted values and the actual observations, and is used to evaluate how well the model fits the given data. The mean square error is expressed as:

$$\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}\left(\hat{y}_i - y_i\right)^2$$
The root mean square error (RMSE) is the square root of the mean square error; it likewise measures the difference between predicted and observed values, but in the same units as the data. The root mean square error is expressed as:

$$\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(\hat{y}_i - y_i\right)^2}$$
The coefficient of determination $R^2$ is a statistical index used to evaluate the goodness of fit of a regression model. It represents the proportion of the variability of the dependent variable that can be explained by the model, i.e., the degree to which the model fits the data, and is expressed as:

$$R^2 = 1 - \frac{\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2}{\sum_{i=1}^{n}\left(y_i - \bar{y}\right)^2}$$

where $\bar{y}$ represents the sample mean.
The mean absolute percentage error (MAPE) is a statistical indicator that measures prediction accuracy, expressed as:

$$\mathrm{MAPE} = \frac{100\%}{n}\sum_{i=1}^{n}\left|\frac{y_i - \hat{y}_i}{y_i}\right|$$
These five pre-constructed loss functions evaluate the accuracy of the model's predictions from different angles, making the prediction results generated by the model more accurate.
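The five measures can be implemented directly from the formulas above (a straightforward NumPy sketch; the sample values are illustrative):

```python
import numpy as np

def metrics(y_true, y_pred):
    """Compute MAE, MSE, RMSE, R^2 and MAPE between true and predicted values."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_pred - y_true
    mae = np.mean(np.abs(err))
    mse = np.mean(err ** 2)
    rmse = np.sqrt(mse)
    r2 = 1.0 - np.sum(err ** 2) / np.sum((y_true - y_true.mean()) ** 2)
    mape = np.mean(np.abs(err / y_true)) * 100.0  # true values must be nonzero
    return mae, mse, rmse, r2, mape

mae, mse, rmse, r2, mape = metrics([1.0, 2.0, 4.0], [1.1, 1.9, 4.2])
```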
And step 208, inputting the test set into the solid rocket engine ignition process correction model to obtain a solid rocket engine ignition process prediction result.
Specifically, the trained solid rocket engine ignition process correction model is switched to test mode through the model.eval() function, and the test set is input for testing.
The obtained prediction result is a predicted pressure curve; the predicted pressure curve and the simulated pressure curve are plotted, and the prediction error value is calculated.
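The test step can be sketched as follows; the stand-in model and the placeholder simulated curve are assumptions for illustration only:

```python
import numpy as np
import torch
import torch.nn as nn

# Switch the trained model to test mode and predict without gradient tracking.
model = nn.Linear(4, 1)            # stand-in for the trained correction model
model.eval()                       # disables dropout and other train-only layers
with torch.no_grad():
    test_x = torch.randn(50, 4)
    predicted = model(test_x).squeeze(-1).numpy()   # predicted pressure curve

# Compare against a reference (placeholder) simulated pressure curve.
simulated = predicted + np.random.normal(0.0, 0.01, size=predicted.shape)
error = float(np.abs(predicted - simulated).mean())  # prediction error value
```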
According to the solid rocket engine ignition process model correction method, the acquired pressure sequence data are normalized to obtain a model dataset, which is divided into a training set and a test set; a pre-training correction model of the solid rocket engine ignition process is then constructed and trained with the training set and a pre-constructed loss function to obtain the trained solid rocket engine ignition process correction model. The pre-training correction model performs deep convolution feature extraction on the training set to obtain a deep convolution feature sequence; after activation, feature extraction is carried out through positive-order input and reverse-order input, and the results are spliced and fused into a new feature sequence; an attention vector is generated from the new feature sequence through an attention mechanism and regularized; the regularized attention vector is integrated to generate the final prediction output, while the prediction generation process is guided and constrained by the pre-constructed loss function. Finally, the test set is input into the solid rocket engine ignition process correction model to obtain the solid rocket engine ignition process prediction result.
On the one hand, the method for constructing the solid rocket engine ignition process correction model learns patterns and rules from a large amount of actual pressure sequence data and does not need to rely on an accurate physical model, so it has better applicability and generalization capability. On the other hand, the constructed solid rocket engine ignition process correction model combines deep convolution feature extraction with positive-order and reverse-order inputs, which effectively improves its accuracy; regularization handles uncertainty factors in the training process and reduces overfitting, further improving accuracy, which in turn improves the design efficiency of the solid rocket engine, accelerates development and reduces cost. The output prediction result can more accurately predict the pressure change of the solid rocket engine at different time points, thereby providing support for solid rocket engine design. Compared with fluid simulation, the ADCBiLSTM model provided by the invention allows parameters to be modified more conveniently and requires no operations such as grid division.
In one embodiment, the validity of the ADCBiLSTM model provided by the invention is verified as follows.
First, the constructed ADCBiLSTM model is configured with the following settings:
The deep convolution module has a 5-layer structure: the input vector dimension is set to 1; the numbers of output channels are set to 64, 32, 16, 8 and 4 in turn; all layers use a convolution kernel size of 3, a stride of 1 and a padding of 1. After feature extraction, the sequence data are input into the bidirectional LSTM module, whose input tensor size is 4, output tensor size is 1, hidden layer size is 1 and number of stacked layers is 1. The data then enter the attention module, which calculates the attention vector; the regularization module reduces overfitting during training with a dropout rate of 0.1; finally, the fully connected layer outputs the training result. The learning rate is set to 0.001 and the number of training rounds to 200.
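Under the settings just listed, the model can be assembled as the following sketch; the class name and exact wiring are assumptions, since the patent specifies the layer sizes but not the full code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ADCBiLSTM(nn.Module):
    def __init__(self):
        super().__init__()
        chans = [1, 64, 32, 16, 8, 4]              # input dim 1, then 5 conv layers
        self.convs = nn.ModuleList([
            nn.Conv1d(chans[i], chans[i + 1], kernel_size=3, stride=1, padding=1)
            for i in range(5)])
        self.bilstm = nn.LSTM(input_size=4, hidden_size=1, num_layers=1,
                              batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2, 1)                # scores over 2 * hidden_size
        self.drop = nn.Dropout(0.1)                # discard rate 0.1
        self.fc = nn.Linear(2, 1)                  # fully connected output layer

    def forward(self, x):                          # x: (batch, 1, seq_len)
        for conv in self.convs:
            x = F.relu(conv(x))                    # activation after each conv layer
        x = x.transpose(1, 2)                      # (batch, seq_len, 4)
        h, _ = self.bilstm(x)                      # (batch, seq_len, 2)
        alpha = F.softmax(self.attn(h), dim=1)     # attention weights over time
        ctx = self.drop((alpha * h).sum(dim=1))    # regularized attention pooling
        return self.fc(ctx)                        # (batch, 1) prediction

model = ADCBiLSTM()
out = model(torch.randn(8, 1, 30))
```

A training setup of Adam with learning rate 0.001 and 200 epochs would then follow, as described in the text.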
Then, the pressure sequence data are normalized and divided into a training set and a test set, and the pre-training correction model is trained with the training set and the pre-constructed loss function to obtain the trained solid rocket engine ignition process correction model. Adam is selected as the optimizer and MAE as the loss function; the plotted loss curve is shown in fig. 3.
And finally, inputting the test set into a solid rocket engine ignition process correction model to obtain a solid rocket engine ignition process prediction result, namely outputting a predicted pressure curve as shown in fig. 4.
The experiment was repeated 5 times and the mean of the prediction errors was calculated to reduce randomness, as shown in table 1.
TABLE 1
The averages of the 5 prediction errors over the test samples are MAE = 0.0063, MSE = 0.00009, RMSE = 0.0095, R² = 0.9991 and MAPE = 3.05%, showing that the solid rocket engine ignition process model correction method, device and equipment provided by the invention have high accuracy.
It should be understood that, although the steps in the flowchart of fig. 1 are shown in the sequence indicated by the arrows, they are not necessarily executed in that sequence. Unless explicitly stated herein, the execution order of these steps is not strictly limited, and they may be executed in other orders. Moreover, at least some of the steps in fig. 1 may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be executed at different moments; nor must these sub-steps or stages be executed in sequence, as they may be executed in turn or alternately with at least a portion of other steps, or of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 5, there is provided a solid rocket engine ignition process model correction device, comprising: a data processing module 402, a model building module 404, a model pre-training module 406, and a prediction result generation module 408, wherein:
The data processing module 402 is configured to obtain pressure sequence data, normalize the pressure sequence data, obtain a model dataset, and divide the model dataset into a training set and a testing set.
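The normalization and split performed by this module might look like the following sketch; min–max scaling and the 80/20 ratio are assumptions, since the text does not fix them:

```python
import numpy as np

# Min–max normalization of the pressure sequence, then a train/test split.
pressure = np.linspace(0.1, 6.0, 100)                # placeholder pressure data
normalized = (pressure - pressure.min()) / (pressure.max() - pressure.min())

split = int(0.8 * len(normalized))                   # assumed 80/20 split
train_set, test_set = normalized[:split], normalized[split:]
```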
The model construction module 404 is configured to construct a pre-trained correction model of the solid rocket engine ignition process.
The model pre-training module 406 is configured to train the pre-training correction model through a training set and a pre-constructed loss function, so as to obtain a trained correction model of the ignition process of the solid rocket engine; the training set is subjected to deep convolution feature sequence extraction by the pre-training correction model, and a deep convolution feature sequence is obtained; after the deep convolution feature sequence is activated, feature extraction is carried out through positive sequence input and reverse sequence input, and splicing and fusion are carried out, so that a new feature sequence is obtained; generating an attention vector from the new feature sequence through an attention mechanism, and regularizing the attention vector; integrating the regularized attention vectors to generate a final predicted output; meanwhile, the generated prediction result process is guided and restrained through a pre-constructed loss function.
The prediction result generating module 408 is configured to input a test set into the solid rocket engine ignition process correction model, so as to obtain a solid rocket engine ignition process prediction result.
For specific limitations on the solid rocket engine ignition process model correction apparatus, reference may be made to the above limitations on the solid rocket engine ignition process model correction method, and no further description is given here. All or part of each module in the solid rocket engine ignition process model correction device can be realized by software, hardware and a combination thereof. The above modules may be embedded in hardware or may be independent of a processor in the computer device, or may be stored in software in a memory in the computer device, so that the processor may call and execute operations corresponding to the above modules.
In one embodiment, a computer device is provided, which may be a server, the internal structure of which may be as shown in fig. 6. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer programs, and a database. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The database of the computer device is used for storing solid rocket engine ignition process model correction data. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program when executed by a processor is used for realizing a solid rocket engine ignition process model correction method.
It will be appreciated by those skilled in the art that the structure shown in FIG. 6 is merely a block diagram of some of the structures associated with the present inventive arrangements and is not limiting of the computer device to which the present inventive arrangements may be applied, and that a particular computer device may include more or fewer components than shown, or may combine some of the components, or have a different arrangement of components.
In one embodiment, a computer device is provided comprising a memory storing a computer program and a processor that when executing the computer program performs the steps of:
Step 202, obtaining pressure sequence data, carrying out normalization processing on the pressure sequence data to obtain a model data set, and dividing the model data set into a training set and a testing set.
Step 204, constructing a pre-training correction model of the solid rocket engine ignition process.
Step 206, training the pre-training correction model through a training set and a pre-constructed loss function to obtain a trained solid rocket engine ignition process correction model; the training set is subjected to deep convolution feature sequence extraction by the pre-training correction model, and a deep convolution feature sequence is obtained; after the deep convolution feature sequence is activated, feature extraction is carried out through positive sequence input and reverse sequence input, and splicing and fusion are carried out, so that a new feature sequence is obtained; generating an attention vector from the new feature sequence through an attention mechanism, and regularizing the attention vector; integrating the regularized attention vectors to generate a final predicted output; meanwhile, the generated prediction result process is guided and restrained through a pre-constructed loss function.
And step 208, inputting the test set into the solid rocket engine ignition process correction model to obtain a solid rocket engine ignition process prediction result.
Those skilled in the art will appreciate that implementing all or part of the above described methods may be accomplished by way of a computer program stored on a non-transitory computer readable storage medium, which when executed, may comprise the steps of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in embodiments provided herein may include non-volatile and/or volatile memory. The nonvolatile memory can include Read Only Memory (ROM), programmable ROM (PROM), electrically Programmable ROM (EPROM), electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double Data Rate SDRAM (DDRSDRAM), enhanced SDRAM (ESDRAM), synchronous link (SYNCHLINK) DRAM (SLDRAM), memory bus (Rambus) direct RAM (RDRAM), direct memory bus dynamic RAM (DRDRAM), and memory bus dynamic RAM (RDRAM), among others.
The technical features of the above embodiments may be arbitrarily combined, and all possible combinations of the technical features in the above embodiments are not described for brevity of description, however, as long as there is no contradiction between the combinations of the technical features, they should be considered as the scope of the description.
The above examples merely represent a few embodiments of the present invention, which are described in more detail and are not to be construed as limiting the scope of the present invention. It should be noted that it will be apparent to those skilled in the art that several variations and modifications can be made without departing from the spirit of the invention, which are all within the scope of the invention. Accordingly, the scope of the invention should be assessed as that of the appended claims.
Claims (9)
1. A method for modifying a solid rocket engine ignition process model, the method comprising:
Acquiring pressure intensity sequence data, carrying out normalization processing on the pressure intensity sequence data to obtain a model data set, and dividing the model data set into a training set and a testing set;
Constructing a pre-training correction model of the ignition process of the solid rocket engine;
Training the pre-training correction model through the training set and a pre-constructed loss function to obtain a trained solid rocket engine ignition process correction model; the training set is subjected to deep convolution feature sequence extraction by the pre-training correction model, and a deep convolution feature sequence is obtained; after the deep convolution feature sequence is activated, feature extraction is carried out through positive sequence input and reverse sequence input, and splicing and fusion are carried out, so that a new feature sequence is obtained; generating an attention vector from the new feature sequence through an attention mechanism, and regularizing the attention vector; integrating the regularized attention vectors to generate a final predicted output; meanwhile, guiding and restraining the generated prediction result process through a pre-constructed loss function;
And inputting the test set into the solid rocket engine ignition process correction model to obtain a solid rocket engine ignition process prediction result.
2. The method for correcting the ignition process model of the solid rocket engine according to claim 1, wherein the pre-training correction model comprises a depth convolution module, a bidirectional LSTM module, an attention module, a regularization module and a full connection layer;
Extracting the depth convolution feature sequence of the input training set through the depth convolution module to obtain a depth convolution feature sequence, and activating the depth convolution feature sequence;
Respectively carrying out positive sequence input and negative sequence input on the activated depth convolution feature sequence through the bidirectional LSTM module, and carrying out splicing and fusion after feature extraction to obtain a new feature sequence;
calculating the new feature sequence through the attention module to obtain an attention vector;
Regularizing the attention vector through the regularization module;
and integrating the regularized attention vectors through the full connection layer to generate a final prediction output.
3. The method for correcting the ignition process model of the solid rocket engine according to claim 2, wherein the calculation formula of the depth convolution module is expressed as:
$$C = W \ast X + b$$

where $C$ represents the deep convolution feature sequence, $W$ represents the weight, $X$ represents the input sequence data, $b$ represents the bias, and $\ast$ denotes the convolution operation.
4. A solid rocket engine firing process model correction method according to claim 3, wherein said deep convolution module comprises a 5-layer convolution structure.
5. The solid rocket engine ignition process model correction method according to claim 2, wherein the calculation formula by the bidirectional LSTM module is expressed as:
$$h_t = W_{\overrightarrow{h}}\,\overrightarrow{h}_t + W_{\overleftarrow{h}}\,\overleftarrow{h}_t + b_h$$

where $h_t$ represents the hidden layer state; $\overrightarrow{h}_t$ represents the positive-order (forward) feature; $\overleftarrow{h}_t$ represents the reverse-order (backward) feature; $W_{\overrightarrow{h}}$ and $W_{\overleftarrow{h}}$ respectively represent the weight parameters of the forward and backward hidden layers; and $b_h$ represents the bias parameter of the hidden layer, the features being taken from the activated feature sequence.
6. The method for correcting the ignition process model of the solid rocket engine according to claim 2, wherein the calculation formula of the attention module is expressed as:
$$\alpha = \mathrm{softmax}\left(W h + b\right)$$

where $\alpha$ represents the attention vector, $h$ represents the hidden layer state, $W$ represents the weights of the input features, and $b$ represents the bias of the input features.
7. A method of modifying a solid rocket engine firing process model according to claim 1 or claim 2 wherein the pre-constructed loss function comprises mean absolute error, mean square error, root mean square error, deterministic coefficient and mean absolute percentage error.
8. A solid rocket engine ignition process model correction device, the device comprising:
The data processing module is used for acquiring pressure intensity sequence data, carrying out normalization processing on the pressure intensity sequence data to obtain a model data set, and dividing the model data set into a training set and a testing set;
The model construction module is used for constructing a pre-training correction model of the solid rocket engine ignition process;
The model pre-training module is used for training the pre-training correction model through the training set and a pre-constructed loss function to obtain a trained solid rocket engine ignition process correction model; the training set is subjected to deep convolution feature sequence extraction by the pre-training correction model, and a deep convolution feature sequence is obtained; after the deep convolution feature sequence is activated, feature extraction is carried out through positive sequence input and reverse sequence input, and splicing and fusion are carried out, so that a new feature sequence is obtained; generating an attention vector from the new feature sequence through an attention mechanism, and regularizing the attention vector; integrating the regularized attention vectors to generate a final predicted output; meanwhile, guiding and restraining the generated prediction result process through a pre-constructed loss function;
And the prediction result generation module is used for inputting the test set into the solid rocket engine ignition process correction model to obtain a solid rocket engine ignition process prediction result.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any of claims 1 to 7 when the computer program is executed.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410569955.2A CN118153459B (en) | 2024-05-09 | 2024-05-09 | Solid rocket engine ignition process model correction method, device and equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN118153459A true CN118153459A (en) | 2024-06-07 |
CN118153459B CN118153459B (en) | 2024-08-06 |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116481630A (en) * | 2023-04-03 | 2023-07-25 | 北京科技大学 | Jet transient sound field reconstruction method based on equivalent source and convolution network |
US20240054329A1 (en) * | 2022-08-15 | 2024-02-15 | Arizona Board Of Regents On Behalf Of Arizona State University | Systems and methods for a bayesian spatiotemporal graph transformer network for multi-aircraft trajectory prediction |
CN117763933A (en) * | 2023-02-28 | 2024-03-26 | 沈阳航空航天大学 | Solid rocket engine time sequence parameter prediction method and prediction system based on deep learning |
WO2024087129A1 (en) * | 2022-10-24 | 2024-05-02 | 大连理工大学 | Generative adversarial multi-head attention neural network self-learning method for aero-engine data reconstruction |