CN118153459A - Solid rocket engine ignition process model correction method, device and equipment - Google Patents

Solid rocket engine ignition process model correction method, device and equipment

Info

Publication number: CN118153459A
Application number: CN202410569955.2A
Authority: CN (China)
Prior art keywords: model; solid rocket; rocket engine; ignition process; engine ignition
Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Other languages: Chinese (zh)
Other versions: CN118153459B
Inventors: 王东辉; 杨慧欣; 高经纬; 武泽平; 张为华
Current assignee: National University of Defense Technology (the listed assignee may be inaccurate)
Original assignee: National University of Defense Technology
Events: application filed by National University of Defense Technology; priority to CN202410569955.2A; publication of CN118153459A; application granted; publication of CN118153459B; legal status: Active

Classifications

    • G06F30/27: Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
    • G06F18/253: Fusion techniques of extracted features
    • G06N3/0442: Recurrent networks characterised by memory or gating, e.g. long short-term memory [LSTM] or gated recurrent units [GRU]
    • G06N3/045: Combinations of networks
    • G06N3/0464: Convolutional networks [CNN, ConvNet]
    • G06N3/084: Backpropagation, e.g. using gradient descent
    • Y02T10/40: Engine management systems

Abstract

The invention relates to a method, device and equipment for correcting a solid rocket engine ignition process model. Acquired pressure sequence data are normalized to obtain a model dataset, which is divided into a training set and a test set; a pre-training correction model of the solid rocket engine ignition process is then constructed and trained with the training set and a pre-constructed loss function to obtain a trained correction model; finally, the test set is input into the correction model to obtain a prediction of the solid rocket engine ignition process. The proposed correction model has good applicability and generalization capability, effectively improves the accuracy of ignition-process model correction, improves the design efficiency of solid rocket engines, accelerates development, and reduces cost.

Description

Solid rocket engine ignition process model correction method, device and equipment

Technical Field

The present invention relates to the technical field of solid rocket engines, and in particular to a method, device and equipment for correcting a solid rocket engine ignition process model.

Background

The ignition transient of a solid rocket engine is a crucial phase of engine operation: the interval from the moment the igniter ignites the fuel until the combustion flow gradually settles into quasi-steady-state flow. During this process, the success and quality of ignition are essential to normal engine operation; the ignition transient directly affects the performance, efficiency, reliability and safety of the engine. The key to the ignition transient is whether the igniter can effectively ignite the fuel and propagate stable combustion throughout the engine interior. Success or failure of ignition directly determines whether the engine can start smoothly and operate normally; if ignition fails or combustion is insufficient, the engine cannot generate enough thrust, cannot provide the required power, or cannot work at all.

Therefore, the factors affecting engine performance, efficiency, reliability and safety mainly include the completeness of fuel combustion during ignition, the combustion time, and the stability of the combustion process. An unstable ignition transient or incomplete combustion leads to degraded performance, low efficiency, and even unsafe behavior. To ensure normal operation of a solid rocket engine, a series of measures must be taken during the ignition transient, such as optimizing the igniter design, improving the quality of fuel combustion, and optimizing the combustion chamber structure, so as to guarantee a good ignition effect and a stable combustion process.

Summary of the Invention

Based on this, it is necessary to address the above technical problems by providing a solid rocket engine ignition process model correction method, device and equipment. By constructing a simulation model of the solid rocket engine ignition process, the approach can adaptively learn and adjust model parameters and perform model correction on the transient simulation of the ignition process, thereby improving design efficiency, accelerating development, and reducing cost.

A solid rocket engine ignition process model correction method comprises:

acquiring pressure sequence data, normalizing the pressure sequence data to obtain a model dataset, and dividing the model dataset into a training set and a test set;

constructing a pre-training correction model of the solid rocket engine ignition process;

training the pre-training correction model with the training set and a pre-constructed loss function to obtain a trained solid rocket engine ignition process correction model, wherein the pre-training correction model extracts a deep convolutional feature sequence from the input training set; after activation, the feature sequence is fed in forward order and reverse order for feature extraction, and the results are concatenated and fused into a new feature sequence; the new feature sequence is passed through an attention mechanism to generate an attention vector, which is regularized; the regularized attention vector is integrated to generate the final prediction output; meanwhile, the pre-constructed loss function guides and constrains the prediction process;

inputting the test set into the solid rocket engine ignition process correction model to obtain a solid rocket engine ignition process prediction result.

In one embodiment, the pre-training correction model includes a deep convolution module, a bidirectional LSTM module, an attention module, a regularization module, and a fully connected layer;

the deep convolution module extracts a deep convolutional feature sequence from the input training set and activates it;

the bidirectional LSTM module takes the activated feature sequence as forward-order and reverse-order inputs, and after feature extraction concatenates and fuses the results into a new feature sequence;

the attention module computes an attention vector from the new feature sequence;

the regularization module regularizes the attention vector;

the fully connected layer integrates the regularized attention vector to generate the final prediction output.

In one embodiment, the computation of the deep convolution module is expressed as:

F = W * X + b

where F denotes the deep convolutional feature sequence, W the weights, X the input sequence data, and b the bias.

In one embodiment, the deep convolution module includes a 5-layer convolution structure.

In one embodiment, the computation of the bidirectional LSTM module is expressed as:

h_t = σ(W_x x_t + W_x' x'_t + b_h)

where h_t denotes the hidden-layer state; x_t the forward-order input and x'_t the reverse-order input, both drawn from the activated feature sequence; W_x and W_x' the weight parameters of the forward and reverse hidden layers; and b_h the bias parameter of the hidden layer.

In one embodiment, the computation of the attention module is expressed as:

s = softmax(W_a h + b_a)

where s denotes the attention vector, h the hidden-layer state, W_a the weights of the input features, and b_a the bias of the input features.

In one embodiment, the pre-constructed loss functions include mean absolute error (MAE), mean square error (MSE), root mean square error (RMSE), coefficient of determination (R²), and mean absolute percentage error (MAPE).
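The patent names these five metrics without reproducing their formulas in this text; the standard definitions can be sketched in numpy as follows (the function name and return format are illustrative, not from the patent):

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """Standard definitions of MAE, MSE, RMSE, R² and MAPE (illustrative)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_true - y_pred
    mae = np.mean(np.abs(err))                       # mean absolute error
    mse = np.mean(err ** 2)                          # mean square error
    rmse = np.sqrt(mse)                              # root mean square error
    ss_res = np.sum(err ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot                       # coefficient of determination
    mape = np.mean(np.abs(err / y_true)) * 100.0     # mean absolute percentage error (%)
    return {"MAE": mae, "MSE": mse, "RMSE": rmse, "R2": r2, "MAPE": mape}

print(regression_metrics([1.0, 2.0, 4.0], [1.0, 2.0, 4.0]))
```

A perfect prediction yields zero error metrics and R² = 1; MAPE assumes no zero targets.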

A solid rocket engine ignition process model correction device comprises:

a data processing module for acquiring pressure sequence data, normalizing the pressure sequence data to obtain a model dataset, and dividing the model dataset into a training set and a test set;

a model building module for constructing a pre-training correction model of the solid rocket engine ignition process;

a model pre-training module for training the pre-training correction model with the training set and a pre-constructed loss function to obtain a trained solid rocket engine ignition process correction model, wherein the pre-training correction model extracts a deep convolutional feature sequence from the input training set; after activation, the feature sequence is fed in forward order and reverse order for feature extraction, and the results are concatenated and fused into a new feature sequence; the new feature sequence is passed through an attention mechanism to generate an attention vector, which is regularized; the regularized attention vector is integrated to generate the final prediction output; meanwhile, the pre-constructed loss function guides and constrains the prediction process;

a prediction result generation module for inputting the test set into the solid rocket engine ignition process correction model to obtain a solid rocket engine ignition process prediction result.

A computer device comprises a memory and a processor, the memory storing a computer program, the processor implementing the steps of any of the above solid rocket engine ignition process model correction methods when executing the computer program.

In the above method, device and equipment, the acquired pressure sequence data are normalized to obtain a model dataset, which is divided into a training set and a test set; a pre-training correction model of the solid rocket engine ignition process is constructed and trained with the training set and a pre-constructed loss function to obtain a trained correction model, in which a deep convolutional feature sequence is extracted from the input training set, activated, fed in forward and reverse order for feature extraction, and concatenated into a new feature sequence; the new feature sequence generates an attention vector through an attention mechanism, the attention vector is regularized and then integrated to produce the final prediction output, while the loss function guides and constrains the prediction process; finally, the test set is input into the correction model to obtain the ignition-process prediction result. The correction model constructed by the invention learns patterns and regularities from a large amount of actual pressure sequence data without relying on a precise physical model, and therefore has good applicability and generalization capability. Moreover, combining deep convolutional feature extraction with forward-order and reverse-order inputs effectively improves the accuracy of the correction model, and regularization handles uncertain factors in the training process, further improving model accuracy, raising the design efficiency of solid rocket engines, accelerating development, and reducing cost. The output predictions describe the pressure variation of the solid rocket engine at different time points more accurately, providing support for solid rocket engine design.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a schematic flow chart of a solid rocket engine ignition process model correction method in one embodiment;

FIG. 2 is a schematic diagram of the module structure of the solid rocket engine ignition process correction model in one embodiment;

FIG. 3 is a schematic diagram of the loss function during training of the correction model in one embodiment;

FIG. 4 is a schematic diagram of the curves of predicted values and original values of the correction model in one embodiment;

FIG. 5 is a schematic diagram of the structural framework of a solid rocket engine ignition process model correction device in one embodiment;

FIG. 6 is a schematic diagram of the internal structure of a computer device in one embodiment.

DETAILED DESCRIPTION

To make the purpose, technical solution and advantages of the present invention clearer, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are intended only to explain the present invention and not to limit it.

As is well known, the ignition process of a solid rocket engine is affected by many uncertainties and environmental variations, such as changes in fuel and temperature. In the prior art, model correction for the ignition process is generally performed with fluid transient simulation. However, fluid transient simulation of the ignition process suffers from complex parameter modification and long simulation times. Some have therefore proposed appropriately simplifying the engine ignition transient, for example with a two-dimensional unsteady computation model; such a model, however, is only suitable for describing the ignition transient of axisymmetric solid rocket engines.

In implementing the present technical solution, the inventors found that deep learning, trained on large amounts of data, can learn a more accurate ignition model and correct existing models. Compared with traditional physical models, it better captures nonlinear relationships and complex dynamic behavior. Deep learning has already been applied in the solid rocket engine field, for example to reconstruct the internal engine structure from dual views and to perform real-time defect detection on propellant grains. Deep learning methods can adapt to different working conditions and environmental changes by adaptively learning and adjusting model parameters. The inventors therefore propose a deep-learning-based model correction for the transient simulation of the solid rocket engine ignition process: a correction model of the ignition process is constructed that fuses an attention mechanism with a long short-term memory network to predict the time-pressure curve of the engine; sequence data obtained from transient simulation are input for training, the error between predicted and simulated values is computed, and finally the predicted curve, the simulation curve and the error values are output automatically and the trained deep learning model is saved. The output can more accurately predict the pressure variation of the solid rocket engine at different time points, providing support for solid rocket engine design. It should be noted that, for convenience of description and reference, the inventors named the constructed correction model the ADCBiLSTM model.

The implementation of the present invention is described in detail below with reference to the accompanying drawings.

In one embodiment, as shown in FIG. 1, a solid rocket engine ignition process model correction method is provided, comprising the following steps:

Step 202: acquire pressure sequence data, normalize the pressure sequence data to obtain a model dataset, and divide the model dataset into a training set and a test set.

Specifically, time-pressure sequence data at different mass flow rates are acquired. During training, the sequence data are read in with a size of 691×9; the dataset columns are time and pressure0.3 through pressure1.0. The data are normalized with the MinMaxScaler function and then divided into a training set and a test set, where the training set comprises Train_X (pressure0.3–0.5) and Train_Y (pressure0.6), and the test set comprises Test_X (pressure0.7–0.9) and Test_Y (pressure1.0). Normalization improves the convergence speed and the accuracy of the model.
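As a rough illustration of this step, the sketch below reproduces column-wise min-max scaling (what sklearn's MinMaxScaler computes with default settings) and the column split described above, on synthetic stand-in data; the random values and function name are assumptions, not the patent's actual dataset:

```python
import numpy as np

def min_max_scale(x, feature_range=(0.0, 1.0)):
    """Column-wise min-max scaling, mirroring MinMaxScaler's default behavior."""
    x = np.asarray(x, dtype=float)
    lo, hi = feature_range
    x_min = x.min(axis=0)
    x_max = x.max(axis=0)
    return lo + (x - x_min) * (hi - lo) / (x_max - x_min)

# Toy stand-in for the 691x9 time/pressure table described above (values are made up).
rng = np.random.default_rng(0)
data = rng.uniform(0.0, 8.0, size=(691, 9))
scaled = min_max_scale(data)

# Split by columns as in the text: inputs pressure0.3-0.5 / target pressure0.6,
# and test inputs pressure0.7-0.9 / target pressure1.0 (column 0 is time).
train_X, train_Y = scaled[:, 1:4], scaled[:, 4]
test_X, test_Y = scaled[:, 5:8], scaled[:, 8]
print(scaled.min(), scaled.max())
```

After scaling, every column spans exactly [0, 1], which is what speeds up convergence during training.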

Step 204: construct a pre-training correction model of the solid rocket engine ignition process.

Specifically, the constructed pre-training correction model includes a deep convolution module, a bidirectional LSTM module, an attention module, a regularization module and a fully connected layer, wherein:

the deep convolution module extracts a deep convolutional feature sequence from the input training set, and the feature sequence is activated through the activation layer;

the bidirectional LSTM module takes the activated feature sequence as forward-order and reverse-order inputs, and after feature extraction concatenates and fuses the results into a new feature sequence;

the attention module computes an attention vector from the new feature sequence;

the regularization module regularizes the attention vector;

the fully connected layer integrates the regularized attention vector to generate the final prediction output.

More specifically, the deep convolution module generally has a 5-layer convolution structure, and more layers can be used as the situation requires. Information is convolved in the convolution layers to obtain a feature representation of the input. Convolution is thus the core component, with excellent feature-extraction capability for data exhibiting local features. A deep convolutional neural network with a 5-layer convolution structure is built to extract features: denote the input sequence data as X and the convolution kernel size as k; let l be the layer index of a kernel, i the index of a kernel in the current layer, and j the index of a kernel in the previous layer. Forward propagation through the deep convolution module then yields the deep convolutional feature sequence F.

In one embodiment, the deep convolution module can be expressed in matrix form as:

F = W * X + b

where F denotes the deep convolutional feature sequence, W the weights, X the input sequence data, and b the bias.
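A minimal sketch of one convolution layer's forward pass (a kernel of weights slid over the input sequence, plus a bias) follows; 'valid' padding and the hand-picked difference kernel are assumptions, since the patent does not specify padding or kernel values:

```python
import numpy as np

def conv1d_layer(x, w, b):
    """One 1-D convolution layer: apply kernel w to input sequence x, add bias b.
    x: (length,) input sequence; w: (k,) kernel; b: scalar bias ('valid' padding)."""
    k = len(w)
    out = np.array([np.dot(w, x[i:i + k]) for i in range(len(x) - k + 1)])
    return out + b

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
w = np.array([1.0, 0.0, -1.0])  # simple difference kernel (illustrative)
f = conv1d_layer(x, w, b=0.0)
print(f)  # -> [-2. -2. -2.]
```

Stacking five such layers, each feeding the next, gives the 5-layer structure described above.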

To effectively capture information in more dimensions, a nonlinear mapping must be introduced into the modeling process to enhance the model's expressive power in more complex spaces; this nonlinear mapping is called an activation function.

In one embodiment, the activation layer is computed as:

F_a = g(F)

where g denotes the activation function. Feeding the deep convolutional feature sequence F into the activation function yields the activated feature sequence F_a. It will be appreciated that the activation function can be used to mitigate the vanishing-gradient problem.
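The activation formula itself is not reproduced in this text; as one common choice that mitigates vanishing gradients, ReLU can be sketched as follows (an assumption for illustration, not necessarily the patent's activation):

```python
import numpy as np

def relu(x):
    """Elementwise ReLU activation: g(x) = max(0, x)."""
    return np.maximum(0.0, np.asarray(x, dtype=float))

f = np.array([-2.0, 0.5, 3.0])  # a toy deep convolutional feature sequence
f_a = relu(f)                   # activated feature sequence
print(f_a)  # -> [0.  0.5 3. ]
```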

In one embodiment, the activated feature sequence F_a is input into the bidirectional LSTM module, and the hidden-layer state h_t at each level is computed as:

h_t = σ(W_x x_t + W_x' x'_t + b_h)

where h_t denotes the hidden-layer state, x_t the forward-order input, x'_t the reverse-order input, W_x and W_x' the weight parameters of the forward and reverse hidden layers, and b_h the bias parameter of the hidden layer.

It will be appreciated that a long short-term memory network (LSTM) contains three internal "gates" that control which historical and current information is discarded or retained: the forget gate, the input gate and the output gate. The input sequence data first pass through the forget gate, which uses the previous long-term memory c_{t-1} and short-term memory h_{t-1} to compute the information to be forgotten, denoted f_t; the input gate then computes the information to be retained, denoted i_t, and the candidate memory cell at the current time step is saved as c̃_t; finally the output value of the output gate is computed as o_t. Combining the forget gate and the input gate yields the new long-term memory cell c_t; c_t is passed through the tanh function by the output gate, and together with the sigmoid gate values the hidden-layer output h_t is computed.

A bidirectional LSTM consists of a forward LSTM and a backward LSTM: the input sequence is fed to the two networks in forward and reverse order respectively for feature extraction, and the outputs are finally concatenated into a new feature sequence.
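The gate equations and forward/reverse concatenation described above can be sketched as follows; this simplified illustration reuses one set of weights (W, U, b) for both directions, whereas a real bidirectional LSTM has separate parameters per direction, and all shapes and names are assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_cell(x_t, h_prev, c_prev, W, U, b):
    """One LSTM step with forget gate f, input gate i, candidate c~, output gate o.
    W: (4h, d) input weights; U: (4h, h) recurrent weights; b: (4h,) bias (assumed layout)."""
    z = W @ x_t + U @ h_prev + b
    h = len(h_prev)
    f = sigmoid(z[0:h])        # forget gate f_t
    i = sigmoid(z[h:2 * h])    # input gate i_t
    g = np.tanh(z[2 * h:3 * h])  # candidate memory c~_t
    o = sigmoid(z[3 * h:4 * h])  # output gate o_t
    c = f * c_prev + i * g     # new long-term memory c_t
    return o * np.tanh(c), c   # hidden output h_t and cell state

def bilstm(xs, W, U, b, h_size):
    """Run the sequence forward and reversed, then concatenate per time step."""
    def run(seq):
        h, c = np.zeros(h_size), np.zeros(h_size)
        out = []
        for x_t in seq:
            h, c = lstm_cell(x_t, h, c, W, U, b)
            out.append(h)
        return out
    fwd = run(xs)
    bwd = run(xs[::-1])[::-1]  # reverse pass, re-aligned to forward time order
    return [np.concatenate([hf, hb]) for hf, hb in zip(fwd, bwd)]

rng = np.random.default_rng(0)
d, h = 3, 4
W = rng.normal(size=(4 * h, d))
U = rng.normal(size=(4 * h, h))
b = np.zeros(4 * h)
xs = [rng.normal(size=d) for _ in range(5)]
out = bilstm(xs, W, U, b, h)
print(len(out), out[0].shape)  # -> 5 (8,)
```

Each time step's fused feature is the concatenation of the forward and backward hidden states, doubling the feature dimension.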

In one embodiment, the attention module is computed as:

$$a = \mathrm{softmax}(W h_t + b)$$

where $a$ denotes the attention vector; $h_t$ the hidden-layer state; $W$ the weight applied to the input features; and $b$ the bias of the input features.

The attention mechanism allows the model to focus on different parts of the input data when making predictions or generating outputs. It captures global and local dependencies directly, requires fewer parameters, and keeps the model complexity low.

The output $h_t$ of the bidirectional LSTM module is fed into the attention module, and the softmax function yields the attention vector $a$.
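A minimal numeric sketch of this step (illustrative only: the scalar weight `w` and bias `b` are hypothetical placeholders for the module's learned parameters, and real hidden states are vectors rather than scalars):

```python
import math

def softmax(scores):
    # Numerically stable softmax: subtract the max before exponentiating.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention_vector(hidden_states, w, b):
    # Score each hidden state h_t as w*h_t + b, then normalise with softmax,
    # mirroring a = softmax(W h_t + b) in the scalar case.
    scores = [w * h + b for h in hidden_states]
    return softmax(scores)

a = attention_vector([0.2, 0.9, 0.4], w=1.0, b=0.0)
```

The resulting weights sum to one, and the largest weight lands on the hidden state with the highest score, which is exactly what lets the downstream layers emphasise the most informative time steps.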

In one embodiment, the regularization module is introduced mainly to handle uncertainty during training, reduce overfitting in the neural network, and further improve the accuracy of the ADCBiLSTM model. It works by randomly dropping (masking) the outputs of a fraction of the neurons during training: in the forward pass of each training sample, the outputs of selected neurons are set to 0 with a given probability, so that the dropped neurons do not influence downstream neurons in that pass. This implicit ensembling reduces overfitting and improves the generalization ability of the model.
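A sketch of this mechanism as "inverted" dropout (an assumption about the exact variant: the patent does not say whether surviving activations are rescaled, but rescaling by 1/(1-p) is the common framework behaviour and keeps the expected activation unchanged):

```python
import random

def dropout(values, p, training=True, rng=None):
    # Training: zero each activation with probability p, scale survivors by
    # 1/(1-p) so the expected value is preserved. Inference: pass through.
    if not training or p == 0.0:
        return list(values)
    rng = rng or random.Random(0)
    keep = 1.0 - p
    return [v / keep if rng.random() < keep else 0.0 for v in values]
```

Because dropped neurons contribute nothing in that forward pass, each minibatch effectively trains a different thinned sub-network, which is where the ensembling effect comes from.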

In one embodiment, the fully connected layer acts as the "classifier" of the network. It integrates the attention-vector features so that the network ultimately sees a global feature, and flattens the output of the ADCBiLSTM model into a one-dimensional vector to generate the final prediction output in a convenient form.

Step 206: train the pre-trained correction model with the training set and the pre-constructed loss functions to obtain the trained solid rocket engine ignition process correction model. The pre-trained correction model extracts a deep convolutional feature sequence from the input training set; after activation, the sequence undergoes feature extraction via positive-order and reverse-order inputs, and the results are concatenated and fused into a new feature sequence; the new feature sequence is passed through the attention mechanism to generate an attention vector, which is then regularized; the regularized attention vector is integrated to generate the final prediction output. Throughout, the pre-constructed loss functions guide and constrain the prediction process.

Specifically, when training the ADCBiLSTM model, the initial parameters are first set as follows:

in_channels: the dimension of the input vector; out_channels: the number of channels produced by each convolution; kernel_size: the size of the convolution kernel; stride: the convolution step; padding: the number of padding layers; input_size: the size of the last dimension of the input tensor; output_size: the size of the output tensor; hidden_size: the size of the hidden state and memory cell; num_layers: the number of stacked LSTM layers; batch_first: whether the input and output tensors use the batch dimension as the first dimension; bidirectional: whether the LSTM is bidirectional; learning_rate: the learning rate; num_epochs: the number of training epochs; dropout: the probability of dropping a neuron.

Training then begins: the training set is fed into the ADCBiLSTM model for forward propagation and the loss function is computed; the backpropagation algorithm is then used to train the model and optimize its parameters. This process is repeated until the number of iterations reaches the initially configured epoch count.
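The loop structure (forward propagation, loss, backpropagation-style update, repeat for a preset number of epochs) can be sketched with a one-parameter linear model standing in for the ADCBiLSTM; the model, the data, and the learning rate below are illustrative, not values from the patent:

```python
def train(data, lr=0.1, epochs=200):
    # data: list of (x, y) pairs; model: y_hat = w*x; loss: mean squared error.
    w = 0.0
    for _ in range(epochs):              # loop until the preset epoch count
        grad = 0.0
        for x, y in data:                # forward propagation
            y_hat = w * x
            grad += 2 * (y_hat - y) * x  # gradient of the squared-error term
        w -= lr * grad / len(data)       # backpropagation-style parameter update
    return w

w = train([(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)])
```

On this toy data the fitted weight converges toward 2, since every pair satisfies y = 2x; the real training replaces the scalar gradient with automatic differentiation through the whole network.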

During training, the pre-constructed loss functions guide and constrain the prediction process. To measure prediction accuracy from different angles, five metrics are pre-constructed: mean absolute error, mean square error, root mean square error, coefficient of determination, and mean absolute percentage error, where:

The mean absolute error (MAE) is the average of the absolute errors between predicted and observed values; it is a linear score in which every individual difference carries equal weight in the mean. It is expressed as:

$$\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}\left|\hat{y}_i - y_i\right|$$

where $\hat{y}_i$ denotes the predicted value of the $i$-th sample and $y_i$ the true value of the $i$-th sample.

The mean square error (MSE) measures the difference between the model's predictions and the actual observations, and evaluates how well the model fits the given data. It is expressed as:

$$\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}\left(\hat{y}_i - y_i\right)^2$$

The root mean square error (RMSE) likewise measures the difference between predictions and observations and evaluates the fit on the given data. It is expressed as:

$$\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(\hat{y}_i - y_i\right)^2}$$

The coefficient of determination $R^2$ is a statistical indicator of the goodness of fit of a regression model. It gives the proportion of the variability of the dependent variable that the model explains, i.e. how well the model fits the data, and is expressed as:

$$R^2 = 1 - \frac{\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2}{\sum_{i=1}^{n}\left(y_i - \bar{y}\right)^2}$$

where $\bar{y}$ denotes the sample mean.

The mean absolute percentage error (MAPE) is a statistical indicator of prediction accuracy, expressed as:

$$\mathrm{MAPE} = \frac{100\%}{n}\sum_{i=1}^{n}\left|\frac{\hat{y}_i - y_i}{y_i}\right|$$

With these five pre-constructed loss functions, the accuracy of the model's predictions can be evaluated from different angles, making the generated predictions more precise.
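The five metrics above can be implemented directly from their formulas; this is a plain-Python sketch assuming `y_true` and `y_pred` are equal-length numeric sequences (the function and variable names are mine, not the patent's), with MAPE additionally assuming no true value is zero:

```python
import math

def mae(y_true, y_pred):
    # Mean absolute error: average |prediction - truth|.
    return sum(abs(p - t) for p, t in zip(y_pred, y_true)) / len(y_true)

def mse(y_true, y_pred):
    # Mean square error: average squared difference.
    return sum((p - t) ** 2 for p, t in zip(y_pred, y_true)) / len(y_true)

def rmse(y_true, y_pred):
    # Root mean square error: square root of the MSE.
    return math.sqrt(mse(y_true, y_pred))

def r2(y_true, y_pred):
    # Coefficient of determination: 1 - SS_res / SS_tot.
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

def mape(y_true, y_pred):
    # Mean absolute percentage error, expressed in percent.
    return 100.0 / len(y_true) * sum(abs((p - t) / t)
                                     for p, t in zip(y_pred, y_true))
```

A perfect prediction gives MAE = MSE = RMSE = MAPE = 0 and $R^2 = 1$, which is the direction the reported results in the validation section approach.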

Step 208: input the test set into the solid rocket engine ignition process correction model to obtain the solid rocket engine ignition process prediction result.

Specifically, the trained solid rocket engine ignition process correction model is switched to evaluation mode via the model.eval() function, and the test set is fed in for testing.

The prediction result is a predicted pressure curve; it is plotted together with the simulated pressure curve, and the prediction error is computed.

In the solid rocket engine ignition process model correction method above, the acquired pressure sequence data are normalized to form a model data set, which is divided into a training set and a test set; a pre-trained correction model of the solid rocket engine ignition process is constructed; the pre-trained correction model is trained with the training set and the pre-constructed loss functions to obtain a trained solid rocket engine ignition process correction model, in which the model extracts a deep convolutional feature sequence from the input training set, the activated sequence is fed in positive and reverse order for feature extraction and concatenated into a new feature sequence, the new feature sequence is passed through the attention mechanism to generate an attention vector, the attention vector is regularized and then integrated to generate the final prediction output, and the pre-constructed loss functions guide and constrain the prediction process; finally, the test set is input into the correction model to obtain the ignition process prediction result. The correction model constructed by the present invention learns patterns and regularities from a large amount of actual pressure sequence data, does not rely on an exact physical model, and therefore has better applicability and generalization ability. Moreover, combining deep convolutional feature extraction with positive- and reverse-order inputs effectively improves the accuracy of the correction model, while regularization handles uncertainty during training and reduces overfitting, further improving model accuracy, raising the design efficiency of solid rocket engines, accelerating development, and reducing cost. The output predictions capture the pressure of the solid rocket engine at different points in time more accurately, thereby supporting engine design. Compared with fluid simulation, the ADCBiLSTM model provided by the present invention makes parameter modification more convenient and requires no operations such as mesh generation.

In one embodiment, the effectiveness of the ADCBiLSTM model provided by the present invention is verified as follows.

First, the established ADCBiLSTM model is configured as follows:

The deep convolution module has a five-layer structure. The input vector dimension is set to 1, and the numbers of output channels of the successive convolutions are set to 64, 32, 16, 8, and 4; every layer uses a convolution kernel of size 3, a stride of 1, and padding of 1. After feature extraction, the sequence data are fed into the bidirectional LSTM module, whose input tensor size is 4, output tensor size is 1, hidden size is 1, and number of stacked layers is 1. The attention module then computes the attention vector; the regularization module is used to reduce overfitting during training, with the dropout rate set to 0.1; finally, the fully connected layer outputs the training result. The learning rate is set to 0.001 and the number of training epochs to 200.
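A quick consistency check of that configuration (assuming the standard 1-D convolution output-length formula; the example sequence length of 500 is arbitrary, not a value from the patent):

```python
def conv1d_out_len(n, kernel=3, stride=1, padding=1):
    # Standard 1-D convolution output-length formula.
    return (n + 2 * padding - kernel) // stride + 1

# With kernel 3, stride 1, padding 1, the sequence length is preserved
# through each of the five layers; only the channel count changes.
channels = [1, 64, 32, 16, 8, 4]
length = 500
for _ in channels[1:]:
    length = conv1d_out_len(length)
```

This shows why the chosen kernel/stride/padding combination is convenient for sequence models: the temporal resolution of the pressure series is untouched while the channel depth is progressively reduced from 64 down to 4.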

Then, the pressure sequence data are normalized and divided into a training set and a test set, and the pre-trained correction model is trained with the training set and the pre-constructed loss function to obtain the trained solid rocket engine ignition process correction model. Adam is chosen as the optimizer and MAE as the training loss; the resulting loss curve is plotted in FIG. 3.

Finally, the test set is input into the solid rocket engine ignition process correction model to obtain the ignition process prediction result, and the predicted pressure curve is output as shown in FIG. 4.

The experiment was repeated 5 times and the errors of the prediction results were averaged to reduce randomness, as shown in Table 1.

Table 1

Averaged over the 5 runs on the test samples, the prediction errors are MAE = 0.0063, MSE = 0.00009, RMSE = 0.0095, R² = 0.9991, and MAPE = 3.05%, indicating that the solid rocket engine ignition process model correction method, device, and equipment proposed by the present invention achieve high accuracy.

It should be understood that, although the steps in the flowchart of FIG. 1 are displayed in the order indicated by the arrows, they are not necessarily executed in that order. Unless explicitly stated herein, there is no strict ordering constraint on their execution, and they may be executed in other orders. Moreover, at least some of the steps in FIG. 1 may comprise multiple sub-steps or stages, which need not be completed at the same moment but may be executed at different times, and whose execution order need not be sequential: they may be executed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.

In one embodiment, as shown in FIG. 5, a solid rocket engine ignition process model correction device is provided, comprising a data processing module 402, a model building module 404, a model pre-training module 406, and a prediction result generation module 408, wherein:

The data processing module 402 is used to acquire pressure sequence data, normalize them to obtain a model data set, and divide the model data set into a training set and a test set.

The model building module 404 is used to build the pre-trained correction model of the solid rocket engine ignition process.

The model pre-training module 406 is used to train the pre-trained correction model with the training set and the pre-constructed loss functions to obtain the trained solid rocket engine ignition process correction model. The pre-trained correction model extracts a deep convolutional feature sequence from the input training set; after activation, the sequence undergoes feature extraction via positive-order and reverse-order inputs, and the results are concatenated and fused into a new feature sequence; the new feature sequence is passed through the attention mechanism to generate an attention vector, which is regularized; the regularized attention vector is integrated to generate the final prediction output; and the pre-constructed loss functions guide and constrain the prediction process.

The prediction result generation module 408 is used to input the test set into the solid rocket engine ignition process correction model to obtain the ignition process prediction result.

For the specific limitations of the solid rocket engine ignition process model correction device, reference may be made to the limitations of the corresponding method above, which are not repeated here. Each module of the device may be implemented in whole or in part by software, hardware, or a combination thereof. The modules may be embedded in, or independent of, a processor of a computer device in hardware form, or stored in a memory of the computer device in software form, so that the processor can invoke and execute the operations corresponding to each module.

In one embodiment, a computer device is provided, which may be a server whose internal structure may be as shown in FIG. 6. The computer device comprises a processor, a memory, a network interface, and a database connected through a system bus. The processor provides computing and control capabilities. The memory comprises a non-volatile storage medium and an internal memory; the non-volatile storage medium stores an operating system, a computer program, and a database, and the internal memory provides an environment for running the operating system and the computer program. The database stores the solid rocket engine ignition process model correction data. The network interface communicates with external terminals over a network connection. When executed by the processor, the computer program implements a solid rocket engine ignition process model correction method.

Those skilled in the art will understand that the structure shown in FIG. 6 is only a block diagram of part of the structure related to the solution of the present invention and does not limit the computer devices to which the solution may be applied; a specific computer device may include more or fewer components than shown, combine certain components, or arrange the components differently.

In one embodiment, a computer device is provided, comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, implements the following steps:

Step 202: acquire pressure sequence data, normalize them to obtain a model data set, and divide the model data set into a training set and a test set.

Step 204: construct a pre-trained correction model of the solid rocket engine ignition process.

Step 206: train the pre-trained correction model with the training set and the pre-constructed loss functions to obtain the trained solid rocket engine ignition process correction model. The pre-trained correction model extracts a deep convolutional feature sequence from the input training set; after activation, the sequence undergoes feature extraction via positive-order and reverse-order inputs, and the results are concatenated and fused into a new feature sequence; the new feature sequence is passed through the attention mechanism to generate an attention vector, which is regularized; the regularized attention vector is integrated to generate the final prediction output; and the pre-constructed loss functions guide and constrain the prediction process.

Step 208: input the test set into the solid rocket engine ignition process correction model to obtain the ignition process prediction result.

Those of ordinary skill in the art will understand that all or part of the processes in the above method embodiments may be completed by instructing the relevant hardware through a computer program, which may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the above method embodiments. Any reference to memory, storage, a database, or other media used in the embodiments of the present invention may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or an external cache. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double-data-rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).

The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations are described; however, any combination of these technical features that involves no contradiction should be considered within the scope of this specification.

The above embodiments express only several implementations of the present invention, and their description is relatively specific and detailed, but they are not to be construed as limiting the scope of the invention. It should be noted that those of ordinary skill in the art may make several modifications and improvements without departing from the concept of the present invention, all of which fall within its scope of protection. The scope of protection of the present invention shall therefore be subject to the appended claims.

Claims (9)

1.一种固体火箭发动机点火过程模型修正方法,其特征在于,所述方法包括:1. A solid rocket engine ignition process model correction method, characterized in that the method comprises: 获取压强序列数据,对所述压强序列数据进行归一化处理后,得到模型数据集,将所述模型数据集划分为训练集和测试集;Acquire pressure sequence data, perform normalization processing on the pressure sequence data to obtain a model data set, and divide the model data set into a training set and a test set; 构建固体火箭发动机点火过程的预训练修正模型;Construct a pre-trained correction model for the solid rocket motor ignition process; 通过所述训练集及预先构建的损失函数对所述预训练修正模型进行训练,得到训练好的固体火箭发动机点火过程修正模型;其中,预训练修正模型对输入的所述训练集进行深度卷积特征序列提取,得到深度卷积特征序列;所述深度卷积特征序列激活后,分别通过正序输入和逆序输入进行特征提取并进行拼接融合,得到新的特征序列;将所述新的特征序列通过注意力机制生成注意力向量,并对所述注意力向量进行正则化;将正则化的所述注意力向量进行整合后,以生成最终的预测输出;同时,通过预先构建的损失函数对生成的预测结果过程进行引导和约束;The pre-trained correction model is trained by the training set and the pre-constructed loss function to obtain a trained solid rocket engine ignition process correction model; wherein the pre-trained correction model performs deep convolution feature sequence extraction on the input training set to obtain a deep convolution feature sequence; after the deep convolution feature sequence is activated, feature extraction is performed through positive order input and reverse order input respectively and splicing and fusion are performed to obtain a new feature sequence; the new feature sequence is used to generate an attention vector through an attention mechanism, and the attention vector is regularized; the regularized attention vector is integrated to generate a final prediction output; at the same time, the generated prediction result process is guided and constrained by the pre-constructed loss function; 将所述测试集输入所述固体火箭发动机点火过程修正模型,得到固体火箭发动机点火过程预测结果。The test set is input into the solid rocket engine ignition process correction model to obtain a solid rocket engine ignition process prediction result. 2.根据权利要求1所述的固体火箭发动机点火过程模型修正方法,其特征在于,所述预训练修正模型包括深度卷积模块、双向LSTM模块、注意力模块、正则化模块及全连接层;2. 
The solid rocket engine ignition process model correction method according to claim 1, characterized in that the pre-training correction model includes a deep convolution module, a bidirectional LSTM module, an attention module, a regularization module and a fully connected layer; 通过所述深度卷积模块对输入的所述训练集进行深度卷积特征序列提取,得到深度卷积特征序列,并对所述深度卷积特征序列激活;Performing deep convolution feature sequence extraction on the input training set through the deep convolution module to obtain a deep convolution feature sequence, and activating the deep convolution feature sequence; 通过所述双向LSTM模块对激活后的所述深度卷积特征序列分别进行正序输入和逆序输入,并在特征提取后进行拼接融合,得到新的特征序列;The activated deep convolution feature sequence is input in forward order and reverse order respectively through the bidirectional LSTM module, and concatenated and fused after feature extraction to obtain a new feature sequence; 通过所述注意力模块对所述新的特征序列进行计算,得到注意力向量;The new feature sequence is calculated by the attention module to obtain an attention vector; 通过所述正则化模块对所述注意力向量进行正则化;Regularizing the attention vector by the regularization module; 通过所述全连接层对正则化后的所述注意力向量进行整合,以生成最终的预测输出。The regularized attention vector is integrated through the fully connected layer to generate the final prediction output. 3.根据权利要求2所述的固体火箭发动机点火过程模型修正方法,其特征在于,通过所述深度卷积模块的计算公式表示为:3. The solid rocket engine ignition process model correction method according to claim 2 is characterized in that the calculation formula of the deep convolution module is expressed as: ; 式中,表示深度卷积特征序列;/>表示权重;/>表示输入的序列数据;/>表示偏置;其中,/>,/>In the formula, Represents a deep convolutional feature sequence; /> Indicates weight; /> Represents the input sequence data; /> represents the bias; where, /> ,/> . 4.根据权利要求3所述的固体火箭发动机点火过程模型修正方法,其特征在于,所述深度卷积模块包括5层卷积结构。4. The solid rocket engine ignition process model correction method according to claim 3 is characterized in that the deep convolution module includes a 5-layer convolution structure. 5.根据权利要求2所述的固体火箭发动机点火过程模型修正方法,其特征在于,通过所述双向LSTM模块的计算公式表示为:5. 
The solid rocket engine ignition process model correction method according to claim 2, characterized in that the calculation formula of the bidirectional LSTM module is expressed as: ; 式中,表示隐藏层状态;/>表示正序输入;/>表示逆序输入;/>和/>分别表示逆序隐藏层和逆序隐藏层的权重参数;/>表示隐藏层的偏置参数;/>表示激活函数的特征序列。In the formula, Represents the hidden layer state; /> Indicates positive sequence input; /> Indicates reverse order input; /> and/> Respectively represent the weight parameters of the reverse hidden layer and the reverse hidden layer; /> Represents the bias parameter of the hidden layer; /> A sequence of features representing the activation function. 6.根据权利要求2所述的固体火箭发动机点火过程模型修正方法,其特征在于,所述注意力模块的计算公式表示为:6. The solid rocket engine ignition process model correction method according to claim 2, characterized in that the calculation formula of the attention module is expressed as: ; 式中,表示注意力向量;/>表示隐藏层状态;/>表示输入特征的权重;/>表示输入特征的偏置。In the formula, Represents the attention vector; /> Represents the hidden layer state; /> Represents the weight of the input feature; /> Represents the bias of the input feature. 7.根据权利要求1或2所述的固体火箭发动机点火过程模型修正方法,其特征在于,预先构建的损失函数包括平均绝对误差、均方误差、均方根误差、确定系数及平均绝对百分比误差。7. The solid rocket engine ignition process model correction method according to claim 1 or 2 is characterized in that the pre-constructed loss function includes mean absolute error, mean square error, root mean square error, determination coefficient and mean absolute percentage error. 8.一种固体火箭发动机点火过程模型修正装置,其特征在于,所述装置包括:8. 
8. A solid rocket engine ignition process model correction device, characterized in that the device comprises: a data processing module, used for acquiring pressure sequence data, performing normalization processing on the pressure sequence data to obtain a model data set, and dividing the model data set into a training set and a test set; a model building module, used for building a pre-trained correction model of the solid rocket engine ignition process; a model pre-training module, used for training the pre-trained correction model through the training set and a pre-constructed loss function to obtain a trained solid rocket engine ignition process correction model; wherein the pre-trained correction model performs deep convolutional feature extraction on the input training set to obtain a deep convolutional feature sequence; after the deep convolutional feature sequence is activated, features are extracted from it through forward-order input and reverse-order input respectively, and are concatenated and fused to obtain a new feature sequence; the new feature sequence is passed through an attention mechanism to generate an attention vector, and the attention vector is regularized; the regularized attention vector is integrated to generate the final prediction output; meanwhile, the prediction process is guided and constrained by the pre-constructed loss function; and a prediction result generation module, used for inputting the test set into the solid rocket engine ignition process correction model to obtain a solid rocket engine ignition process prediction result. 9. A computer device, comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method according to any one of claims 1 to 7 when executing the computer program.
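The module pipeline recited in claim 8 (normalize the pressure sequence, split it into training and test sets, extract activated features, run forward-order and reverse-order passes and concatenate them, then integrate via attention weights) can be illustrated with a minimal NumPy sketch. This shows the data flow only: the min-max scaling, the 80/20 split, the valid-mode convolution, the running-mean stand-in for the bidirectional recurrent pass, and the additive attention scoring are all illustrative assumptions, not details disclosed by the patent.

```python
import numpy as np

def normalize(pressure):
    """Min-max normalize a pressure sequence to [0, 1] (scaling choice assumed)."""
    p = np.asarray(pressure, dtype=float)
    return (p - p.min()) / (p.max() - p.min())

def split(data, train_frac=0.8):
    """Divide samples into a training set and a test set (ratio assumed)."""
    n = int(len(data) * train_frac)
    return data[:n], data[n:]

def conv_features(x, kernel):
    """Stand-in for 'deep convolutional feature' extraction:
    valid-mode 1-D convolution followed by ReLU activation."""
    out = np.convolve(x, kernel, mode="valid")
    return np.maximum(out, 0.0)

def bidirectional_features(seq):
    """Forward-order and reverse-order passes over the sequence,
    concatenated ('spliced and fused') along the feature axis.
    A running mean stands in for a recurrent cell here."""
    t = np.arange(len(seq)) + 1
    fwd = np.cumsum(seq) / t                      # forward-order pass
    bwd = (np.cumsum(seq[::-1]) / t)[::-1]        # reverse-order pass
    return np.stack([fwd, bwd], axis=-1)          # shape (T, 2)

def attention(feats):
    """Softmax attention over time steps; the weighted sum is the
    'integrated' vector that would feed the final prediction layer."""
    scores = feats.sum(axis=-1)                   # simple additive scoring
    w = np.exp(scores - scores.max())
    w /= w.sum()                                  # weights sum to 1
    return w @ feats                              # integrated vector, shape (2,)
```

A usage pass chains the pieces in the order the claim lists them: `attention(bidirectional_features(conv_features(normalize(p), k)))` for a pressure trace `p` and an assumed kernel `k`.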
CN202410569955.2A 2024-05-09 2024-05-09 Solid rocket engine ignition process model correction method, device and equipment Active CN118153459B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410569955.2A CN118153459B (en) 2024-05-09 2024-05-09 Solid rocket engine ignition process model correction method, device and equipment


Publications (2)

Publication Number Publication Date
CN118153459A true CN118153459A (en) 2024-06-07
CN118153459B CN118153459B (en) 2024-08-06

Family

ID=91285455

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410569955.2A Active CN118153459B (en) 2024-05-09 2024-05-09 Solid rocket engine ignition process model correction method, device and equipment

Country Status (1)

Country Link
CN (1) CN118153459B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116481630A (en) * 2023-04-03 2023-07-25 北京科技大学 A Reconstruction Method of Jet Transient Sound Field Based on Equivalent Source and Convolutional Network
US20240054329A1 (en) * 2022-08-15 2024-02-15 Arizona Board Of Regents On Behalf Of Arizona State University Systems and methods for a bayesian spatiotemporal graph transformer network for multi-aircraft trajectory prediction
CN117763933A (en) * 2023-02-28 2024-03-26 沈阳航空航天大学 Solid rocket engine time sequence parameter prediction method and prediction system based on deep learning
WO2024087129A1 (en) * 2022-10-24 2024-05-02 大连理工大学 Generative adversarial multi-head attention neural network self-learning method for aero-engine data reconstruction


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Ivan Vasilev: "Python Deep Learning: Model Methods and Implementation" (Intelligent Systems and Technology Series), China Machine Press, 30 September 2021, page 264 *
AI信仰者: "Predicting time-series data with CNN+LSTM+Attention", 《HTTPS://WWW.JIANSHU.COM/P/3AABEB7A128B》, 22 February 2023 (2023-02-22), pages 1 - 4 *
Nie Jiao; Wu Jianjun: "Research on a fault prediction method for liquid rocket engines based on error prediction and correction", Journal of Propulsion Technology, no. 08, 28 July 2016 (2016-07-28), pages 172 - 181 *
Chen Yu, Lei Chun: "Application and Development of Artificial Intelligence in Educational Governance", Huazhong University of Science and Technology Press, 31 December 2021, page 117 *

Also Published As

Publication number Publication date
CN118153459B (en) 2024-08-06

Similar Documents

Publication Publication Date Title
KR102532658B1 (en) Neural architecture search
US20190026639A1 (en) Neural architecture search for convolutional neural networks
US20190278835A1 (en) Abstractive summarization of long documents using deep learning
CN110490323A (en) Network model compression method, device, storage medium and computer equipment
CN109523014B (en) News comment automatic generation method and system based on generative confrontation network model
CN112949089B (en) A Recognition Method of Aquifer Structure Inversion Based on Discrete Convolutional Residual Network
CN113221645B (en) Target model training method, face image generating method and related device
CN108879732B (en) Power system transient stability assessment method and device
CN118094343B (en) Attention mechanism-based LSTM machine residual service life prediction method
JPWO2020188971A1 (en) Feature estimation method, feature estimation device, program and recording medium
CN111428869A (en) Model generation method and device, computer equipment and storage medium
CN109885830A (en) Sentence interpretation method, device, computer equipment
Jiang et al. Leanreasoner: Boosting complex logical reasoning with lean
CN116722548A (en) A photovoltaic power generation prediction method and related equipment based on time series model
CN115017819B (en) A method and device for predicting remaining service life of an engine based on a hybrid model
CN118153459A (en) Solid rocket engine ignition process model correction method, device and equipment
CN118644486A (en) Crowd counting method and system based on dual-path multi-scale fusion network
CN111475931A (en) PCI rapid analysis method and device based on neural network, electronic equipment and storage medium
CN116822120A (en) Dynamic evolution method and system of gas turbine mechanism simulation model
CN111008455B (en) Method and system for generating mid-term wind power scenarios
CN115879615A (en) Method, device, equipment and medium for predicting output parameters of photovoltaic module
US12361305B2 (en) Neural architecture search for convolutional neural networks
CN117634652B (en) Interpretable prediction method of dam deformation based on machine learning
CN115688229B (en) Method for creating most unfavorable defect mode of reticulated shell structure based on deep learning
CN118939954B (en) Data Missing Reconstruction Method Based on Self-evolving Perturbation-aware Network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant