CN113269363B - Trajectory prediction method, system, equipment and medium of hypersonic aircraft - Google Patents
Trajectory prediction method, system, equipment and medium of hypersonic aircraft
- Publication number
- CN113269363B CN113269363B CN202110603010.4A CN202110603010A CN113269363B CN 113269363 B CN113269363 B CN 113269363B CN 202110603010 A CN202110603010 A CN 202110603010A CN 113269363 B CN113269363 B CN 113269363B
- Authority
- CN
- China
- Prior art keywords
- layer
- track
- model
- prediction
- decoder
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
- 238000000034 method Methods 0.000 title claims abstract description 63
- 239000013598 vector Substances 0.000 claims abstract description 45
- 238000010606 normalization Methods 0.000 claims abstract description 26
- 238000012545 processing Methods 0.000 claims abstract description 21
- 238000012549 training Methods 0.000 claims description 34
- 230000006870 function Effects 0.000 claims description 32
- 238000004590 computer program Methods 0.000 claims description 15
- 230000004913 activation Effects 0.000 claims description 14
- 230000008569 process Effects 0.000 claims description 13
- 230000014509 gene expression Effects 0.000 claims description 8
- 238000003860 storage Methods 0.000 claims description 5
- 238000011478 gradient descent method Methods 0.000 claims description 4
- 238000007781 pre-processing Methods 0.000 claims description 4
- 238000013135 deep learning Methods 0.000 abstract description 9
- 239000010410 layer Substances 0.000 description 47
- 238000010586 diagram Methods 0.000 description 17
- 238000004422 calculation algorithm Methods 0.000 description 13
- 230000007246 mechanism Effects 0.000 description 11
- 210000002569 neuron Anatomy 0.000 description 8
- 238000012360 testing method Methods 0.000 description 8
- 238000013528 artificial neural network Methods 0.000 description 7
- 238000004364 calculation method Methods 0.000 description 5
- 230000007123 defense Effects 0.000 description 5
- 238000004088 simulation Methods 0.000 description 5
- 238000009826 distribution Methods 0.000 description 4
- 230000000694 effects Effects 0.000 description 4
- 238000011156 evaluation Methods 0.000 description 4
- 230000003044 adaptive effect Effects 0.000 description 3
- 238000001914 filtration Methods 0.000 description 3
- 239000011159 matrix material Substances 0.000 description 3
- 238000005457 optimization Methods 0.000 description 3
- 230000000306 recurrent effect Effects 0.000 description 3
- 238000004458 analytical method Methods 0.000 description 2
- 230000008034 disappearance Effects 0.000 description 2
- 238000006073 displacement reaction Methods 0.000 description 2
- 238000002474 experimental method Methods 0.000 description 2
- 238000004880 explosion Methods 0.000 description 2
- 230000004927 fusion Effects 0.000 description 2
- 230000006872 improvement Effects 0.000 description 2
- 230000009191 jumping Effects 0.000 description 2
- 238000012986 modification Methods 0.000 description 2
- 230000004048 modification Effects 0.000 description 2
- 238000003062 neural network model Methods 0.000 description 2
- 230000001133 acceleration Effects 0.000 description 1
- 230000009286 beneficial effect Effects 0.000 description 1
- 230000008901 benefit Effects 0.000 description 1
- 230000008859 change Effects 0.000 description 1
- 238000010276 construction Methods 0.000 description 1
- 230000010485 coping Effects 0.000 description 1
- 230000007423 decrease Effects 0.000 description 1
- 238000013136 deep learning model Methods 0.000 description 1
- 230000007547 defect Effects 0.000 description 1
- 238000013461 design Methods 0.000 description 1
- 238000005516 engineering process Methods 0.000 description 1
- 238000013401 experimental design Methods 0.000 description 1
- 238000013213 extrapolation Methods 0.000 description 1
- 230000010354 integration Effects 0.000 description 1
- 230000002452 interceptive effect Effects 0.000 description 1
- 238000004519 manufacturing process Methods 0.000 description 1
- 230000003287 optical effect Effects 0.000 description 1
- 230000000644 propagated effect Effects 0.000 description 1
- 230000009467 reduction Effects 0.000 description 1
- 238000011160 research Methods 0.000 description 1
- 239000002356 single layer Substances 0.000 description 1
- 230000007704 transition Effects 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/04—Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/211—Selection of the most significant subset of features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/044—Recurrent networks, e.g. Hopfield networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/048—Activation functions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T90/00—Enabling technologies or technologies with a potential or indirect contribution to GHG emissions mitigation
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Computing Systems (AREA)
- Molecular Biology (AREA)
- Computational Linguistics (AREA)
- Biophysics (AREA)
- Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Business, Economics & Management (AREA)
- Economics (AREA)
- Strategic Management (AREA)
- Human Resources & Organizations (AREA)
- Operations Research (AREA)
- Game Theory and Decision Science (AREA)
- Marketing (AREA)
- General Business, Economics & Management (AREA)
- Entrepreneurship & Innovation (AREA)
- Development Economics (AREA)
- Quality & Reliability (AREA)
- Tourism & Hospitality (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
Abstract
The invention discloses a track prediction method, a system, equipment and a medium of a hypersonic aircraft, wherein the method comprises the following steps: acquiring a historical track sequence of the hypersonic aircraft to be subjected to track prediction, and carrying out normalization processing to obtain a normalized historical track sequence; the historical track sequence comprises feature vectors at a plurality of preset moments, and the feature vector at each moment comprises a plurality of preset features; inputting the normalized historical track sequence into a pre-trained track prediction model to obtain a model output result; performing inverse normalization processing on the model output result to obtain a final track prediction result; wherein the trajectory prediction model comprises: an input layer; a codec network, comprising: an encoder, an attention module, and a decoder; and an output layer. The method is based on deep learning and can ensure the real-time performance of prediction.
Description
Technical Field
The invention belongs to the technical field of aircraft guidance and control, and particularly relates to a track prediction method, a track prediction system, track prediction equipment and track prediction medium for a hypersonic aircraft.
Background
A hypersonic aircraft is an aircraft whose flight Mach number exceeds 5. The hypersonic gliding aircraft is one type of hypersonic aircraft; in its glide phase it relies solely on aerodynamic force to fly over long ranges in near space, and it is characterized by hypersonic speed, high maneuverability and long range. It is an important class of weaponry for future aerospace operations, penetrating missile defense systems and achieving rapid, precise global strike. With the continuous progress of missile defense technology, multi-layer, multi-stage ballistic missile defense systems have gradually been established around the world. The penetration capability of conventional ballistic missiles is gradually declining, and their strategic deterrence continues to weaken. Hypersonic aircraft are expected to penetrate missile defense systems efficiently and achieve rapid global strike. In order to enhance the ability of a missile defense system to counter hypersonic gliding aircraft, a trajectory prediction method for hypersonic gliding aircraft must be established.
Existing track prediction algorithms for hypersonic gliding aircraft are mainly based on single-model or multi-model maneuvering-target track prediction: a state estimate is obtained through a tracking algorithm, and then extrapolation methods such as analytical solution or numerical integration are used to carry out trajectory prediction.
Most single-model tracking-and-prediction algorithms for hypersonic aircraft are traditional target track prediction algorithms based on the Kalman Filter (KF), the Extended Kalman Filter (EKF), the Unscented Kalman Filter (UKF) and the Cubature Kalman Filter (CKF); however, these algorithms suffer from technical problems such as inaccurate acceleration estimation for hypersonic gliding aircraft and difficulty in long-term tracking and prediction. Multi-model fusion prediction mainly includes Interacting Multiple Model (IMM) tracking prediction; compared with single-model prediction, multi-model fusion prediction can identify maneuvering modes more comprehensively and thus predict maneuvering targets more accurately. The IMM algorithm designs several different target motion models, each used to match the motion state of the target at a different time; a Markov chain represents the state transition probabilities among the models, and the set of models with Markov coefficients expresses the target's maneuvering modes, which remedies the shortcomings of single-model target tracking and solves the problem of model switching when the target maneuvers. Although the IMM algorithm theoretically guarantees the estimation accuracy of multi-model tracking and realizes adaptation of the model set, the maneuvering models must be determined by fitting empirical values, the thresholds often have to be changed when the model set is switched, and, in order to cover as many possible maneuvering models as possible for a highly maneuverable hypersonic gliding aircraft, a very large model set must be designed. This makes the IMM computation time-consuming and prone to model competition, so neither real-time performance nor tracking-and-prediction performance is guaranteed. Therefore, tracking and predicting hypersonic gliding aircraft with traditional methods faces significant difficulties and challenges.
Disclosure of Invention
The invention aims to provide a track prediction method, a track prediction system, track prediction equipment and a track prediction medium for a hypersonic aircraft, so as to solve one or more of the above technical problems. The method is based on deep learning and can ensure the real-time performance of prediction.
In order to achieve the above purpose, the invention adopts the following technical scheme:
the invention discloses a track prediction method of a hypersonic aircraft, which comprises the following steps of:
acquiring a historical track sequence of the hypersonic aircraft to be subjected to track prediction, and carrying out normalization processing to obtain a normalized historical track sequence; the history track sequence comprises feature vectors at a plurality of preset moments, and the feature vector at each moment comprises a plurality of preset features;
inputting the normalized historical track sequence into a pre-trained track prediction model to obtain a model output result;
performing inverse normalization processing on the model output result to obtain a final track prediction result;
wherein the trajectory prediction model comprises:
the input layer is used for inputting the normalized historical track sequence, carrying out feature dimension expansion on the feature vector at each moment in the historical track sequence, and learning the relation among different features to obtain a dimension-expanded learned historical track sequence;
a codec network, comprising: an encoder, an attention module, and a decoder;
the encoder is used for inputting the history track sequence after the dimension expansion learning and extracting time dimension characteristics to obtain a context vector;
the attention module is used for obtaining the output of each moment of the encoder and carrying out weighted average to obtain an attention value;
the decoder is used for inputting the context vector and the attention value, and decoding the context vector by combining the attention value to obtain an initial model prediction result;
and the output layer is used for carrying out dimension adjustment on the initial model prediction result according to a preset dimension output requirement to obtain a model output result.
The invention further improves that the obtaining step of the pre-trained track prediction model specifically comprises the following steps:
acquiring historical track sequences of a plurality of hypersonic aircrafts and carrying out normalization processing to form a training sample set;
training the track prediction model based on a gradient descent method by adopting the training sample set to obtain a trained track prediction model;
wherein, dropout layers are added in each layer of the encoder and the decoder, and Dropout is used during training;
during training, a Teacher Forcing strategy is added when the decoder decodes, and the true value output at the previous moment is used as the input of the decoder.
The invention is further improved in that the expression of the normalization process is:
X_scaler = (X - X_mean) / X_std,
wherein X_mean is the mean value of the sample data, X_std is the standard deviation of the sample data, X is the original data, and X_scaler is the normalized data.
The invention is further improved in that the expression of the inverse normalization process is as follows:
Y = Y_scaler · X_std + X_mean,
wherein Y_scaler represents the model output result, and Y is the final track prediction result.
A further improvement of the present invention is that both the encoder and decoder use a three-layer gated loop unit.
A further improvement of the present invention is that, in the attention module, the expression for calculating the attention distribution is:
α_(t,i) = softmax(s(x_i, h_t)) = exp(s(x_i, h_t)) / Σ_j exp(s(x_j, h_t)), with the scoring function s(x_i, h_t) = V^T · tanh(W_x · x_i + W_h · h_t + b),
wherein s(·,·) represents the attention scoring function; x_i represents the output of the encoder at the i-th moment; h_t represents the hidden state of the decoder at the t-th moment; and W_x, W_h, b, V are parameters to be learned;
the expression for calculating the obtained attention value is:
att = Σ_i α_(t,i) · x_i.
the invention further improves that in the encoding and decoding network, a feedforward layer and an Add & Norm layer are also arranged between the context vector C output by the encoder and the context vector C' input by the decoder;
the feedforward layer is a two-layer full-connection layer, the activation function of the first layer is ReLu, and the second layer does not use the activation function;
the Add & Norm layer comprises two parts, namely Add and Norm; where Add is the residual linkage, norm refers to Batch Normalization.
The invention relates to a track prediction system of hypersonic aircraft, comprising:
the preprocessing module is used for acquiring a history track sequence of the hypersonic aircraft to be subjected to track prediction and carrying out normalization processing to acquire a history track sequence after normalization processing; the history track sequence comprises feature vectors at a plurality of preset moments, and the feature vector at each moment comprises a plurality of preset features;
the model output result acquisition module is used for inputting the normalized historical track sequence into a pre-trained track prediction model to obtain a model output result;
the track prediction result acquisition module is used for carrying out inverse normalization processing on the model output result to obtain a final track prediction result;
wherein the trajectory prediction model comprises:
the input layer is used for inputting the normalized historical track sequence, carrying out feature dimension expansion on the feature vector at each moment in the historical track sequence, and learning the relation among different features to obtain a dimension-expanded learned historical track sequence;
a codec network, comprising: an encoder, an attention module, and a decoder;
the encoder is used for inputting the history track sequence after the dimension expansion learning and extracting time dimension characteristics to obtain a context vector;
the attention module is used for obtaining the output of each moment of the encoder and carrying out weighted average to obtain an attention value;
the decoder is used for inputting the context vector and the attention value, and decoding the context vector by combining the attention value to obtain an initial model prediction result;
and the output layer is used for carrying out dimension adjustment on the initial model prediction result to obtain a model output result.
An electronic apparatus of the present invention includes: a processor; and a memory for storing computer program instructions; characterized in that,
when the computer program instructions are loaded and run by the processor, the processor executes the track prediction method of the hypersonic aircraft.
The invention provides a computer readable storage medium, which stores computer program instructions, and is characterized in that when the computer program instructions are loaded and run by a processor, the processor executes the track prediction method of the hypersonic aircraft.
Compared with the prior art, the invention has the following beneficial effects:
the invention provides a hypersonic gliding type aircraft track prediction method based on deep learning, which provides a new solution idea for the application of the deep learning to the hypersonic type aircraft track prediction; the method can ensure the real-time performance of prediction, train the model on line, and directly use the model in the prediction. The method adopts the encoder and decoder framework in the deep learning to predict the track of the hypersonic aircraft, adds a attention mechanism into the model, improves the utilization of the model to input information, and can predict the track for a certain period of time after detecting the track sequence of the hypersonic aircraft within a preset input time length in actual use, wherein the error obtained by predicting the track is in kilometer level.
The invention provides a new idea: deep learning has entered practical use in many areas, but research on applying it to hypersonic aircraft trajectory prediction is very limited; the invention provides a novel method for solving the hypersonic aircraft trajectory prediction problem.
The method of the invention has the advantage of low time consumption: the track prediction method for the hypersonic gliding aircraft mainly adopts an offline training mode; the model is trained offline, and the prediction result can be obtained online directly through matrix calculation, without iteration.
The method requires little manual involvement: the method provided by the invention does not require an in-depth study of the motion characteristics of the hypersonic gliding aircraft; a rough understanding is sufficient.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly described below; it will be apparent to those of ordinary skill in the art that the drawings in the following description are of some embodiments of the invention and that other drawings may be derived from them without undue effort.
FIG. 1 is a schematic diagram of an encoder/decoder configuration in accordance with an embodiment of the present invention;
FIG. 2 is a schematic diagram of the attention mechanism flow in an embodiment of the present invention;
FIG. 3 is a schematic diagram of a track forecast process according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a structure using Teacher Forcing in an embodiment of the invention;
FIG. 5 is a schematic illustration of a jump glide path in accordance with an embodiment of the present invention;
FIG. 6 is a graph showing the change of the earth's center distance with time according to the embodiment of the present invention;
FIG. 7 is a schematic view of an X-Y-Z projection direction velocity profile in an embodiment of the invention;
FIG. 8 is a schematic diagram of simulation results in an embodiment of the present invention; fig. 8 (a) is a schematic diagram of an X-direction prediction result, fig. 8 (b) is a schematic diagram of a Y-direction prediction result, fig. 8 (c) is a schematic diagram of a Z-direction prediction result, fig. 8 (d) is a schematic diagram of a three-dimensional prediction error, and fig. 8 (e) is a schematic diagram of a geocentric distance prediction result;
FIG. 9 is a schematic representation of three-dimensional trajectory prediction in an embodiment of the present invention;
FIG. 10 is an enlarged schematic diagram of three-dimensional trajectory prediction in an embodiment of the present invention.
Detailed Description
In order to make the purposes, technical effects and technical solutions of the embodiments of the present invention more clear, the technical solutions of the embodiments of the present invention are clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention; it will be apparent that the described embodiments are some of the embodiments of the present invention. Other embodiments, which may be made by those of ordinary skill in the art based on the disclosed embodiments without undue burden, are within the scope of the present invention.
At present, deep learning methods have rarely been applied to track prediction of hypersonic gliding aircraft; existing research mainly focuses on predicting the tracks of pedestrians, vehicles, ships and the like. Track prediction based on deep learning does not require a complete grasp of the motion characteristics of the target; if some information about the target's motion mechanism is available, it helps to build the deep learning model. After the model is built, only a large amount of data is needed for training: the model automatically learns the relations between different features of the data and the motion mechanism hidden behind the data, and then performs track prediction. The training process of the model is carried out offline, and during prediction the model needs only simple calculation to obtain the final result, so the real-time performance of prediction can be ensured.
In the track prediction method for hypersonic gliding aircraft provided by the embodiment of the invention, an attention-based Seq2Seq track prediction model is designed. It mainly involves data preprocessing, the design of the model, and the training process, which are described as follows:
(1) Data preprocessing
The input data of the model is a historical track sequence of the aircraft, and the output is the track sequence for a certain future time period. A trajectory Traj is the time series of the observed state vectors of T track points, expressed as Traj = {k_1, k_2, ..., k_i, ..., k_T}. The state vector k_t at time t is the feature information, including position, velocity and geocentric distance, and can be expressed as k_t = [x_t, y_t, z_t, Vx_t, Vy_t, Vz_t, r], wherein x_t, y_t, z_t represent the three-dimensional position projection in the geocentric inertial coordinate system, Vx_t, Vy_t, Vz_t represent the velocity projection, and r represents the distance from the track point to the earth's center.
Because the network is trained with a gradient descent method, the input data need to be normalized; this prevents jitter during gradient descent and speeds up training. The data normalization method adopted in the embodiment of the invention is standardization, with the specific formula:
X_scaler = (X - X_mean) / X_std,
wherein X_mean is the mean value of the sample data, X_std is the standard deviation of the sample data, X is the original data, and X_scaler is the normalized data.
After the model produces a prediction, inverse normalization is needed to map the predicted result back to values in the real space, namely:
Y = Y_scaler · X_std + X_mean,
wherein Y_scaler represents the prediction result obtained by the model, and Y is the restored real sequence value.
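As an illustration of the preprocessing step, below is a minimal Python sketch of the standardization and inverse standardization described above, assuming the trajectory data are stored as a NumPy array of shape (num_points, 7); the function names are illustrative and do not come from the original disclosure.

```python
import numpy as np

def fit_scaler(data):
    """Compute per-feature mean and standard deviation from the training samples."""
    return data.mean(axis=0), data.std(axis=0)

def normalize(data, x_mean, x_std):
    """X_scaler = (X - X_mean) / X_std."""
    return (data - x_mean) / x_std

def denormalize(pred, x_mean, x_std):
    """Y = Y_scaler * X_std + X_mean, mapping model output back to real values."""
    return pred * x_std + x_mean

# Usage on a dummy trajectory with 7 features per time step
traj = np.random.rand(1500, 7)
mean, std = fit_scaler(traj)
traj_norm = normalize(traj, mean, std)
restored = denormalize(traj_norm, mean, std)   # recovers traj up to rounding
```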
(2) Seq2Seq model based on attention mechanism
This section introduces the overall structure of the attention-based Seq2Seq model and the function of each module in the embodiment of the invention. The overall framework adopts an encoder-decoder architecture, and the encoder and decoder use three layers of gated recurrent units (Gated Recurrent Unit, GRU) to extract the feature relations in the data and the relations between earlier and later parts of the sequence. An attention module is added between the encoder and the decoder; it alleviates the decoder's shortage of information during decoding and also helps it locate specific positions in the input sequence.
The GRU used in the embodiments of the invention is a typical structure in recurrent neural networks (Recurrent Neural Network, RNN). A simple RNN can theoretically establish dependencies between states over long time intervals, but because of gradient explosion or vanishing problems it can in practice learn only short-term dependencies, whereas a GRU can effectively alleviate the gradient vanishing and gradient explosion problems. The GRU model has only two gates, the update gate and the reset gate. The update gate mainly controls how much state information from the previous moment is brought into the current state; the larger the value of the update gate, the more information from the previous moment is retained. The reset gate controls how much information from the previous state is written into the current candidate set; the smaller the reset gate, the less information from the previous state is written.
The following formulas show how the GRU works:
r_t = σ(W_xr · x_t + W_hr · h_(t-1) + b_r),
z_t = σ(W_xz · x_t + W_hz · h_(t-1) + b_z),
g_t = tanh(W_xg · x_t + W_hg · (r_t ⊙ h_(t-1)) + b_g),
h_t = z_t ⊙ h_(t-1) + (1 - z_t) ⊙ g_t,
wherein W_xz, W_hz, W_xr, W_hr, W_xg, W_hg and b_z, b_r, b_g are parameters to be learned, σ denotes the sigmoid activation function, tanh denotes the hyperbolic tangent function, and ⊙ denotes the element-wise product of matrices or vectors.
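The gate equations above can be illustrated with a minimal NumPy sketch of a single GRU step; the weight shapes and dictionary keys are illustrative assumptions, and the embodiment itself would rely on a framework's built-in GRU layers rather than a hand-written cell.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x_t, h_prev, params):
    """One GRU update following the formulas above.

    x_t:    input vector at time t, shape (input_dim,)
    h_prev: previous hidden state, shape (hidden_dim,)
    params: dict with W_xz, W_hz, W_xr, W_hr, W_xg, W_hg and b_z, b_r, b_g
    """
    z_t = sigmoid(params["W_xz"] @ x_t + params["W_hz"] @ h_prev + params["b_z"])       # update gate
    r_t = sigmoid(params["W_xr"] @ x_t + params["W_hr"] @ h_prev + params["b_r"])       # reset gate
    g_t = np.tanh(params["W_xg"] @ x_t + params["W_hg"] @ (r_t * h_prev) + params["b_g"])  # candidate state
    h_t = z_t * h_prev + (1.0 - z_t) * g_t   # larger z_t keeps more of the previous state
    return h_t

# Tiny usage example with random weights
rng = np.random.default_rng(0)
input_dim, hidden_dim = 7, 4
p = {name: rng.standard_normal((hidden_dim, input_dim if name.startswith("W_x") else hidden_dim))
     for name in ["W_xz", "W_hz", "W_xr", "W_hr", "W_xg", "W_hg"]}
p.update({b: np.zeros(hidden_dim) for b in ["b_z", "b_r", "b_g"]})
h = gru_step(rng.standard_normal(input_dim), np.zeros(hidden_dim), p)
```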
An activation function is a function applied to a neuron; its main role is to map the neuron's input to its output. Activation functions play an important role in enabling neural network models to learn complex (nonlinear) models, because they introduce nonlinearity into the network. Without activation functions, a neural network is essentially equivalent to a single-layer network regardless of its number of layers; if an activation function is added at each layer, a multi-layer neural network can approximate any nonlinear function.
Common activation functions include the logistic function (sigmoid), the hyperbolic tangent function (tanh), and the rectified linear unit (ReLU); their expressions are, respectively:
sigmoid(x) = 1 / (1 + e^(-x)), tanh(x) = (e^x - e^(-x)) / (e^x + e^(-x)), ReLU(x) = max(0, x).
Referring to FIG. 1, the structure of the encoder and decoder according to an embodiment of the present invention is shown in FIG. 1. The encoder adopts three layers of GRUs and is mainly responsible for encoding the input sequence into a fixed-length context vector C. The context vector may be obtained directly from the final output hidden state of the encoder, or by transforming the output hidden states of all time steps. Here the hidden state of the final moment is used directly.
The context vector C is transformed into the decoder's input context vector C' by a feed-forward layer and an Add & Norm layer. The feed-forward layer is a two-layer fully connected layer; the activation function of the first layer is ReLU and the second layer uses no activation function. The corresponding formula is:
FeedForward(X) = max(0, X·W_1 + b_1)·W_2 + b_2,
wherein W_1, W_2, b_1, b_2 are parameters to be learned, X denotes the input, and the dimension of the final output matrix is consistent with the input X.
The Add & Norm layer consists of two parts, Add and Norm, and its calculation formula is:
BatchNorm(X + FeedForward(X)),
where Add is a residual connection, generally used to ease the training of multi-layer networks by letting the network focus only on the current difference; Norm is Batch Normalization, which normalizes the inputs of each layer of neurons to the same mean and variance, accelerating convergence.
The decoder also uses three layers of GRUs; its role is mainly to decode the context vector C' into an output sequence of variable length. The context vector C' serves as the initial hidden state of the decoder, and the initial input of the decoder is the last element of the encoder's input sequence. The output sequence is decoded step by step; each decoding step uses the output y_(t-1) of the previous moment together with the hidden state of the previous moment, namely:
y_t = f(y_(t-1), h_(t-1)).
in addition, to prevent overfitting, dropout layers are added to each layer of the encoder and decoder. The Dropout layer is mainly used for randomly disabling the number of neurons of the layer, so that the diversity of the model is increased, and overfitting can be prevented. The layer can only be used when the model is trained, and Dropout is not used in the test model so as to ensure the accuracy of the model.
The encoder-decoder architecture has two main drawbacks. The first is that the encoder must compress all the input information into a fixed-length context vector C, which has to contain all the information of the input data, and information can be lost when the input sequence is too long. The second is that such a structure cannot model the correspondence between the input sequence and the output sequence. Adding an attention model allows the decoder to access all the outputs generated by the encoder, thus overcoming these two drawbacks of the encoder-decoder.
In neural network learning, the more complex the model, the more parameters it has and the stronger its learning ability, but this also brings the problem of information overload. By introducing an attention mechanism that focuses on the information most critical to the current task among the many inputs and reduces attention to other information, the information-overload problem can be effectively alleviated, improving the efficiency and accuracy of task processing.
A time attention module is added between the encoder and the decoder. The attention module is mainly divided into two steps, first of all the attention distribution needs to be calculated on all the input information, and then the weighted average of the input information is calculated according to the attention distribution.
The attention distribution is calculated as:
α_(t,i) = softmax(s(x_i, h_t)) = exp(s(x_i, h_t)) / Σ_j exp(s(x_j, h_t)), with the scoring function s(x_i, h_t) = V^T · tanh(W_x · x_i + W_h · h_t + b),
wherein s(·,·) represents the attention scoring function; x_i represents the output of the encoder at the i-th moment; h_t represents the hidden state of the decoder at the t-th moment; and W_x, W_h, b, V are parameters to be learned.
Finally, the input information is summarized by weighted averaging to obtain the attention value:
att = Σ_i α_(t,i) · x_i.
referring to fig. 2, the overall process of calculating the Attention value is shown in fig. 2, and the Attention value is calculated and added to the input information of the decoder at the moment to obtain an output sequence by coacting with the output sequence at the last moment.
Referring to fig. 3, in the embodiment of the present invention, the hypersonic gliding style aircraft model algorithm specifically includes the following steps:
1) The input data are normalized so that they vary within a small range, which speeds up training; this also gives different features the same initial weight, making it easier for the model to learn the relations between features;
2) The processed data are fed into the input-layer network, whose main function is to learn the relations between different features and to expand the dimension of the input features;
3) The data from the input layer are sent directly into the attention-based Seq2Seq network to obtain an output sequence;
4) The sequence output by the Seq2Seq network is sent to the output layer to obtain the final predicted sequence; the main function of this layer is to adjust the dimension of the output sequence.
The input layer, the encoder-decoder, and the output layer together form the DeepHTP model.
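Assuming a trained DeepHTP-style network object is available (the model argument below is a placeholder), the four steps above can be chained into a single prediction routine; this is only a sketch of the data flow under assumed input/output shapes, not the code of the embodiment.

```python
import numpy as np

def predict_trajectory(model, history, x_mean, x_std, out_dim=4):
    """Steps 1-4: normalize, run input layer + Seq2Seq + output layer, then denormalize.

    model:   a trained network mapping (1, 200, 7) inputs to (1, 100, out_dim) outputs
    history: observed track of shape (200, 7)
    """
    norm_hist = (history - x_mean) / x_std                  # step 1: normalization
    norm_pred = model.predict(norm_hist[np.newaxis, ...])   # steps 2-4 inside the trained model
    # Inverse normalization; assumes the out_dim predicted features correspond to the
    # first out_dim input features (an illustrative assumption, not stated in the text).
    return norm_pred[0] * x_std[:out_dim] + x_mean[:out_dim]
```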
In the embodiment of the invention, the optimization algorithm and the evaluation indices are as follows. The optimization algorithm is the method used to adjust model parameters during learning; for neural network models, the optimization methods commonly adopted today are gradient-descent-based, mainly including Stochastic Gradient Descent (SGD), the Momentum method, the adaptive gradient algorithm (AdaGrad), AdaDelta, and the adaptive moment estimation method (Adam). Adam is an adaptive learning-rate method that dynamically adjusts the learning rate of each parameter using the first-moment and second-moment estimates of the gradient.
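For reference, a hedged sketch of how the Adam optimizer and MSE loss used by the embodiment (see Table 1 below) could be attached to a tf.keras model; the placeholder model is purely illustrative.

```python
import tensorflow as tf

# Adam dynamically adapts each parameter's learning rate from first/second moment estimates.
optimizer = tf.keras.optimizers.Adam(learning_rate=0.01)    # initial learning rate from Table 1

# Attaching it to any tf.keras model with the MSE loss used in the embodiment:
model = tf.keras.Sequential([tf.keras.layers.Dense(4)])     # placeholder model for illustration
model.compile(optimizer=optimizer, loss="mse")
# model.fit(train_x, train_y, batch_size=128, epochs=...)   # batch_size per Table 1
```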
The evaluation indices are used to evaluate the model during training and to measure the quality of the model. The invention uses the root mean square error (RMSE), the average displacement error of the predicted track points (ADE), the final displacement error (FDE), and the R2 coefficient of determination. The ADE is the average Euclidean distance between the predicted track and the real track over all time points in the prediction period. The FDE is the average Euclidean distance between the predicted track and the real track at the last time point of the prediction period. The smaller these error values, the closer the predicted track is to the real track. The R2 coefficient of determination measures the goodness of fit of the model; the larger its value, the better the fit. The formulas are as follows:
RMSE = sqrt( (1 / (N·T_pred)) · Σ_n Σ_i (l_i - l̂_i)² ),
ADE = (1 / (N·T_pred)) · Σ_n Σ_i sqrt( (l_i^x - l̂_i^x)² + (l_i^y - l̂_i^y)² + (l_i^z - l̂_i^z)² ),
FDE = (1 / N) · Σ_n sqrt( (l_Tpred^x - l̂_Tpred^x)² + (l_Tpred^y - l̂_Tpred^y)² + (l_Tpred^z - l̂_Tpred^z)² ),
R² = 1 - Σ_i (l_i - l̂_i)² / Σ_i (l_i - l̄)²,
wherein l_i represents the real trajectory sequence; l̂_i represents the predicted trajectory sequence; l̄ represents the average value of the real track; N represents the number of samples; T_pred represents the predicted sample time length; l_i^x, l_i^y, l_i^z represent the three-dimensional components of the real trajectory sequence; and l̂_i^x, l̂_i^y, l̂_i^z represent the three-dimensional components of the predicted trajectory sequence.
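The evaluation indices can be sketched in NumPy for a single predicted trajectory as follows; the averaging conventions follow the standard forms assumed in the reconstructed formulas above.

```python
import numpy as np

def rmse(true, pred):
    """Root mean square error over all predicted features and time steps."""
    return np.sqrt(np.mean((true - pred) ** 2))

def ade(true_xyz, pred_xyz):
    """Average Euclidean distance between predicted and real 3-D positions over the prediction period."""
    return np.mean(np.linalg.norm(true_xyz - pred_xyz, axis=-1))

def fde(true_xyz, pred_xyz):
    """Euclidean distance between predicted and real 3-D positions at the final time point."""
    return np.linalg.norm(true_xyz[-1] - pred_xyz[-1])

def r2_score(true, pred):
    """R2 coefficient of determination; closer to 1 means a better fit."""
    ss_res = np.sum((true - pred) ** 2)
    ss_tot = np.sum((true - true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Toy usage over a 100-step prediction of 3-D positions
t = np.linspace(0, 1, 100)[:, None]
true_xyz = np.hstack([t, t ** 2, np.sin(t)])
pred_xyz = true_xyz + 0.01 * np.random.default_rng(2).standard_normal(true_xyz.shape)
print(rmse(true_xyz, pred_xyz), ade(true_xyz, pred_xyz), fde(true_xyz, pred_xyz), r2_score(true_xyz, pred_xyz))
```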
Referring to FIG. 4, in the embodiment of the present invention, when the decoder decodes, each input is the output decoded at the previous moment; if the first track point of the decoding is wrong, the whole sequence is biased. To address this, a Teacher Forcing strategy is added: the decoder input no longer uses the decoded output of the previous moment but instead the true value output at the previous moment. In FIG. 4, y1_hat is a predicted value and y1 is a true value.
In this way, the error of the previous moment is prevented from propagating to the current moment, and parameter convergence is accelerated. Because the method uses true values during decoding, this strategy can only be used in the training phase and cannot be used in the testing phase.
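A minimal sketch of the difference between Teacher Forcing and free-running decoding for a step-by-step decoder; decoder_step is a placeholder for one GRU-plus-output-layer step, and all names are illustrative.

```python
def decode_sequence(decoder_step, c_prime, first_input, steps, targets=None):
    """Decode `steps` outputs starting from initial hidden state c_prime.

    If `targets` is given (training with Teacher Forcing), the ground-truth value of the
    previous moment is fed back as the next input; otherwise (testing) the model's own
    previous output is used.
    """
    hidden, prev, outputs = c_prime, first_input, []
    for t in range(steps):
        pred, hidden = decoder_step(prev, hidden)   # y_t = f(y_{t-1}, h_{t-1})
        outputs.append(pred)
        prev = targets[t] if targets is not None else pred
    return outputs

# Toy usage with a stand-in "decoder" that adds 1 to its input, to show the feedback difference.
echo = lambda prev, hidden: (prev + 1, hidden)
print(decode_sequence(echo, c_prime=None, first_input=0, steps=3))                        # free running: [1, 2, 3]
print(decode_sequence(echo, c_prime=None, first_input=0, steps=3, targets=[10, 20, 30]))  # teacher forcing: [1, 11, 21]
```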
Experimental design and result analysis in the embodiment of the invention:
The embodiment of the invention performs a simulation of the attention-based Seq2Seq model, mainly to verify the effectiveness of the method for hypersonic gliding aircraft track prediction. The simulation environment is: an Intel(R) Core(TM) i7-9500 CPU, 16.0 GB memory, and 64-bit Windows 10. The simulation platform is PyCharm, and the deep learning framework is TensorFlow 2.4.
Selection and processing of the data set: the data used in the embodiment of the invention are mainly the flight tracks of a hypersonic gliding aircraft in the jump-glide mode. Jump gliding originates from the Silbervogel ("silver bird") hypersonic aircraft concept of the German scientist Eugen Sänger: the vehicle is first boosted to a certain altitude by a booster and then performs a long, damped oscillating glide down to the earth's surface. The flight path is shown in FIGS. 5 to 7.
Trajectory data of a 1500 s glide segment of a hypersonic gliding aircraft are selected; each track point contains 7-dimensional features, namely the three-dimensional position, the three-dimensional velocity, and the geocentric distance. The experiment uses a sliding-window form: each window has a size of 300 s (200 s input, 100 s output) and slides in steps of one second, giving 1200 windows in total. The ratio of the training set to the test set is 8:2, i.e., 960 training trajectory samples and 240 test trajectory samples. All tracks are normalized according to the preceding formula before training, and the training results are restored by inverse normalization.
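The sliding-window construction described above (300 s windows, 200 s input / 100 s output, 1 s stride) can be sketched as follows; the indexing convention is an assumption, and it yields 1201 windows for a 1500 s track whereas the embodiment reports 1200, so the exact slicing may differ slightly.

```python
import numpy as np

def make_windows(traj, in_len=200, out_len=100, stride=1):
    """Slice a (T, 7) trajectory into (input, output) window pairs."""
    xs, ys = [], []
    for start in range(0, len(traj) - in_len - out_len + 1, stride):
        xs.append(traj[start:start + in_len])                      # 200 s of history
        ys.append(traj[start + in_len:start + in_len + out_len])   # next 100 s to predict
    return np.stack(xs), np.stack(ys)

traj = np.random.rand(1500, 7)    # 1500 s glide-segment trajectory, 7 features per point
x, y = make_windows(traj)
print(x.shape, y.shape)           # (1201, 200, 7) (1201, 100, 7); the embodiment reports 1200 windows
split = int(0.8 * len(x))         # 8:2 train/test split
train_x, test_x = x[:split], x[split:]
```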
The DeepHTP model parameters are shown in Table 1. To prevent overfitting during training, a Dropout mechanism is added between the layers of the encoder and the decoder, i.e., during training a certain proportion of the neurons of a layer are randomly disabled (their outputs are set directly to 0). This parameter is set to 0.8, meaning a keep probability of 0.8, i.e., 20% of the neurons are randomly disabled. It can only be used when training the model and cannot be used during testing, when it is set to 1.
TABLE 1 model parameters
Parameter type | Attribute/number |
Input vector dimension | 7 |
Number of input layer nodes | 36 |
Number of encoder/decoder layers | 3
Number of hidden layer nodes | 50
Output vector dimension | 4 |
Dropout | 0.8 |
Number of training samples per batch (batch_size) | 128
Initial learning rate | 0.01 |
Loss function | MSE |
Optimizer | Adam |
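A hedged note on the Dropout value of 0.8 in Table 1: if implemented with tf.keras, the Dropout layer's rate argument is the fraction of units dropped, so a keep probability of 0.8 (20% of neurons disabled) corresponds to rate=0.2, and Keras disables dropout automatically at inference time.

```python
import tensorflow as tf

# 20% of the activations are randomly zeroed during training (keep probability 0.8);
# at inference (training=False) the layer is an identity mapping.
dropout = tf.keras.layers.Dropout(rate=0.2)
x = tf.random.normal([128, 200, 50])
y_train = dropout(x, training=True)
y_test = dropout(x, training=False)
```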
The network is built as the DeepHTP model according to the above parameter settings. The training set is used to train the network; after training is completed, the model is tested on the test set, and testing a batch of test-set samples takes on the order of milliseconds in total. The evaluation index values of the model on the test set are shown in Table 2. The RMSE is at the kilometer level; the FDE is larger, reaching the ten-kilometer level, because the track prediction error grows with time. R2_square is a coefficient smaller than 1, and the closer it is to 1, the better the fit; in this experiment it reaches 0.973, indicating a good fit. The simulation results are shown in FIGS. 8 to 10: the errors in all directions are at the kilometer level, the predicted track is very close to the actual track, and the geocentric-distance plot shows that the prediction learns the motion trend well with small error.
TABLE 2 model predictive evaluation index
Index | RMSE | ADE | FDE | R2_square
Value | 3.887 | 6.451 | 12.468 | 0.973
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above embodiments are only for illustrating the technical solution of the present invention and not for limiting the same, and although the present invention has been described in detail with reference to the above embodiments, one skilled in the art may make modifications and equivalents to the specific embodiments of the present invention, and any modifications and equivalents not departing from the spirit and scope of the present invention are within the scope of the claims of the present invention.
Claims (8)
1. A method for trajectory prediction of a hypersonic aircraft, comprising the steps of:
acquiring a historical track sequence of the hypersonic aircraft to be subjected to track prediction, and carrying out normalization processing to obtain a normalized historical track sequence; the history track sequence comprises feature vectors at a plurality of preset moments, and the feature vector at each moment comprises a plurality of preset features;
inputting the normalized historical track sequence into a pre-trained track prediction model to obtain a model output result;
performing inverse normalization processing on the model output result to obtain a final track prediction result;
wherein the trajectory prediction model comprises:
the input layer is used for inputting the normalized historical track sequence, carrying out feature dimension expansion on the feature vector at each moment in the historical track sequence, and learning the relation among different features to obtain a dimension-expanded learned historical track sequence;
a codec network, comprising: an encoder, an attention module, and a decoder;
the encoder is used for inputting the history track sequence after the dimension expansion learning and extracting time dimension characteristics to obtain a context vector;
the attention module is used for obtaining the output of each moment of the encoder and carrying out weighted average to obtain an attention value;
the decoder is used for inputting the context vector and the attention value, and decoding the context vector by combining the attention value to obtain an initial model prediction result;
the output layer is used for carrying out dimension adjustment on the initial model prediction result according to a preset dimension output requirement to obtain a model output result;
the step of obtaining the pre-trained track prediction model specifically comprises the following steps:
acquiring historical track sequences of a plurality of hypersonic aircrafts and carrying out normalization processing to form a training sample set;
training the track prediction model based on a gradient descent method by adopting the training sample set to obtain a trained track prediction model;
wherein, dropout layers are added in each layer of the encoder and the decoder, and Dropout is used during training;
during training, a Teacher Forcing strategy is added when the decoder decodes, and the true value output at the previous moment is used as the input of the decoder;
in the coding and decoding network, a feedforward layer and an Add & Norm layer are also arranged between the context vector C output by the encoder and the context vector C' input to the decoder; the feedforward layer is a two-layer fully connected layer, the activation function of the first layer is ReLU, and the second layer does not use an activation function; the Add & Norm layer comprises two parts, namely Add and Norm, where Add is a residual connection and Norm refers to Batch Normalization.
2. The method for predicting the trajectory of a hypersonic vehicle according to claim 1 wherein the expression of the normalization process is:
X_scaler = (X - X_mean) / X_std,
wherein X_mean is the mean value of the sample data, X_std is the standard deviation of the sample data, X is the original data, and X_scaler is the normalized data.
3. The method for predicting the trajectory of a hypersonic vehicle according to claim 2 wherein the inverse normalization process is expressed as:
Y = Y_scaler · X_std + X_mean,
wherein Y_scaler represents the model output result, and Y is the final track prediction result.
4. The method of claim 1, wherein the encoder and decoder each use a three-layer gated loop unit.
5. The method for predicting the trajectory of a hypersonic aircraft according to claim 1 wherein the expression for calculating the attention profile in the attention module is:
α_(t,i) = softmax(s(x_i, h_t)) = exp(s(x_i, h_t)) / Σ_j exp(s(x_j, h_t)), with the scoring function s(x_i, h_t) = V^T · tanh(W_x · x_i + W_h · h_t + b),
wherein s(·,·) represents the attention scoring function; x_i represents the output of the encoder at the i-th moment; h_t represents the hidden state of the decoder at the t-th moment; and W_x, W_h, b, V are parameters to be learned;
the expression for calculating the obtained attention value is:
att = Σ_i α_(t,i) · x_i.
6. a trajectory prediction system for a hypersonic aircraft, comprising:
the preprocessing module is used for acquiring a history track sequence of the hypersonic aircraft to be subjected to track prediction and carrying out normalization processing to acquire a history track sequence after normalization processing; the history track sequence comprises feature vectors at a plurality of preset moments, and the feature vector at each moment comprises a plurality of preset features;
the model output result acquisition module is used for inputting the normalized historical track sequence into a pre-trained track prediction model to obtain a model output result;
the track prediction result acquisition module is used for carrying out inverse normalization processing on the model output result to obtain a final track prediction result;
wherein the trajectory prediction model comprises:
the input layer is used for inputting the normalized historical track sequence, carrying out feature dimension expansion on the feature vector at each moment in the historical track sequence, and learning the relation among different features to obtain a dimension-expanded learned historical track sequence;
a codec network, comprising: an encoder, an attention module, and a decoder;
the encoder is used for inputting the history track sequence after the dimension expansion learning and extracting time dimension characteristics to obtain a context vector;
the attention module is used for obtaining the output of each moment of the encoder and carrying out weighted average to obtain an attention value;
the decoder is used for inputting the context vector and the attention value, and decoding the context vector by combining the attention value to obtain an initial model prediction result;
the output layer is used for carrying out dimension adjustment on the initial model prediction result to obtain a model output result;
the step of obtaining the pre-trained track prediction model specifically comprises the following steps:
acquiring historical track sequences of a plurality of hypersonic aircrafts and carrying out normalization processing to form a training sample set;
training the track prediction model based on a gradient descent method by adopting the training sample set to obtain a trained track prediction model;
wherein, dropout layers are added in each layer of the encoder and the decoder, and Dropout is used during training;
during training, a Teacher Forcing strategy is added when the decoder decodes, and the true value output at the previous moment is used as the input of the decoder;
in the coding and decoding network, a feedforward layer and an Add & Norm layer are also arranged between the context vector C output by the encoder and the context vector C' input to the decoder; the feedforward layer is a two-layer fully connected layer, the activation function of the first layer is ReLU, and the second layer does not use an activation function; the Add & Norm layer comprises two parts, namely Add and Norm, where Add is a residual connection and Norm refers to Batch Normalization.
7. An electronic device, comprising: a processor; and a memory for storing computer program instructions; characterized in that,
the computer program instructions, when loaded and executed by the processor, perform the trajectory prediction method of a hypersonic aircraft according to any one of claims 1 to 5.
8. A computer-readable storage medium storing computer program instructions, characterized in that the computer program instructions, when loaded and executed by a processor, perform the trajectory prediction method of a hypersonic flight vehicle according to any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110603010.4A CN113269363B (en) | 2021-05-31 | 2021-05-31 | Trajectory prediction method, system, equipment and medium of hypersonic aircraft |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110603010.4A CN113269363B (en) | 2021-05-31 | 2021-05-31 | Trajectory prediction method, system, equipment and medium of hypersonic aircraft |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113269363A CN113269363A (en) | 2021-08-17 |
CN113269363B true CN113269363B (en) | 2023-06-30 |
Family
ID=77233694
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110603010.4A Active CN113269363B (en) | 2021-05-31 | 2021-05-31 | Trajectory prediction method, system, equipment and medium of hypersonic aircraft |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113269363B (en) |
Families Citing this family (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114048889B (en) * | 2021-10-08 | 2022-09-06 | 天津大学 | Aircraft trajectory prediction method based on long-term and short-term memory network |
CN114022847B (en) * | 2021-11-23 | 2024-07-05 | 清华大学 | Method, system, equipment and storage medium for predicting intelligent body track |
CN114239935B (en) * | 2021-12-06 | 2024-09-06 | 中国电子科技集团公司第十五研究所 | Prediction method for non-uniform track sequence |
CN114387313A (en) * | 2022-01-07 | 2022-04-22 | 武汉东信同邦信息技术有限公司 | Motion trajectory prediction method, device, equipment and storage medium |
CN115047894B (en) * | 2022-04-14 | 2023-09-15 | 中国民用航空总局第二研究所 | Unmanned aerial vehicle track measuring and calculating method, electronic equipment and storage medium |
CN114740894B (en) * | 2022-05-13 | 2022-08-26 | 北京航空航天大学 | Aircraft guidance method and system based on attention mechanism and gated cycle unit |
CN115169233B (en) * | 2022-07-15 | 2023-03-24 | 中国人民解放军32804部队 | Hypersonic aircraft uncertain trajectory prediction method based on depth Gaussian process |
CN116361662B (en) * | 2023-05-31 | 2023-08-15 | 中诚华隆计算机技术有限公司 | Training method of machine learning model and performance prediction method of quantum network equipment |
CN116956647B (en) * | 2023-09-20 | 2023-12-19 | 成都流体动力创新中心 | Pneumatic data fusion method and system |
CN117076893B (en) * | 2023-10-16 | 2024-01-09 | 中国海洋大学 | Sound velocity distribution forecasting method based on long-term and short-term memory neural network |
CN117786534B (en) * | 2024-01-22 | 2024-07-12 | 哈尔滨工业大学 | Hypersonic aircraft motion behavior recognition method and system based on deep learning |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111783307A (en) * | 2020-07-07 | 2020-10-16 | 哈尔滨工业大学 | Hypersonic aircraft state estimation method |
CN111931287A (en) * | 2020-07-06 | 2020-11-13 | 北京电子工程总体研究所 | Near space hypersonic target trajectory prediction method |
JP6873519B1 (en) * | 2020-04-24 | 2021-05-19 | 中国人民解放軍国防科技大学 | Trajectory prediction method and system |
CN112859898A (en) * | 2021-01-18 | 2021-05-28 | 中山大学 | Aircraft trajectory prediction method based on two-channel bidirectional neural network |
-
2021
- 2021-05-31 CN CN202110603010.4A patent/CN113269363B/en active Active
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6873519B1 (en) * | 2020-04-24 | 2021-05-19 | 中国人民解放軍国防科技大学 | Trajectory prediction method and system |
CN111931287A (en) * | 2020-07-06 | 2020-11-13 | 北京电子工程总体研究所 | Near space hypersonic target trajectory prediction method |
CN111783307A (en) * | 2020-07-07 | 2020-10-16 | 哈尔滨工业大学 | Hypersonic aircraft state estimation method |
CN112859898A (en) * | 2021-01-18 | 2021-05-28 | 中山大学 | Aircraft trajectory prediction method based on two-channel bidirectional neural network |
Non-Patent Citations (2)
Title |
---|
Li Fan; Xiong Jiajun; Lan Xuhui; Bi Hongkui; Chen Xin. NSHV trajectory prediction algorithm based on aerodynamic acceleration EMD decomposition. Journal of Systems Engineering and Electronics, 2021, Abstract. *
Short-term AIS trajectory sequence prediction model based on improved Seq2Seq; You Lan; Han Xuewei; He Zhengwei; Xiao Siyu; He Du; Pan Xiaomeng; Computer Science (09); full text *
Also Published As
Publication number | Publication date |
---|---|
CN113269363A (en) | 2021-08-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113269363B (en) | Trajectory prediction method, system, equipment and medium of hypersonic aircraft | |
CN109800294B (en) | Autonomous evolution intelligent dialogue method, system and device based on physical environment game | |
CN112766561B (en) | Attention mechanism-based generation type countermeasure track prediction method | |
Leibfried et al. | A deep learning approach for joint video frame and reward prediction in atari games | |
CN113110592A (en) | Unmanned aerial vehicle obstacle avoidance and path planning method | |
CN112859898B (en) | Aircraft trajectory prediction method based on two-channel bidirectional neural network | |
CN113033118B (en) | Autonomous floating control method of underwater vehicle based on demonstration data reinforcement learning technology | |
CN113887789B (en) | Improved ship track prediction method and device based on cyclic neural network | |
CN118378054B (en) | Real-time reliability assessment system and method for submarine-launched unmanned aerial vehicle | |
CN114166509A (en) | Motor bearing fault prediction method | |
CN114298183B (en) | Intelligent recognition method for flight actions | |
CN117174163A (en) | Virus evolution trend prediction method and system | |
CN113406957B (en) | Mobile robot autonomous navigation method based on immune deep reinforcement learning | |
CN116736729B (en) | Method for generating perception error-resistant maneuvering strategy of air combat in line of sight | |
CN116432539A (en) | Time consistency collaborative guidance method, system, equipment and medium | |
Bartusiak et al. | Predicting Hypersonic Glide Vehicle Behavior With Stochastic Grammars | |
CN113300884B (en) | GWO-SVR-based step-by-step network flow prediction method | |
CN115453880A (en) | Training method of generative model for state prediction based on antagonistic neural network | |
Zhang et al. | Learning efficient sparse structures in speech recognition | |
CN117556681B (en) | Intelligent air combat decision method, system and electronic equipment | |
Shao et al. | A novel recurrent convolutional neural network-based estimation method for switching guidance law | |
CN112734039B (en) | Virtual confrontation training method, device and equipment for deep neural network | |
McKenna et al. | Online Parameter Estimation Within Trajectory Optimization for Dynamic Soaring | |
CN114970714B (en) | Track prediction method and system considering uncertain behavior mode of moving target | |
Feng et al. | On close-range air combat based on hidden Markov model |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |