CN115730743A - Battlefield combat trend prediction method based on deep neural network - Google Patents

Battlefield combat trend prediction method based on deep neural network

Info

Publication number
CN115730743A
CN115730743A (application CN202211548950.9A)
Authority
CN
China
Prior art keywords
neural network
battlefield
model
recurrent neural
network model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211548950.9A
Other languages
Chinese (zh)
Inventor
郑晓军
郭星泽
童小英
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dalian Jiaotong University
Original Assignee
Dalian Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dalian Jiaotong University filed Critical Dalian Jiaotong University
Priority to CN202211548950.9A priority Critical patent/CN115730743A/en
Publication of CN115730743A publication Critical patent/CN115730743A/en
Pending legal-status Critical Current

Classifications

    • Y — GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 — TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P — CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 — Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30 — Computing systems specially adapted for manufacturing

Landscapes

  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a battlefield combat trend prediction method based on a deep neural network, comprising the following steps: S1: constructing a battlefield target position prediction model; S2: constructing an RNN recurrent neural network model; S3: initializing the RNN recurrent neural network model; S4: setting a loss function to train and optimize the RNN recurrent neural network model; S5: determining the optimal parameters of the battlefield target position prediction model. The method builds a battlefield target position prediction model to predict the position of an enemy target at each moment on the battlefield. On this basis it provides a construction method for a deep neural network model: the model is iteratively trained on historical position data of enemy targets, and the trained model is then checked against verification data to judge its accuracy. The method can respond in time to rapidly changing battlefield situations and improves the timeliness of our side's prediction of enemy combat trend information.

Description

Battlefield combat trend prediction method based on deep neural network
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to a battlefield combat trend prediction method based on a deep neural network.
Background
In the informatized defense battlefield, the jamming and deception countermeasures of both sides are continuously upgraded: the adversary generates large amounts of data containing false information through various jamming means and deceptive tactical behaviors, in order to mislead commanders and conceal its real combat intention. Combat trend prediction performs cross-checking and fusion processing of this data to provide commanders with a more truthful and accurate information-processing result.
Existing battlefield trend prediction methods, when facing a complex battlefield environment, are time-consuming, labor-intensive, and inefficient, and can hardly meet the requirements of modern battlefield trend prediction.
Disclosure of Invention
The invention provides a battlefield combat trend prediction method based on a deep neural network, aiming at solving the problems that existing combat trend prediction methods are slow, inefficient, and difficult to meet the requirements of modern battlefield trend prediction.
The technical scheme adopted by the invention for realizing the purpose is as follows: a battlefield combat trend prediction method based on a deep neural network comprises the following steps:
s1: constructing a battlefield target position prediction model, wherein the battlefield target position prediction model comprises an input layer, an RNN recurrent neural network model and an output layer;
s2: constructing an RNN recurrent neural network model;
s3: initializing an RNN recurrent neural network model;
s4: setting a loss function to train and optimize the RNN recurrent neural network model;
s5: and determining optimal parameters of a battlefield target position prediction model.
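Taken together, steps S1–S5 define a model that maps a sequence of observed 2-D target positions to the position at the next moment. As a minimal sketch of that input/output contract only — a hypothetical constant-velocity baseline for comparison, not the patented RNN model — consider:

```python
def predict_next_position(track):
    """Predict the next (x, y) position from a track of observed positions.

    track: list of (x, y) tuples, oldest first. This constant-velocity
    baseline simply extrapolates the last observed step; the patented
    method replaces it with a trained RNN over the same interface.
    """
    if len(track) < 2:
        raise ValueError("need at least two observations")
    (x0, y0), (x1, y1) = track[-2], track[-1]
    return (2 * x1 - x0, 2 * y1 - y0)

# A target moving in a straight line at constant speed:
track = [(0.0, 0.0), (1.0, 0.5), (2.0, 1.0)]
print(predict_next_position(track))  # (3.0, 1.5)
```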
Preferably, in S1, the input layer is each target position sequence file, and the output layer is each target position at the next time.
Preferably, in S2, the RNN recurrent neural network model is formed by sequentially connecting feed-forward neurons shared by weights, at time t, the input of the hidden layer is formed by the latest data of the input layer at the current time and the output of the hidden layer at time t-1, the neurons complete mapping of the input and the output to obtain the output result at the current time, and the shared network parameters are updated and optimized, and all input data before time t still have influence on the network in a memory manner, thereby forming a feedback network structure.
Preferably, in S3, the weight matrix parameters in the recurrent neural network model are initialized with a Gaussian random distribution function with mean 0 and standard deviation 1/√n (n being the number of inputs to a neuron).
Preferably, in S3, after initializing the weight of the parameter, the basic structure definition expression of the RNN recurrent neural network model is as follows:
x_t ∈ R^d → x_t ∈ R^1000
h_t ∈ R^d → h_t ∈ R^1000
s_t ∈ R^(D_h)
U ∈ R^(D_h × d)
W ∈ R^(D_h × D_h)
V ∈ R^(d × D_h)
in the formula: x_t is the input at time step t, d is the vector length, s_t is the hidden-layer state at time step t, D_h is the number of neurons, U, W and V are the weight matrices, R denotes the real number space, and h_t is the output at time t, representing the target intention prediction result at time t.
Preferably, in S4, the process of training the optimized RNN recurrent neural network model is as follows:
s4-1: determining that the input dimension of the network is 2 dimensions and the output dimension is 2 dimensions;
s4-2: splitting the data into a test data set and a training (check) data set in a ratio of 2:8;
s4-3: preparing the data, wherein the amount of prepared data is not less than the sequence length and the number of data samples captured in one training pass, and processing the data through normalization coding;
S4-4: initializing basic parameters and a network model;
s4-5: the neural network model is optimized using a loss function.
Preferably, in S4, the loss function is a cross entropy function, and the expression is as follows:
L(y, ŷ) = −Σ_x y(x) log ŷ(x)
in the formula: y is the true output value, ŷ is the output value predicted by the neural network, and x indexes the samples;
let L_t be the loss function at time step t in the recurrent neural network; then the expression is as follows:
L_t(y_t, ŷ_t) = −y_t log ŷ_t
if the output time series number is T, the total loss function of the recurrent neural network model is expressed as follows:
L = Σ_{t=1}^{T} L_t(y_t, ŷ_t) = −Σ_{t=1}^{T} y_t log ŷ_t
preferably, in S4, the RNN recurrent neural network model is trained and optimized by using a back propagation algorithm and a stochastic gradient descent algorithm.
Preferably, in S5, the optimal parameters of the battlefield target position prediction model are as follows: the sequence length is 25; the learning rate is 0.01; the RNN type is a GRU network; the output layer activation function is tanh; the number of hidden units is 20; the batch size is 25; the number of training epochs is 100; and the optimizer type is momentum.
The method constructs a battlefield target position prediction model: in use, the detected position information is input and the position of the enemy target at the next moment is output, so that the position of the enemy target at each moment on the battlefield can be predicted. On this basis, after analyzing the inputs and outputs for feature extraction, a construction method for a deep neural network model is provided. The model is iteratively trained on historical position data of enemy targets, then checked against verification data to judge its accuracy. The method can respond in time to rapidly changing battlefield situations, improves the timeliness of our side's prediction of enemy combat trend information, and provides a new solution for modern battlefield combat trend prediction.
Drawings
FIG. 1 is a diagram of a battlefield target position prediction model of the present invention;
FIG. 2 is a diagram of an RNN recurrent neural network model architecture of the present invention;
FIG. 3 is a flow chart of the present invention for training an optimized RNN recurrent neural network model;
FIG. 4 is a schematic illustration of the present invention predicting the location of an enemy target;
FIG. 5 is a diagram illustrating the prediction of an enemy target trend in accordance with the present invention.
Detailed Description
The invention discloses a battlefield combat trend prediction method based on a deep neural network, which comprises the following steps:
s1: a battlefield target position prediction model is constructed, and as shown in figure 1, the battlefield target position prediction model comprises an input layer, an RNN recurrent neural network model and an output layer, wherein the input layer is a sequence file of each target position, and the output layer is each target position at the next moment.
S2: an RNN recurrent neural network model is constructed, and shown in figure 2, the expanded neurons are connected in sequence by feedforward neurons shared by weights. At the time t, the input of the hidden layer is composed of the latest data of the input layer at the current time and the output of the hidden layer at the time t-1, the neuron finishes the mapping of the input and the output to obtain the result which should be output at the current time, the shared network parameters are updated and optimized, all input data before the time t still have memory to influence the network, and a feedback network structure is formed, wherein the meaning of each variable in the diagram is as follows:
x_t: the input at time t, such as target speed, distance, altitude, orientation, and type information in target intention prediction;
s_t: the hidden state at time t, obtained from the hidden state at the previous time and the current input; the expression is as follows:
s_t = f(s_{t-1}, x_t) = f(U x_t + W s_{t-1})
in the formula: f is generally a nonlinear activation function; x_t is a vector that is transformed into another vector by the weight matrix U; s_{t-1} is a vector that is transformed into another vector by the weight matrix W; when calculating s_0, i.e. the first hidden-layer state, the previous state is needed but does not exist, and is generally set to 0.
h_t: the output at time t, representing the target intention prediction result at time t, i.e. the predicted target coordinates; the output sequence expression is as follows:
H = [h_1, …, h_{t-1}, h_t, h_{t+1}, …, h_T]
in the formula: h_t = g(V s_t), where g is an activation function and V is a weight matrix.
W, U and V are the network parameters of the RNN recurrent neural network: W is the state transition matrix, i.e. the weight parameter matrix of the hidden layer state S; U is the input weight matrix, i.e. the weight parameter matrix of the input sequence information X; V is the weight parameter matrix of the output sequence information H.
S3: The RNN recurrent neural network model is initialized. Training works best when the network weight parameters are initialized from a Gaussian random distribution with mean 0 and standard deviation 1/√n (n being the number of inputs to a neuron), so each weight parameter in the weight matrices U, W, V of the recurrent neural network model is initialized with such a Gaussian random distribution function. To initialize the parameter weights of the recurrent neural network model, suppose the input x_t at a time step has vector length d, and the corresponding hidden layer state s_t in the time step has D_h neurons; then, according to the neural network model, the basic structure is defined as follows:
x_t ∈ R^d → x_t ∈ R^1000
h_t ∈ R^d → h_t ∈ R^1000
s_t ∈ R^(D_h)
U ∈ R^(D_h × d)
W ∈ R^(D_h × D_h)
V ∈ R^(d × D_h)
in the formula: x_t is the input at time step t, d is the vector length, s_t is the hidden-layer state at time step t, D_h is the number of neurons, U, W and V are the weight matrices, R denotes the real number space, and h_t is the output at time t, representing the target intention prediction result at time t. The sizes of the parameter matrices of the input layer, hidden layer and output layer in the recurrent neural network model are thus determined, and the sizes of the weight matrices U, W and V in the network are also determined, so the parameters needed to run the recurrent neural network are known.
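The initialization above can be sketched in plain Python. The standard deviation 1/√n, with n taken as each matrix's fan-in, is an assumption (the exact formula image does not reproduce in the source); the shapes follow the basic-structure definitions.

```python
import math
import random

def init_rnn_weights(d, d_h, seed=0):
    """Initialize the RNN weight matrices U, W, V as nested lists.

    Each entry is drawn from a Gaussian with mean 0 and standard deviation
    1/sqrt(n), where n is the fan-in of the matrix (an assumed reading of
    the description). Shapes follow the text: U maps the input (d -> D_h),
    W maps the previous hidden state (D_h -> D_h), V maps the hidden state
    to the output (D_h -> d).
    """
    rng = random.Random(seed)

    def gauss_matrix(rows, cols, fan_in):
        std = 1.0 / math.sqrt(fan_in)
        return [[rng.gauss(0.0, std) for _ in range(cols)] for _ in range(rows)]

    U = gauss_matrix(d_h, d, d)      # D_h x d
    W = gauss_matrix(d_h, d_h, d_h)  # D_h x D_h
    V = gauss_matrix(d, d_h, d_h)    # d x D_h
    return U, W, V

U, W, V = init_rnn_weights(d=2, d_h=20)
print(len(U), len(U[0]))  # 20 2
```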
The sequence data is used as the input of the recurrent neural network model and propagated forward in sequence order; for each time step, the hidden layer state s_t and the output h_t must be computed once. Starting from the first time step t = 0, s_0 and the output h_0 are calculated; then, for the second time step t = 1, the hidden state s_0 of the first time step and the sequence data x_1 are used as network inputs, and s_1 and h_1 are output; and so on, until the last time step T.
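A stdlib-only sketch of this unrolled forward pass, assuming the recurrence s_t = tanh(U x_t + W s_{t-1}) and output h_t = V s_t with zero initial state (tanh here stands in for the activation functions f and g, which the text leaves unspecified):

```python
import math

def matvec(M, v):
    """Matrix-vector product for nested-list matrices."""
    return [sum(m_ij * v_j for m_ij, v_j in zip(row, v)) for row in M]

def vec_add(a, b):
    return [x + y for x, y in zip(a, b)]

def rnn_forward(xs, U, W, V, d_h):
    """Unrolled forward pass: s_t = tanh(U x_t + W s_{t-1}), h_t = V s_t.

    xs: list of input vectors x_0..x_T. The initial hidden state is the
    zero vector, as in the description. Returns the outputs h_0..h_T.
    """
    s = [0.0] * d_h
    hs = []
    for x in xs:
        s = [math.tanh(v) for v in vec_add(matvec(U, x), matvec(W, s))]
        hs.append(matvec(V, s))
    return hs

# Tiny example: d = 1 input feature, D_h = 2 hidden units, zero recurrence.
U = [[1.0], [0.5]]
W = [[0.0, 0.0], [0.0, 0.0]]
V = [[1.0, 1.0]]
hs = rnn_forward([[1.0], [0.0]], U, W, V, d_h=2)
```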
S4: setting a loss function to train an optimized RNN recurrent neural network model, as shown in FIG. 3, the process of training the optimized RNN recurrent neural network model is as follows:
s4-1: determining the network input dimension to be 2-dimensional (i.e., target X and Y coordinates) and the output dimension to be 2-dimensional (i.e., predicted target next location X and Y);
s4-2: splitting the data into a test data set and a training (check) data set in a ratio of 2:8;
s4-3: preparing the data, wherein the data size is not less than the sequence length (Seq_Length) and the number of data samples captured in one training pass (Batch_Size), and processing the data through normalization coding so that it lies between -1 and 1;
s4-4: initializing the basic parameters and constructing the network model, i.e. the operations in S3;
s4-5: optimizing the neural network model by using a loss function, and specifically comprising the following steps of:
(1) Setting a loss function: in order to train the recurrent neural network model, a way to measure the error produced by the model output is required; this is the Loss Function. The common loss function for the recurrent neural network model is the cross entropy function, whose expression is as follows:
L(y, ŷ) = −Σ_x y(x) log ŷ(x)
in the formula: y is the true output value, ŷ is the output value predicted by the neural network, and x indexes the samples;
the objective is to optimize the weight parameter matrices U, W, V in the recurrent neural network so that the output value of the input sequence data after passing through the network is closer to the true output value; i.e. it is desired to find a set of parameters that minimizes the loss function on the given training data. The larger the difference between y and ŷ, the larger the error of the network model and the larger its loss.
Let L_t be the loss function at time step t in the recurrent neural network; then the expression is as follows:
L_t(y_t, ŷ_t) = −y_t log ŷ_t
if the output time series number is T, the total loss function of the recurrent neural network model is expressed as follows:
L = Σ_{t=1}^{T} L_t(y_t, ŷ_t) = −Σ_{t=1}^{T} y_t log ŷ_t
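The per-step cross-entropy loss and its sum over the output time series can be sketched directly:

```python
import math

def cross_entropy(y, y_hat, eps=1e-12):
    """L(y, y_hat) = -sum_x y(x) * log(y_hat(x)); eps guards log(0)."""
    return -sum(t * math.log(p + eps) for t, p in zip(y, y_hat))

def total_loss(ys, y_hats):
    """Total loss of the recurrent model: the sum of per-time-step losses L_t."""
    return sum(cross_entropy(y, y_hat) for y, y_hat in zip(ys, y_hats))

# Two time steps with one-hot true outputs:
ys = [[1.0, 0.0], [0.0, 1.0]]
y_hats = [[0.9, 0.1], [0.2, 0.8]]
loss = total_loss(ys, y_hats)  # equals -log(0.9) - log(0.8)
```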
(2) The difference between the network's predicted output value and the true output value must be compared in the neural network model: the loss function is optimized with the stochastic gradient descent (SGD) algorithm to update the network parameters, and the loss function is differentiated with the back propagation algorithm to obtain the gradients of all parameters in the network. The back propagation algorithm is a common method used together with an optimization method (e.g. gradient descent) to train an artificial neural network: it computes the gradient of the loss function with respect to all weights in the network, and this gradient is fed back to the optimization method, which updates the weights to minimize the loss function. The back propagation algorithm refers only to the method of computing the gradient; another algorithm, such as stochastic gradient descent, uses the gradient for learning.
In the iteration process, the learning rate gently pushes the network parameters to change in the direction of error reduction, thereby updating the weight parameter matrices U, W, V in the recurrent neural network model; this is the method by which the weight parameters of the recurrent neural network are determined.
The weight parameters U, W, V in the recurrent neural network model are shared across all time steps of the network, so the output gradient of each time step depends not only on the computation of the current time step t but also includes the computation of the previous time step t-1. By feeding the training samples (x, y) through the back propagation algorithm, the derivatives of the weight parameters are obtained as follows:
∂L/∂U, ∂L/∂W, ∂L/∂V
The goal of the back propagation algorithm is to obtain these derivatives, i.e. the rates of change of the 3 weight parameters in the network, and to optimize the network parameters U, W, V accordingly, as follows (η being the learning rate):
U ← U − η ∂L/∂U, W ← W − η ∂L/∂W, V ← V − η ∂L/∂V
Since the weights are shared, ∂L/∂W = Σ_t ∂L_t/∂W (and likewise for U and V); therefore only the partial derivative of the loss function at each time needs to be calculated, giving the derivative of the loss at that time with respect to the weights, and the total derivative is then obtained by summation.
The back propagation algorithm considers the longitudinal propagation of the gradient between the upper layer and the lower layer, optimizes the parameters of the whole model, verifies the trained model by using the test data set, readjusts the parameters and puts the trained model into use.
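The update rule θ ← θ − η ∂L/∂θ can be illustrated on a toy loss, with a central-difference numerical gradient standing in for the analytic back-propagation gradient (back propagation computes the same quantity, but analytically and far more efficiently):

```python
def numerical_grad(loss_fn, params, eps=1e-6):
    """Central-difference gradient of loss_fn at params; a slow stand-in
    for back propagation, used here only for illustration."""
    grads = []
    for i in range(len(params)):
        p_plus = params[:]
        p_plus[i] += eps
        p_minus = params[:]
        p_minus[i] -= eps
        grads.append((loss_fn(p_plus) - loss_fn(p_minus)) / (2 * eps))
    return grads

def sgd_step(params, grads, lr):
    """theta <- theta - lr * dL/dtheta, the update rule from the text."""
    return [p - lr * g for p, g in zip(params, grads)]

def loss_fn(p):
    # Toy quadratic loss with minimum at (3, -1).
    return (p[0] - 3.0) ** 2 + (p[1] + 1.0) ** 2

params = [0.0, 0.0]
for _ in range(200):
    params = sgd_step(params, numerical_grad(loss_fn, params), lr=0.1)
# params converges toward [3.0, -1.0]
```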
S5: the step of determining the optimal parameters of the battlefield target position prediction model comprises the following steps: in the present invention, the parameters of the model for predicting the target activity trend are shown in table 1:
TABLE 1 target Activity Trend prediction model parameters
Figure BDA0003981307150000064
First, the learning rate of the neural network model is determined: when the learning rate is 0.001, 0.002 or 0.005, the calculation result is still acceptable, but the calculation takes too long and the efficiency is too low; when the learning rate is 0.01, the calculation is more efficient and the result more stable, making it the best choice.
Then, the parameters are adjusted one at a time with the others held at their optimal values, to ensure that the optimal parameters are reasonable and scientific. The Seq_Length parameter is fixed at 25, i.e. 25 data points collected over four hours; the Epoch, i.e. the number of iterations, is fixed at 100. The other parameters are adjusted in the following order: number of hidden layer units; RNN type; output layer activation function; Batch_Size, i.e. the sample batch size; optimizer type (an optimizer is an algorithm or method that modifies neural network properties, such as weights and learning rate, to reduce the loss); and learning rate. The principle of parameter adjustment is to adjust the above parameters in sequence without changing the other parameters.
After adjusting each parameter, the optimal parameters of the model are confirmed as: sequence length 25; learning rate 0.01; RNN type GRU network; output layer activation function tanh; number of hidden units 20; batch size 25; number of training epochs 100; optimizer type momentum.
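The confirmed optimal parameters can be collected in a configuration object. The patent does not name a training framework, so the key names below are illustrative only:

```python
# The optimal hyperparameters reported in S5 / Table 1, gathered as a plain
# configuration dict (key names are illustrative, not from the source).
OPTIMAL_PARAMS = {
    "seq_length": 25,        # 25 positions collected over four hours
    "learning_rate": 0.01,
    "rnn_type": "GRU",
    "output_activation": "tanh",
    "hidden_units": 20,
    "batch_size": 25,
    "epochs": 100,
    "optimizer": "momentum",
}

def validate(cfg):
    """Sanity-check a training configuration before launching a run."""
    assert cfg["learning_rate"] > 0 and cfg["epochs"] > 0
    assert cfg["rnn_type"] in {"RNN", "GRU", "LSTM"}
    return cfg
```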
After each model parameter is adjusted, the observed values of the enemy are input, the enemy target trend is predicted, and the prediction results are shown in FIG. 4 and FIG. 5.
The invention constructs a battlefield target position prediction model: in use, the detected position information is input and the position of the enemy target at the next moment is output, so that the position of the enemy target at each moment on the battlefield can be predicted. After analyzing the inputs and outputs for feature extraction, a construction method for a deep neural network model is provided on this basis. The model is iteratively trained on historical position data of enemy targets, then checked against verification data to judge its accuracy, and can respond in time to rapidly changing battlefield situations. Through deep learning and training of the network model on large amounts of data, efficient and accurate battlefield trend prediction is finally realized; the timeliness of our side's prediction of enemy combat trend information is improved, and a new solution is provided for modern battlefield combat trend prediction.
While the invention has been described with reference to exemplary embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted for elements thereof without departing from the spirit and scope of the invention. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the invention without departing from the essential scope thereof. Therefore, it is intended that the invention not be limited to the particular embodiment disclosed, but that the invention will include all embodiments falling within the scope of the appended claims.

Claims (9)

1. A battlefield combat trend prediction method based on a deep neural network is characterized by comprising the following steps:
s1: constructing a battlefield target position prediction model, wherein the battlefield target position prediction model comprises an input layer, an RNN recurrent neural network model and an output layer;
s2: constructing an RNN recurrent neural network model;
s3: initializing an RNN recurrent neural network model;
s4: setting a loss function to train and optimize the RNN recurrent neural network model;
s5: and determining optimal parameters of a battlefield target position prediction model.
2. The method as claimed in claim 1, wherein in S1, the input layer is a sequence file of target positions, and the output layer is the target positions at the next moment.
3. The method as claimed in claim 1, wherein in S2, the RNN recurrent neural network model is formed by sequentially connecting feed-forward neurons sharing weights, at time t, the input of the hidden layer is formed by the latest data of the input layer at the current time and the output of the hidden layer at time t-1, the neurons complete the mapping of the input and the output to obtain the output result at the current time, the shared network parameters are updated and optimized, and all input data before time t still have influence on the network in a memory manner to form a feedback network structure.
4. The method as claimed in claim 1, wherein in S3, the weight matrix parameters in the recurrent neural network model are initialized with a Gaussian random distribution function with mean 0 and standard deviation 1/√n (n being the number of inputs to a neuron).
5. The method as claimed in claim 4, wherein in the step S3, after the parameter weight is initialized, the basic structure of the RNN recurrent neural network model defines the following expression:
x_t ∈ R^d → x_t ∈ R^1000
h_t ∈ R^d → h_t ∈ R^1000
s_t ∈ R^(D_h)
U ∈ R^(D_h × d)
W ∈ R^(D_h × D_h)
V ∈ R^(d × D_h)
in the formula: x_t is the input at time step t, d is the vector length, s_t is the hidden-layer state at time step t, D_h is the number of neurons, U, W and V are the weight matrices, R denotes the real number space, and h_t is the output at time t, representing the target intention prediction result at time t.
6. The method for predicting the battlefield battle trend based on the deep neural network as claimed in claim 1, wherein in the step S4, the process of training the optimized RNN recurrent neural network model is as follows:
s4-1: determining that the input dimension of the network is 2 dimensions and the output dimension is 2 dimensions;
s4-2: splitting the data into a test data set and a training (check) data set in a ratio of 2:8;
s4-3: preparing the data, wherein the amount of prepared data is not less than the sequence length and the number of data samples captured in one training pass, and processing the data through normalization coding;
S4-4: initializing basic parameters and a network model;
s4-5: the neural network model is optimized using a loss function.
7. The method for predicting the battlefield battle trend based on the deep neural network as claimed in claim 6, wherein the loss function in S4-5 is a cross entropy function, and the expression is as follows:
L(y, ŷ) = −Σ_x y(x) log ŷ(x)
in the formula: y is the true output value, ŷ is the output value predicted by the neural network, and x indexes the samples;
let L_t be the loss function at time step t in the recurrent neural network; then the expression is as follows:
L_t(y_t, ŷ_t) = −y_t log ŷ_t
if the output time series number is T, the total loss function of the recurrent neural network model is expressed as follows:
L = Σ_{t=1}^{T} L_t(y_t, ŷ_t) = −Σ_{t=1}^{T} y_t log ŷ_t
8. the method as claimed in claim 6, wherein in S4-5, the RNN recurrent neural network model is trained and optimized by using a back propagation algorithm and a stochastic gradient descent algorithm.
9. The method as claimed in claim 1, wherein in S5, the optimal parameters of the battlefield target position prediction model are as follows: the sequence length is 25; the learning rate is 0.01; the RNN type is a GRU network; the output layer activation function is tanh; the number of hidden units is 20; the batch size is 25; the number of training epochs is 100; and the optimizer type is momentum.
CN202211548950.9A 2022-12-05 2022-12-05 Battlefield combat trend prediction method based on deep neural network Pending CN115730743A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211548950.9A CN115730743A (en) 2022-12-05 2022-12-05 Battlefield combat trend prediction method based on deep neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211548950.9A CN115730743A (en) 2022-12-05 2022-12-05 Battlefield combat trend prediction method based on deep neural network

Publications (1)

Publication Number Publication Date
CN115730743A true CN115730743A (en) 2023-03-03

Family

ID=85300126

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211548950.9A Pending CN115730743A (en) 2022-12-05 2022-12-05 Battlefield combat trend prediction method based on deep neural network

Country Status (1)

Country Link
CN (1) CN115730743A (en)


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116523120A (en) * 2023-04-14 2023-08-01 成都飞机工业(集团)有限责任公司 Combat system health state prediction method
CN116451584A (en) * 2023-04-23 2023-07-18 广东云湃科技有限责任公司 Thermal stress prediction method and system based on neural network
CN116451584B (en) * 2023-04-23 2023-11-03 广东云湃科技有限责任公司 Thermal stress prediction method and system based on neural network

Similar Documents

Publication Publication Date Title
CN115730743A (en) Battlefield combat trend prediction method based on deep neural network
CN108549233B (en) Unmanned aerial vehicle air combat maneuver game method with intuitive fuzzy information
CN107272403A (en) A kind of PID controller parameter setting algorithm based on improvement particle cluster algorithm
CN110442129B (en) Control method and system for multi-agent formation
Hu et al. Improved Ant Colony Optimization for Weapon‐Target Assignment
CN113159266B (en) Air combat maneuver decision method based on sparrow searching neural network
CN112577507A (en) Electric vehicle path planning method based on Harris eagle optimization algorithm
CN113469891A (en) Neural network architecture searching method, training method and image completion method
CN115047907B (en) Air isomorphic formation command method based on multi-agent PPO algorithm
CN114281103B (en) Aircraft cluster collaborative search method with zero interaction communication
CN116341605A (en) Grey wolf algorithm hybrid optimization method based on reverse learning strategy
CN111832911A (en) Underwater combat effectiveness evaluation method based on neural network algorithm
CN114690623A (en) Intelligent agent efficient global exploration method and system for rapid convergence of value function
CN114662638A (en) Mobile robot path planning method based on improved artificial bee colony algorithm
Wang et al. Unmanned ground weapon target assignment based on deep Q-learning network with an improved multi-objective artificial bee colony algorithm
CN115272774A (en) Sample attack resisting method and system based on improved self-adaptive differential evolution algorithm
CN109886405A (en) It is a kind of inhibit noise based on artificial neural network structure's optimization method
Cao et al. Autonomous maneuver decision of UCAV air combat based on double deep Q network algorithm and stochastic game theory
CN116933948A (en) Prediction method and system based on improved seagull algorithm and back propagation neural network
CN110851911A (en) Terminal state calculation model training method, control sequence searching method and device
KR20080052940A (en) Method for controlling game character
CN116050515B (en) XGBoost-based parallel deduction multi-branch situation prediction method
CN116432539A (en) Time consistency collaborative guidance method, system, equipment and medium
CN116165886A (en) Multi-sensor intelligent cooperative control method, device, equipment and medium
Asaduzzaman et al. Faster training using fusion of activation functions for feed forward neural networks

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination