CN116993010B - Fixed wing unmanned aerial vehicle situation prediction method based on Bayesian neural network - Google Patents

Fixed wing unmanned aerial vehicle situation prediction method based on Bayesian neural network

Info

Publication number
CN116993010B
CN116993010B (application CN202310944321.6A)
Authority
CN
China
Prior art keywords
unmanned aerial
aerial vehicle
data
neural network
num
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310944321.6A
Other languages
Chinese (zh)
Other versions
CN116993010A (en)
Inventor
李富超
程赟
袁银龙
李俊红
华亮
Current Assignee
Nantong University
Original Assignee
Nantong University
Priority date
Filing date
Publication date
Application filed by Nantong University filed Critical Nantong University
Priority to CN202310944321.6A priority Critical patent/CN116993010B/en
Publication of CN116993010A publication Critical patent/CN116993010A/en
Application granted granted Critical
Publication of CN116993010B publication Critical patent/CN116993010B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/04 Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/047 Probabilistic or stochastic networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10 Services
    • G06Q50/26 Government or public services
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems


Abstract

The invention provides a situation prediction method of a fixed wing unmanned aerial vehicle based on a Bayesian neural network, belonging to the technical field of unmanned aerial vehicle situation prediction. The method addresses the technical problem that, in an unmanned aerial vehicle air combat environment, the own unmanned aerial vehicle cannot quantify the uncertainty of the future situation of the enemy unmanned aerial vehicle. The technical solution comprises the following steps: S1, establish a Bayesian network suitable for time series prediction; S2, acquire the latest situation information of the enemy unmanned aerial vehicle through the sensor system of the own unmanned aerial vehicle; S3, predict the situation of the enemy unmanned aerial vehicle at the next moment using the established Bayesian neural network. The beneficial effects of the invention are as follows: the prediction method enables the own unmanned aerial vehicle to predict the situation of the enemy unmanned aerial vehicle at the next moment, so that the own unmanned aerial vehicle can seize the battlefield initiative.

Description

Fixed wing unmanned aerial vehicle situation prediction method based on Bayesian neural network
Technical Field
The invention relates to the technical field of unmanned aerial vehicle situation prediction, in particular to a fixed wing unmanned aerial vehicle situation prediction method based on a Bayesian neural network.
Background
With the rapid development of unmanned aerial vehicle technology, fixed wing unmanned aerial vehicle air combat has become an important component of modern warfare. In a combat environment, obtaining the situation information of the opposing unmanned aerial vehicle through prediction is important for formulating tactical decisions and carrying out strikes. Predicting the situation information of the enemy unmanned aerial vehicle therefore requires not only high-precision sensors such as radar, infrared sensors and cameras, but also a mature and reliable prediction algorithm that completes the prediction using the data obtained from those sensors.
When an unmanned aerial vehicle faces a highly adversarial battlefield environment, the situation information of the enemy unmanned aerial vehicle must be analyzed, yet current neural network based predictions are often overconfident. For example, in the article Modeling Vehicle Interactions via Modified LSTM Models for Trajectory Prediction, an LSTM, a neural network dedicated to time series, is used to predict vehicle trajectories; it relies heavily on historical data, requires substantial computing resources and time for training and inference, and can only predict a single piece of situation information. The uncertainty of the enemy unmanned aerial vehicle situation predicted by an LSTM or another traditional neural network therefore cannot be objectively measured, overfitting occurs easily, and in a battlefield environment this can ultimately lead to irrecoverable consequences.
Disclosure of Invention
The invention aims to provide a situation prediction method of a fixed wing unmanned aerial vehicle based on a Bayesian neural network, which can quickly predict the situation information of the enemy unmanned aerial vehicle, gives the uncertainty of the prediction, and helps reduce overfitting of the neural network.
The invention is characterized in that: first, a Bayesian network suitable for time series prediction is established; then the latest situation information of the enemy unmanned aerial vehicle is acquired through the sensor system of the own unmanned aerial vehicle; finally, the situation of the enemy unmanned aerial vehicle at the next moment is predicted using the established Bayesian neural network.
The invention gives the own unmanned aerial vehicle the ability to predict the future situation information of the enemy unmanned aerial vehicle and provides the uncertainty of the prediction result.
The invention is realized by the following measures: a situation prediction method of a fixed wing unmanned aerial vehicle based on a Bayesian neural network comprises the following steps:
firstly, establishing a Bayesian network suitable for time series prediction, and saving the trained network parameters and structure so that real-time prediction can be performed once real-time situation information of the enemy unmanned aerial vehicle is collected later;
secondly, acquiring the latest situation information of the enemy unmanned aerial vehicle through the sensor system of the own unmanned aerial vehicle, and, after preprocessing, passing the enemy situation information to the Bayesian neural network so that the network can predict quickly;
and thirdly, predicting the situation of the enemy unmanned aerial vehicle at the next moment using the established Bayesian neural network, and finally transmitting the predicted enemy situation information back to the own unmanned aerial vehicle, so that the own unmanned aerial vehicle can occupy a favorable position in advance at the future moment.
Further, the first step includes the following steps:
1-1) collecting a sufficient amount of mutually independent, non-repeated enemy unmanned aerial vehicle situation information data, with data shape (tracks_num, time_long, features_num), where tracks_num is the number of collected enemy unmanned aerial vehicle situation tracks, time_long is the number of time steps in each collected piece of enemy situation information data, and features_num is the number of enemy situation features input into the network;
1-2) randomly selecting training data and test data for training the bayesian neural network from the data set collected in step 1-1) using a dual-loop function and a slicing operation;
1-2-1), selecting indexes idex for the first dimension tracks_num and the second dimension time_long of the data collected in step 1-1) using a double-loop function; the total number of selected indexes is batch_size ∈ (1000, tracks_num × time_long);
1-2-2), taking each index idex selected in step 1-2-1) as the starting point of a slice and selecting 6 consecutive time steps backwards as one data segment; batch_size segments are selected in total, each of length 6, and each data segment is denoted α;
1-2-3), the data selected in steps 1-2-1) and 1-2-2) are three-dimensional arrays in the shape of (batch_size, time_step, features_num), wherein batch_size=3000 is the total batch for training and testing, time_step=6 is the time step of single-segment unmanned plane situation information, and features_num represents the feature quantity of situation information of enemy unmanned plane;
1-2-4), after shuffling the collected data set, selecting 80% of the batch_size as the training set train_data, with shape (2400, 6, features_num), and the remaining 20% as the test set test_data, with shape (800, 6, features_num);
1-2-5), for each data segment α in the training set train_data and the test set test_data, the first 5 seconds are sliced out as data values and the 6th second is taken as the label, giving four data sets:
input data input_train for training, shape (2400, 5, features_num);
the label data set output_train corresponding to the training data set is in the shape of (2400, 1, features_num);
input data input_test for test, shape (800, 5, features_num);
the label data set output_test corresponding to the test data set has a shape of (800, 1, features_num).
1-3), the third dimension features_num of the data set obtained in step 1-2-3) is 6, denoted S_a = (x, y, z, v, ψ, γ), where:
x is the position coordinate of the unmanned aerial vehicle on the x axis, and its rate of change is:
dx/dt = v cosγ sinψ (1)
y is the position coordinate of the unmanned aerial vehicle on the y axis, and its rate of change is:
dy/dt = v cosγ cosψ (2)
z is the position coordinate of the unmanned aerial vehicle on the z axis, and its rate of change is:
dz/dt = v sinγ (3)
v is the speed of the unmanned aerial vehicle, and its rate of change is:
dv/dt = g(n_x - sinγ) (4)
ψ is the heading angle of the unmanned aerial vehicle, and its rate of change is:
dψ/dt = g n_z sinφ / (v cosγ) (5)
γ is the track angle of the unmanned aerial vehicle, and its rate of change is:
dγ/dt = g(n_z cosφ - cosγ) / v (6)
n_x is the tangential overload of the unmanned aerial vehicle, n_z is the normal overload of the unmanned aerial vehicle, φ is the roll angle of the unmanned aerial vehicle around the velocity vector, and g is the gravitational acceleration. The output feature size output_size is 6×1, indicating that the predicted data are the 6 kinematic features of the enemy unmanned aerial vehicle in the 6th second;
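The kinematic relations above can be collected into one function. This is a sketch under the assumption that the standard 3-DOF fixed-wing expressions apply for the ψ and γ rates (the text does not spell them out); the function name and argument order are illustrative.

```python
import math

def state_derivatives(v, psi, gamma, n_x, n_z, phi, g=9.81):
    """Time derivatives of the situation features S_a = (x, y, z, v, psi, gamma)
    for a fixed wing UAV.  n_x: tangential overload, n_z: normal overload,
    phi: roll angle about the velocity vector, g: gravitational acceleration."""
    dx = v * math.cos(gamma) * math.sin(psi)
    dy = v * math.cos(gamma) * math.cos(psi)
    dz = v * math.sin(gamma)
    dv = g * (n_x - math.sin(gamma))
    # Standard 3-DOF turn and climb rates (assumed, not quoted from the text):
    dpsi = g * n_z * math.sin(phi) / (v * math.cos(gamma))
    dgamma = g * (n_z * math.cos(phi) - math.cos(gamma)) / v
    return dx, dy, dz, dv, dpsi, dgamma

# Steady level flight (gamma = 0, wings level, n_z = 1, n_x = 0):
# the speed, heading and track-angle rates all vanish.
print(state_derivatives(200.0, 0.0, 0.0, 0.0, 1.0, 0.0)[3:])  # (0.0, 0.0, 0.0)
```

The level-flight check is a quick sanity test: with unit normal overload and no tangential overload the model predicts an unaccelerated straight line, as expected.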
1-4), establishing a neural network layer Linear_BBB with Bayesian characteristics, in which the weight w and bias b of each node are random variables sampled from a normal distribution with mean 0 and variance 1; in this layer, the weight mean w_mu and weight variance w_rho are matrices of shape (input_size, output_size), and the bias mean b_mu and bias variance b_rho are vectors of shape (output_size);
1-5), constructing a network MLP_BBB and creating instances of the Linear_BBB layer defined above within it. The Bayesian neural network uses the evidence lower bound (ELBO) from variational inference as its loss function; the forward propagation method of the Linear_BBB layer computes the log prior distribution log_prior, the log posterior distribution log_post and the log likelihood log_like, and the network loss function sample_elbo is defined as follows:
loss=log_post-log_prior-log_like (7)
in the loss function of the network, log_post - log_prior is the complexity cost and log_like is the error cost;
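A minimal sketch of such a Bayesian layer and the sampled ELBO loss of eq. (7). The class and function names mirror the patent's Linear_BBB and sample_elbo, but the shapes, the rho initialisation, and the fixed observation noise are assumptions, and the gradient-based training loop is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

def softplus(x):
    return np.log1p(np.exp(x))

def log_normal_pdf(x, mu, sigma):
    # Elementwise log density of N(mu, sigma^2).
    return -0.5 * np.log(2 * np.pi) - np.log(sigma) - 0.5 * ((x - mu) / sigma) ** 2

class LinearBBB:
    """Bayesian linear layer: each weight and bias is a random variable with
    a learned normal variational posterior; the prior is N(0, 1)."""

    def __init__(self, input_size, output_size):
        self.w_mu = np.zeros((output_size, input_size))
        self.w_rho = np.full((output_size, input_size), -3.0)  # sigma = softplus(rho)
        self.b_mu = np.zeros(output_size)
        self.b_rho = np.full(output_size, -3.0)

    def forward(self, x):
        w_sigma = softplus(self.w_rho)
        b_sigma = softplus(self.b_rho)
        # Reparameterised samples of the weights and biases.
        w = self.w_mu + w_sigma * rng.standard_normal(self.w_mu.shape)
        b = self.b_mu + b_sigma * rng.standard_normal(self.b_mu.shape)
        # Log prior of the samples under the standard normal N(0, 1).
        self.log_prior = log_normal_pdf(w, 0.0, 1.0).sum() + log_normal_pdf(b, 0.0, 1.0).sum()
        # Log variational posterior of the same samples.
        self.log_post = (log_normal_pdf(w, self.w_mu, w_sigma).sum()
                         + log_normal_pdf(b, self.b_mu, b_sigma).sum())
        return x @ w.T + b

def sample_elbo(layer, x, y, noise_sigma=1.0):
    """Single-sample estimate of the loss in eq. (7):
    loss = log_post - log_prior - log_like."""
    pred = layer.forward(x)
    log_like = log_normal_pdf(y, pred, noise_sigma).sum()
    return layer.log_post - layer.log_prior - log_like

layer = LinearBBB(input_size=30, output_size=6)  # 5 s x 6 features, flattened
x = rng.normal(size=(8, 30))                     # a batch of 8 flattened histories
y = rng.normal(size=(8, 6))                      # next-second labels
loss = sample_elbo(layer, x, y)
print(np.isfinite(loss))                         # True
```

In a full implementation the loss would be minimised over w_mu, w_rho, b_mu and b_rho by backpropagation through the reparameterised samples; this sketch only shows how the three terms of eq. (7) arise in a forward pass.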
1-6), training and testing on a training set by using the network, and storing the trained model structure and parameters after the test passes.
Further, in the second step, the situation information data s_b of the enemy unmanned aerial vehicle is perceived in real time through the sensor system of the own unmanned aerial vehicle. The data should be the latest 5 seconds of enemy situation information, i.e. s_b has shape (5, 6), where 5 is the number of seconds of recent situation information and 6 is the number of kinematic features of the enemy unmanned aerial vehicle per second; the perceived data is passed to the Bayesian neural network trained in step 1-6).
Further, the third step comprises the following steps:
3-1), based on the enemy situation information data s_b acquired by the own unmanned aerial vehicle, outputting the situation information data S of the enemy unmanned aerial vehicle in the next second using the Bayesian neural network built in step 1-6), where S = (x, y, z, v, ψ, γ);
3-2), transmitting the predicted data to the own unmanned aerial vehicle, which uses this situation information to seize an advantageous position in air combat.
Compared with the prior art, the invention has the beneficial effects that:
(1) The invention introduces uncertainty modeling: in unmanned aerial vehicle situation prediction, factors such as environmental change and sensor noise cause uncertainty, and the Bayesian neural network can quantify the uncertainty of the predicted result, which a traditional neural network cannot. The Bayesian neural network provides more accurate predictions and gives a probability distribution for each prediction, so that the reliability of the prediction is better quantified.
(2) The method performs better when data are scarce: unmanned aerial vehicle situation prediction often faces the challenge of insufficient data, under which a traditional neural network is prone to overfitting. By placing a prior probability over the network parameters, the Bayesian neural network alleviates the overfitting problem, making the model more stable on small data sets.
(3) The method gives the confidence and probability distribution of each predicted value in enemy unmanned aerial vehicle situation prediction. Interpretability is very important in an unmanned aerial vehicle decision system, and compared with a traditional neural network, the prediction results of a Bayesian neural network are easier to interpret, so that a decision maker can better understand the prediction logic of the model.
(4) The Bayesian neural network in the invention provides the probability distribution of its parameters, which can be used for clustering and anomaly detection tasks; this is very effective for identifying anomalies in enemy unmanned aerial vehicle behavior.
(5) The Bayesian neural network can manage the hyperparameters involved in the neural network more effectively and explore the parameter space better, so that statistical information about the hyperparameters can be shared among different tasks, improving the efficiency and stability of the model.
Drawings
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate the invention and together with the embodiments of the invention, serve to explain the invention.
Fig. 1 is a general flow chart of the Bayesian neural network based fixed wing unmanned aerial vehicle situation prediction method.
Fig. 2 is a flow chart of post-processing of collected situation data of an enemy fixed wing unmanned aerial vehicle.
Fig. 3 is a network training flow chart of the bayesian neural network-based fixed wing unmanned aerial vehicle situation prediction method.
Fig. 4 is a change curve of a loss function during network training and testing of the bayesian neural network-based fixed wing unmanned aerial vehicle situation prediction method.
Fig. 5 is a change curve of complexity cost and error cost in network training of the bayesian neural network-based fixed wing unmanned aerial vehicle situation prediction method.
Fig. 6 is a graph comparing a predicted track and an actual track obtained by the situation prediction method of the fixed wing unmanned aerial vehicle based on the Bayesian neural network.
Fig. 7 is a graph comparing the predicted track and the actual track obtained after replacing the Bayesian neural network in the fixed wing unmanned aerial vehicle situation prediction method with an MLP network.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. Of course, the specific embodiments described herein are for purposes of illustration only and are not intended to limit the invention.
Example 1
The invention provides a situation prediction method of a fixed wing unmanned aerial vehicle based on a Bayesian neural network, which comprises the following steps:
step 1), establishing a Bayesian network suitable for time sequence prediction, and storing trained network parameters and structures;
step 2) acquiring the latest situation information of the enemy unmanned aerial vehicle through a sensor system of the unmanned aerial vehicle, and transmitting the situation information of the enemy to a Bayesian neural network after finishing;
and 3) predicting the situation of the enemy unmanned aerial vehicle at the next moment by using the established Bayesian neural network, and finally transmitting the predicted situation information of the enemy unmanned aerial vehicle back to the unmanned aerial vehicle.
Step 1), establishing a Bayesian network suitable for time sequence prediction, and storing trained network parameters and structures, wherein the specific steps are as follows:
1-1), collecting a sufficient amount of mutually independent, non-repeated enemy unmanned aerial vehicle situation information data; the processing flow applied to the collected enemy situation data is shown in fig. 2. The data shape is (tracks_num, time_long, features_num), where tracks_num is the number of collected enemy situation tracks, time_long is the number of time steps in each collected piece of enemy situation information data, and features_num is the number of enemy situation features input into the network.
1-2) randomly selecting training data and test data for training the bayesian neural network from the data set collected in step 1-1) using a dual-loop function and a slicing operation;
1-2-1), selecting indexes idex for the first dimension tracks_num and the second dimension time_long of the data collected in step 1-1) using a double-loop function; the total number of selected indexes is batch_size = 3000;
1-2-2), taking each index idex selected in step 1-2-1) as the starting point of a slice and selecting 6 consecutive time steps backwards as one data segment; batch_size segments are selected in total, each of length 6, and each data segment is denoted α;
1-2-3), the data selected in steps 1-2-1) and 1-2-2) are three-dimensional arrays in the shape of (batch_size, time_step, features_num), wherein batch_size=3000 is the total batch for training and testing, time_step=6 is the time step of single-segment unmanned plane situation information, and features_num represents the feature quantity of situation information of enemy unmanned plane;
1-2-4), after shuffling the collected data set, selecting 80% of the batch_size as the training set train_data, with shape (2400, 6, features_num), and the remaining 20% as the test set test_data, with shape (800, 6, features_num);
1-2-5), for each data segment α in the training set train_data and the test set test_data, the first 5 seconds are sliced out as data values and the 6th second is taken as the label, giving four data sets:
input data input_train for training, shape (2400, 5, features_num);
the label data set output_train corresponding to the training data set is in the shape of (2400, 1, features_num);
input data input_test for test, shape (800, 5, features_num);
the label data set output_test corresponding to the test data set has a shape of (800, 1, features_num).
1-3), the third dimension features_num of the data set obtained in step 1-2-3) is 6, denoted S_a = (x, y, z, v, ψ, γ), where:
x is the position coordinate of the unmanned aerial vehicle on the x axis, and its rate of change is:
dx/dt = v cosγ sinψ (1)
y is the position coordinate of the unmanned aerial vehicle on the y axis, and its rate of change is:
dy/dt = v cosγ cosψ (2)
z is the position coordinate of the unmanned aerial vehicle on the z axis, and its rate of change is:
dz/dt = v sinγ (3)
v is the speed of the unmanned aerial vehicle, and its rate of change is:
dv/dt = g(n_x - sinγ) (4)
ψ is the heading angle of the unmanned aerial vehicle, and its rate of change is:
dψ/dt = g n_z sinφ / (v cosγ) (5)
γ is the track angle of the unmanned aerial vehicle, and its rate of change is:
dγ/dt = g(n_z cosφ - cosγ) / v (6)
n_x is the tangential overload of the unmanned aerial vehicle, n_z is the normal overload of the unmanned aerial vehicle, φ is the roll angle of the unmanned aerial vehicle around the velocity vector, and g is the gravitational acceleration. The output feature size output_size is 6×1, indicating that the predicted data are the 6 kinematic features of the enemy unmanned aerial vehicle in the 6th second;
1-4), establishing a neural network layer Linear_BBB with Bayesian characteristics, in which the weight w and bias b of each node are random variables sampled from a normal distribution with mean 0 and variance 1; in this layer, the weight mean w_mu and weight variance w_rho are matrices of shape (input_size, output_size), and the bias mean b_mu and bias variance b_rho are vectors of shape (output_size);
1-5), constructing a network MLP_BBB and creating instances of the Linear_BBB layer defined above within it. The Bayesian neural network uses the evidence lower bound (ELBO) from variational inference as its loss function; the forward propagation method of the Linear_BBB layer computes the log prior distribution log_prior, the log posterior distribution log_post and the log likelihood log_like, and the network loss function sample_elbo is defined as follows:
loss=log_post-log_prior-log_like (7)
in the loss function of the network, log_post - log_prior is the complexity cost and log_like is the error cost;
1-6), training the constructed Bayesian neural network with the collected enemy situation information data. The number of training epochs epoch_num is set before training, the network is trained according to the flow shown in fig. 3, and the network parameters and structure are saved after training. The change curve of the loss function during training and testing is shown in fig. 4, and the change curves of the complexity cost and the error cost of the loss function during training are shown in fig. 5.
Step 2), perceiving the situation information data s_b of the enemy unmanned aerial vehicle in real time through the sensor system of the own unmanned aerial vehicle. The data should be the latest 5 seconds of enemy situation information, i.e. s_b has shape (5, 6), where the first dimension, 5, represents the situation information of the last five seconds and the second dimension, 6, represents the 6 kinematic features of the enemy unmanned aerial vehicle per second; the perceived data is passed to the Bayesian neural network trained in step 1-6).
Step 3),
3-1), based on the enemy situation information data s_b acquired by the own unmanned aerial vehicle, outputting the situation information data S of the enemy unmanned aerial vehicle in the next second using the Bayesian neural network built in step 1-6), where S = (x, y, z, v, ψ, γ). Splicing the predicted enemy track data yields the track diagram shown in fig. 6, in which the thinner curve is the track of the enemy fixed wing unmanned aerial vehicle predicted by the Bayesian neural network and the thicker curve is the actual track of the enemy fixed wing unmanned aerial vehicle;
3-2), transmitting the predicted data to the own unmanned aerial vehicle, which uses this situation information to seize an advantageous position in air combat.
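Because a Bayesian network samples fresh weights on every forward pass, repeating the pass over the same 5-second history yields a predictive distribution rather than a single point. A minimal sketch of that Monte Carlo readout, where a toy linear map and made-up posterior parameters (w_mu, w_sigma) stand in for the trained network:

```python
import numpy as np

rng = np.random.default_rng(2)

def predict_once(s_b, w_mu, w_sigma):
    """One stochastic forward pass: a fresh weight sample per call is what
    gives the Bayesian network its predictive uncertainty."""
    w = w_mu + w_sigma * rng.standard_normal(w_mu.shape)
    return s_b.reshape(-1) @ w  # flatten the (5, 6) history -> 6 outputs

s_b = rng.normal(size=(5, 6))              # last 5 s of enemy situation data
w_mu = rng.normal(scale=0.1, size=(30, 6)) # placeholder posterior mean
w_sigma = np.full((30, 6), 0.05)           # placeholder posterior std

samples = np.stack([predict_once(s_b, w_mu, w_sigma) for _ in range(200)])
mean_pred = samples.mean(axis=0)  # point prediction S for the 6th second
std_pred = samples.std(axis=0)    # per-feature predictive uncertainty
print(mean_pred.shape, std_pred.shape)  # (6,) (6,)
```

The per-feature standard deviation is the uncertainty measure the invention highlights: a large std_pred flags a prediction the own unmanned aerial vehicle should treat with caution.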
Example 2
Referring to fig. 7, this comparative example provides a fixed wing unmanned aerial vehicle situation prediction method based on an MLP neural network, with the following specific steps:
step 1) establishing an MLP neural network suitable for time sequence prediction, and storing network parameters and structures.
Step 2) acquiring the latest situation information of the enemy unmanned aerial vehicle through the sensor system of the own unmanned aerial vehicle, and, after preprocessing, passing the enemy situation information to the MLP neural network;
and step 3) predicting the situation of the enemy unmanned aerial vehicle at the next moment using the established MLP neural network, and finally transmitting the predicted enemy situation information back to the own unmanned aerial vehicle.
Step 1), establishing an MLP neural network suitable for time series prediction and saving the trained network parameters and structure, with the following specific steps:
steps 1-1) to 1-3) are the same as those of example 1 described above.
1-4) establishing a 3-layer fully connected neural network MLP, using the mean square error (MSE) as the loss function, computing the loss in every training iteration, and saving the parameters and structure of the MLP neural network after training.
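A minimal sketch of this deterministic baseline; the layer sizes, the random initialisation, the ReLU activation and the absence of biases are assumptions, and the training loop is omitted:

```python
import numpy as np

rng = np.random.default_rng(3)

def mse(pred, target):
    # Mean square error loss used to train the baseline.
    return np.mean((pred - target) ** 2)

class MLP:
    """Deterministic 3-layer baseline: 30 inputs (5 s x 6 features,
    flattened), one hidden layer, 6 outputs. Sizes are assumptions."""

    def __init__(self, sizes=(30, 64, 6)):
        self.w1 = rng.normal(scale=0.1, size=(sizes[0], sizes[1]))
        self.w2 = rng.normal(scale=0.1, size=(sizes[1], sizes[2]))

    def forward(self, x):
        h = np.maximum(x @ self.w1, 0.0)  # ReLU hidden layer
        return h @ self.w2                # linear output layer

x = rng.normal(size=(4, 30))  # 4 flattened 5-second histories
y = rng.normal(size=(4, 6))   # next-second labels
model = MLP()
pred = model.forward(x)
print(pred.shape, mse(pred, y) >= 0.0)  # (4, 6) True
```

Unlike the Bayesian layer of embodiment 1, this network has fixed point-estimate weights, so repeated forward passes give identical outputs and no uncertainty estimate; that difference is exactly what the comparison in figs. 6 and 7 illustrates.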
Step 2) and step 3) are the same as in embodiment 1 above; a comparison of the trajectory of the unmanned aerial vehicle predicted with the MLP network and the actual trajectory is shown in fig. 7.
Comparing fig. 6 with fig. 7, the Bayesian neural network of the invention used in fig. 6 obtains a more accurate prediction, closer to the actual enemy unmanned aerial vehicle track. This shows that the proposed Bayesian neural network has stronger generalization ability when training data are limited.
The foregoing describes preferred embodiments of the invention and is not intended to limit the invention to the precise forms disclosed; any modifications, equivalents and alternatives falling within the spirit and scope of the invention are intended to be included within its scope.

Claims (3)

1. A situation prediction method of a fixed wing unmanned aerial vehicle based on a Bayesian neural network is characterized by comprising the following steps:
step one, establishing a Bayesian network suitable for time sequence prediction;
the first step comprises the following steps:
s11: collecting situation information data of sufficient and mutually independent non-repeated enemy unmanned aerial vehicles, wherein the data are in the shape of (packages_num, time_long, features_num), the packages_num represents the collected situation information quantity of the enemy unmanned aerial vehicles, the time_long represents the time step of each piece of collected situation information data of the enemy unmanned aerial vehicles, and the features_num represents the situation information feature quantity of the enemy unmanned aerial vehicles input into the network;
s12: randomly selecting training data and test data for training the Bayesian neural network from the data set collected in the step S11 by using a double-loop function and a slicing operation;
1) Using a double-loop function, selecting indexes idex over the first dimension tracks_num and the second dimension time_long of the data collected in step S11, with the total number selected being batch_size ∈ (1000, tracks_num × time_long);
2) Taking each index idex selected in step 1) of S12 as the start of a slice, selecting the following time_step consecutive time steps as one data segment, giving batch_size data segments in total, each of length time_step; each data segment is denoted α;
3) The data selected in steps 1) and 2) form a three-dimensional array of shape (batch_size, time_step, features_num), where batch_size is the total number of training and test segments, time_step is the number of time steps in a single situation segment, and features_num is the number of enemy situation features;
4) After shuffling the collected data set, randomly selecting 70%-80% of it as the training set train_data, of shape (train_data, time_step, features_num), with the remainder forming the test set test_data, of shape (batch_size - train_data, time_step, features_num);
5) For each data segment α in the training set train_data and the test set test_data, slicing out the first input_step seconds as the input values and using the remainder of the segment as the label, obtaining the following data sets:
input data input_train for training, of shape (train_data, input_step, features_num);
labels output_train corresponding to the training set, of shape (train_data, output_step, features_num);
input data input_test for testing, of shape (test_data, input_step, features_num);
labels output_test corresponding to the test set, of shape (test_data, output_step, features_num);
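The windowing and splitting in steps 1)-5) can be sketched as follows; all concrete sizes (tracks_num, time_long, time_step, input_step, and the 75% split ratio) are assumed values for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed sizes for illustration.
tracks_num, time_long, features_num = 20, 100, 6
time_step, input_step = 10, 7
output_step = time_step - input_step

raw = rng.normal(size=(tracks_num, time_long, features_num))

# Double loop over track index and valid start index; each slice is one
# data segment alpha of time_step consecutive time steps.
segments = []
for t in range(tracks_num):
    for start in range(time_long - time_step + 1):
        segments.append(raw[t, start:start + time_step, :])
segments = np.stack(segments)          # (batch_size, time_step, features_num)

# Shuffle, then split into training and test sets (75% is an assumption
# inside the 70%-80% range given in the claim).
rng.shuffle(segments)
n_train = int(0.75 * len(segments))
train_data, test_data = segments[:n_train], segments[n_train:]

# First input_step seconds are the input; the remainder is the label.
input_train, output_train = train_data[:, :input_step], train_data[:, input_step:]
input_test, output_test = test_data[:, :input_step], test_data[:, input_step:]
```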
s13: the third dimension of the data set obtained in step 3) of S12 is the situation feature vector, denoted s_a = (x, y, z, v, ψ, γ), wherein:
x is the position coordinate of the unmanned aerial vehicle on the x axis, whose rate of change is:
dx/dt = v·cosγ·sinψ (1)
y is the position coordinate of the unmanned aerial vehicle on the y axis, whose rate of change is:
dy/dt = v·cosγ·cosψ (2)
z is the position coordinate of the unmanned aerial vehicle on the z axis, whose rate of change is:
dz/dt = v·sinγ (3)
v is the speed of the unmanned aerial vehicle, whose rate of change is:
dv/dt = g·(n_x - sinγ) (4)
ψ is the heading angle of the unmanned aerial vehicle, whose rate of change is:
dψ/dt = g·n_z·sinφ/(v·cosγ) (5)
γ is the track angle of the unmanned aerial vehicle, whose rate of change is:
dγ/dt = g·(n_z·cosφ - cosγ)/v (6)
where n_x is the tangential overload of the unmanned aerial vehicle, n_z is the normal overload of the unmanned aerial vehicle, φ is the roll angle of the unmanned aerial vehicle about the velocity vector, and g is the gravitational acceleration; the output feature size output_size is features_num × output_step, and the predicted data are the kinematic features of the enemy unmanned aerial vehicle at each step within output_step;
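The point-mass model of equations (1)-(6) can be sketched with a single Euler integration step; this is the standard 3-DOF fixed-wing form, and where the patent text is incomplete the ψ-rate and γ-rate expressions are reconstructed conventional forms, i.e. assumptions. The time step `dt` and the initial state are also illustrative:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

# One Euler step of the 3-DOF point-mass kinematics.
# state = (x, y, z, v, psi, gamma); n_x tangential overload,
# n_z normal overload, phi roll angle about the velocity vector.
def step(state, n_x, n_z, phi, dt=0.1):
    x, y, z, v, psi, gamma = state
    x_dot = v * math.cos(gamma) * math.sin(psi)            # eq. (1)
    y_dot = v * math.cos(gamma) * math.cos(psi)            # eq. (2)
    z_dot = v * math.sin(gamma)                            # eq. (3)
    v_dot = G * (n_x - math.sin(gamma))                    # eq. (4)
    psi_dot = G * n_z * math.sin(phi) / (v * math.cos(gamma))      # eq. (5)
    gamma_dot = G * (n_z * math.cos(phi) - math.cos(gamma)) / v    # eq. (6)
    return (x + x_dot * dt, y + y_dot * dt, z + z_dot * dt,
            v + v_dot * dt, psi + psi_dot * dt, gamma + gamma_dot * dt)

# Level flight north at 100 m/s: with n_x=0, n_z=1, phi=0 the speed and
# angles stay constant and only y advances.
state = (0.0, 0.0, 1000.0, 100.0, 0.0, 0.0)
state = step(state, n_x=0.0, n_z=1.0, phi=0.0)
```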
s14, establishing a neural network layer Linear_BBB with Bayesian properties, in which the weight w and the bias b of each node are random variables sampled from a normal distribution with mean μ and variance ρ; within the layer, the weight mean w_mu and weight variance w_rho are matrices of shape (input_size, output_size), and the bias mean b_mu and bias variance b_rho are matrices of shape (output_size);
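A Linear_BBB-style layer can be sketched with the reparameterization trick: each forward pass draws fresh weights and biases from the learned Gaussians. The softplus mapping from rho to a positive standard deviation and the initialization values are common choices and assumptions here, not details given in the claim:

```python
import numpy as np

rng = np.random.default_rng(2)

class LinearBBB:
    """Linear layer whose weights and biases are Gaussian random variables."""
    def __init__(self, input_size, output_size):
        self.w_mu = rng.normal(0.0, 0.1, (input_size, output_size))
        self.w_rho = np.full((input_size, output_size), -3.0)
        self.b_mu = np.zeros(output_size)
        self.b_rho = np.full(output_size, -3.0)

    def forward(self, x):
        # softplus keeps the standard deviation positive (assumed mapping).
        w_sigma = np.log1p(np.exp(self.w_rho))
        b_sigma = np.log1p(np.exp(self.b_rho))
        # Reparameterization: sample = mu + sigma * epsilon.
        w = self.w_mu + w_sigma * rng.standard_normal(self.w_mu.shape)
        b = self.b_mu + b_sigma * rng.standard_normal(self.b_mu.shape)
        return x @ w + b

layer = LinearBBB(6, 4)
out = layer.forward(rng.normal(size=(5, 6)))   # stochastic: varies per call
```

Because the parameters are sampled anew on every call, repeated forward passes on the same input yield different outputs, which is what allows the network to express predictive uncertainty.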
s15: constructing a network MLP_BBB and creating within it instances of the Linear_BBB layer defined above; the Bayesian neural network uses the evidence lower bound ELBO (Evidence Lower Bound) from variational inference as its loss function, stipulates that the log prior distribution log_prior, the log posterior distribution log_post, and the log likelihood log_like are computed in the Linear_BBB layers in the forward propagation method, and defines the calculation formula of the network loss function sample_elbo as:
loss = log_post - log_prior - log_like (7)
in this loss function, log_post - log_prior is the complexity cost and log_like is the error cost;
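The loss of equation (7) can be illustrated with Gaussian log-densities over one sampled weight vector; the N(0, 1) prior, the variational posterior parameters, and the unit-variance likelihood below are all illustrative assumptions:

```python
import numpy as np

# Sum of log-densities of x under an isotropic Gaussian N(mu, sigma^2).
def log_gaussian(x, mu, sigma):
    return np.sum(-0.5 * np.log(2.0 * np.pi * sigma**2)
                  - (x - mu)**2 / (2.0 * sigma**2))

rng = np.random.default_rng(3)
w = rng.normal(size=10)                      # one sampled weight vector
pred, target = rng.normal(size=5), rng.normal(size=5)

log_prior = log_gaussian(w, 0.0, 1.0)        # prior p(w), assumed N(0, 1)
log_post = log_gaussian(w, 0.1, 0.9)         # variational posterior q(w)
log_like = log_gaussian(target, pred, 1.0)   # Gaussian likelihood p(D|w)

# Equation (7): complexity cost (log_post - log_prior) plus error cost.
loss = log_post - log_prior - log_like
```

In practice these three terms are accumulated across all Linear_BBB layers during one (or several) Monte Carlo forward passes before the loss is formed.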
s16, training the network on the training set and saving the trained model structure and parameters;
step two, acquiring the latest situation information of the enemy unmanned aerial vehicle through a sensor system of the unmanned aerial vehicle;
step three, predicting the situation of the enemy unmanned aerial vehicle at the next moment by using the established Bayesian neural network.
2. The Bayesian neural network-based fixed wing unmanned aerial vehicle situation prediction method according to claim 1, wherein in step two, the situation information data s_b of the enemy unmanned aerial vehicle is perceived in real time through the sensor system of the own unmanned aerial vehicle, and the perceived data are transmitted to the Bayesian neural network trained in step S16.
3. The bayesian neural network-based fixed wing unmanned aerial vehicle situation prediction method according to claim 1, wherein the third step comprises the following steps:
s31, according to the enemy situation information data s_b acquired by the own unmanned aerial vehicle, outputting the situation information data s_a of the enemy unmanned aerial vehicle in the next second using the neural network built in step S14, where s_a = (x, y, z, v, ψ, γ);
s32: transmitting the predicted data back to the own unmanned aerial vehicle, which uses this situation information to gain an advantage in air combat.
CN202310944321.6A 2023-07-28 2023-07-28 Fixed wing unmanned aerial vehicle situation prediction method based on Bayesian neural network Active CN116993010B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310944321.6A CN116993010B (en) 2023-07-28 2023-07-28 Fixed wing unmanned aerial vehicle situation prediction method based on Bayesian neural network


Publications (2)

Publication Number Publication Date
CN116993010A CN116993010A (en) 2023-11-03
CN116993010B true CN116993010B (en) 2024-02-06

Family

ID=88520902


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112305530A (en) * 2020-11-02 2021-02-02 上海神添实业有限公司 Target detection method for unmanned aerial vehicle group, electronic equipment and storage medium
CN112966773A (en) * 2021-03-24 2021-06-15 山西大学 Unmanned aerial vehicle flight condition mode identification method and system
CN115993835A (en) * 2022-12-27 2023-04-21 西北工业大学 Target maneuver intention prediction-based short-distance air combat maneuver decision method and system
CN116187169A (en) * 2022-12-30 2023-05-30 中国人民解放军国防科技大学 Unmanned aerial vehicle cluster intention inference algorithm and system based on dynamic Bayesian network
CN116432514A (en) * 2023-02-21 2023-07-14 天津大学 Interception intention recognition strategy simulation system and method for unmanned aerial vehicle attack and defense game
CN116467950A (en) * 2023-04-24 2023-07-21 哈尔滨工业大学 Unmanned aerial vehicle flight data anomaly detection method based on uncertain characterization

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11074495B2 (en) * 2013-02-28 2021-07-27 Z Advanced Computing, Inc. (Zac) System and method for extremely efficient image and pattern recognition and artificial intelligence platform
US11448774B2 (en) * 2018-08-16 2022-09-20 Movano Inc. Bayesian geolocation and parameter estimation by retaining channel and state information
EP3722894B1 (en) * 2019-04-09 2022-08-10 Robert Bosch GmbH Control and monitoring of physical system based on trained bayesian neural network
EP3767541A1 (en) * 2019-07-17 2021-01-20 Robert Bosch GmbH A machine learnable system with conditional normalizing flow
CN113095481B (en) * 2021-04-03 2024-02-02 西北工业大学 Air combat maneuver method based on parallel self-game


Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Target Threat Assessment in Air Combat with BP Neural Network for UAV; Lei Sheng et al.; Journal of Physics: Conference Series; full text *
UCAV escape maneuver decision based on a maneuver action library; Tan Mulai et al.; Unmanned Systems Technology (No. 04); full text *
Autonomous air combat maneuver decision-making of unmanned combat aerial vehicles based on deep neural networks; Zhang Hongpeng et al.; Acta Armamentarii (No. 08); full text *
UAV air combat situation assessment based on hybrid dynamic Bayesian networks; Meng Guanglei et al.; Command Control & Simulation (No. 04); full text *
Application of the improved BAS-TIMS algorithm to air combat maneuver decision-making; Ji Huiming et al.; Journal of National University of Defense Technology (No. 04); full text *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant