CN117540626B - Fixed wing unmanned aerial vehicle situation prediction method based on Bayesian neural network - Google Patents
- Publication number
- CN117540626B CN117540626B CN202311428417.3A CN202311428417A CN117540626B CN 117540626 B CN117540626 B CN 117540626B CN 202311428417 A CN202311428417 A CN 202311428417A CN 117540626 B CN117540626 B CN 117540626B
- Authority
- CN
- China
- Prior art keywords
- unmanned aerial
- aerial vehicle
- enemy
- data
- situation information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F30/00—Computer-aided design [CAD]
- G06F30/20—Design optimisation, verification or simulation
- G06F30/27—Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/047—Probabilistic or stochastic networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Abstract
The invention provides a situation prediction method for a fixed-wing unmanned aerial vehicle based on a Bayesian neural network, belonging to the technical field of unmanned aerial vehicle situation prediction; the method solves the technical problem that our unmanned aerial vehicle cannot predict the future situation of an enemy unmanned aerial vehicle in an uncertain environment. The technical scheme is as follows: a Bayesian network suitable for time-series prediction is established and limited situation information of the enemy unmanned aerial vehicle is collected; the situation information of the enemy unmanned aerial vehicle is used as input, and the established Bayesian neural network predicts the situation of the enemy unmanned aerial vehicle at the next moment; the single-moment predicted value is then fed back in as input to predict again, forming the situation information of the enemy unmanned aerial vehicle over a future time period. The beneficial effects of the invention are as follows: the situation of the enemy unmanned aerial vehicle over a period of time can be predicted from limited situation information in a battlefield environment, so that our unmanned aerial vehicle can seize the battlefield initiative, its combat capability is improved, and the combat loss ratio of our own unmanned aerial vehicles is reduced.
Description
Technical Field
The invention relates to the technical field of unmanned aerial vehicle situation prediction, in particular to a fixed wing unmanned aerial vehicle situation prediction method based on a Bayesian neural network.
Background
In recent years, fixed-wing unmanned aerial vehicles have played a key role in military operations thanks to advantages such as long endurance, high-speed flight and strong load capacity, and can perform a variety of tasks such as reconnaissance, strike and air countermeasures. In a combat environment, predicting the situation information of the opposing unmanned aerial vehicle is important for formulating tactical decisions and carrying out strike actions. Predicting the situation information of an enemy unmanned aerial vehicle therefore requires not only high-precision sensors such as radar, infrared sensors and cameras, but also a mature and reliable prediction algorithm that completes the prediction using the data obtained from those sensors.
In an air combat environment, reducing the combat loss ratio of our unmanned aerial vehicles often depends on a highly reliable algorithm that helps the unmanned aerial vehicle predict the future situation information of the enemy unmanned aerial vehicle. Some conventional algorithms are overconfident in their predictions and cannot evaluate prediction uncertainty, and in military operations these defects can have irrecoverable consequences. For example, in the paper "Task offloading scheme combining deep reinforcement learning and convolutional neural networks for vehicle trajectory prediction in smart cities", the convolutional neural network used for time-series vehicle trajectory prediction typically requires a large amount of data, and as a black-box model it lacks interpretability, so its reliability in military operations is limited. The present invention aims to solve the problem of providing reliable predictions of the enemy unmanned aerial vehicle from limited data in the uncertain environment of a battlefield.
Disclosure of Invention
The invention aims to provide a situation prediction method for a fixed-wing unmanned aerial vehicle based on a Bayesian neural network, which can quickly predict the situation information of an enemy unmanned aerial vehicle, gives the uncertainty of the prediction, and helps reduce overfitting of the neural network.
The invention is characterized in that: first, a Bayesian network suitable for time-series prediction is established; the latest limited situation information of the enemy unmanned aerial vehicle is acquired through the sensor system of our unmanned aerial vehicle; the established Bayesian neural network predicts the situation of the enemy unmanned aerial vehicle at the next moment; the data obtained from the network prediction are fed back into the network as input, and through multiple cycles the situation information of the enemy unmanned aerial vehicle over a future time period is obtained.
The invention gives our unmanned aerial vehicle the ability to predict the future situation information of the enemy unmanned aerial vehicle, together with the uncertainty of the prediction result.
The invention is realized by the following measures: a situation prediction method of a fixed wing unmanned aerial vehicle based on a Bayesian neural network comprises the following steps:
S1, establishing a Bayesian network suitable for time-series prediction and saving the trained network parameters and structure, so that real-time prediction can be performed once real-time situation information of the enemy unmanned aerial vehicle is collected later;
s2, acquiring the latest limited situation information of the enemy unmanned aerial vehicle through a sensor system of the unmanned aerial vehicle, and transmitting the situation information of the enemy to a Bayesian neural network after finishing, so that the neural network can predict quickly;
S3, taking situation information of the enemy unmanned aerial vehicle as input, and predicting the situation of the enemy unmanned aerial vehicle at the next moment by using an established Bayesian neural network;
S4, predicting again with the single-moment predicted value obtained from the network as input, splicing the predicted results with the original data segment to form the situation data of the enemy unmanned aerial vehicle over a future time period, and finally transmitting the predicted enemy situation information back to our unmanned aerial vehicle, so that our unmanned aerial vehicle can occupy a favorable position in advance during that future time period.
Further, the first step includes the following steps:
1-1), collecting sufficient situation information data of the enemy unmanned aerial vehicle and arranging the collected data into a three-dimensional array to form a database, wherein the first dimension is the number of collected pieces of enemy situation information data, the second dimension is the time step of each collected piece, and the third dimension is the number of situation information features input to the neural network;
1-2) randomly selecting training data and test data for training the Bayesian neural network from the data set collected in step 1-1), using a double-loop function and a slicing operation;
1-2-1), selecting an index for the first dimension and the second dimension of the collected data using a double loop;
1-2-2), taking the selected index as the starting end of a slice and slicing backwards, and selecting, through the loop operation, a sufficient number of data segments of equal length;
1-2-3), arranging the data segments collected by slicing into a three-dimensional array as the data set, wherein the first dimension represents the total number of collected data segments, the second dimension represents the time step of each data segment, and the third dimension represents the number of situation information features of the enemy unmanned aerial vehicle;
1-2-4), after shuffling the collected data set, selecting 70%-80% of its first dimension as the training set and the rest as the test set;
1-2-5), for each data segment of the training set and the test set, the first half of the second dimension is cut out with a slicing operation as the data values and the remaining part is used as the label; four sub-data sets are thereby obtained, namely an input data set for training, the label data set corresponding to the training set, an input data set for testing, and the label data set corresponding to the test set. The third dimension of the four sub-data sets contains 6 elements, the per-second situation feature information of the unmanned aerial vehicle, recorded as s = (Δx, Δy, Δz, Δv, Δψ, Δθ);
1-3), the third dimension of the data set obtained in step 1-2-3) therefore has features_num = 6 features, namely the position increments of the unmanned aerial vehicle per unit time in the x, y and z directions, the speed increment per unit time, and the yaw-angle and pitch-angle increments per unit time. In the kinematic relations that produce these increments, n_x is the tangential overload of the unmanned aerial vehicle, n_z is the normal overload, φ is the roll angle about the velocity vector, and g is the gravitational acceleration. The output feature size output_size is 6×1, indicating that the predicted data are the 6 kinematic features of the enemy unmanned aerial vehicle at the 6th second;
1-4), establishing a neural network layer with Bayesian characteristics, in which the weight and bias of each node are defined as random variables obtained by sampling from a normal distribution; the mean and variance of the weights in the layer are two-dimensional matrices whose first dimension is the number of input features and whose second dimension is the number of output features, and the mean and variance of the bias are one-dimensional arrays whose length is the number of output features;
1-5), constructing a Bayesian neural network from the defined layer with Bayesian characteristics; the Bayesian neural network uses the evidence lower bound from variational inference as its loss function, and it is specified that the log prior distribution, the log posterior distribution and the log likelihood of the layer with Bayesian characteristics are calculated in the forward propagation method, the network loss function being defined as the log posterior distribution minus the log prior distribution minus the log likelihood;
1-6), training the network on the training set, and saving the trained network model structure and parameters.
In the second step, situation information data of the enemy unmanned aerial vehicle is used as input, and the situation information of the enemy unmanned aerial vehicle at the next moment is predicted through a network.
Further, the second step comprises the following steps:
2-1) Arranging the situation information of the enemy unmanned aerial vehicle collected in real time by our unmanned aerial vehicle into an array conforming to the network input format, recorded as S b;
2-2) inputting S b into the trained Bayesian neural network; the network output gives the situation information of the enemy unmanned aerial vehicle at the next moment.
Further, in the third step, the single-moment predicted value is used as input to predict again, forming the situation information of the enemy unmanned aerial vehicle over a future time period.
Further, the third step comprises the following steps:
3-1) arranging the single-moment situation information of the enemy unmanned aerial vehicle into a one-dimensional array S h whose length equals the second (feature) dimension of the two-dimensional array S b;
3-2) splicing the one-dimensional array S T0 onto the end of the two-dimensional array S b along the first dimension to form a two-dimensional array S X, and cutting off the starting end of S X along the first dimension to form a new two-dimensional array S T1;
3-3) inputting the obtained two-dimensional array S T1 into the Bayesian neural network in a loop to obtain the situation information of the enemy unmanned aerial vehicle at the next moment, and repeating the operation of step 3-2) to obtain a two-dimensional array S T2;
3-4) cycling through steps 3-1) to 3-3) to obtain the situation information data S T of the enemy unmanned aerial vehicle over a future time period, the situation data consisting entirely of single-moment enemy situation data predicted by the Bayesian neural network;
3-5) transmitting the situation information data of the enemy over the future time period back to our unmanned aerial vehicle, which uses this situation information to seize a favorable position in air combat.
Compared with the prior art, the invention has the beneficial effects that:
(1) The invention introduces uncertainty modeling: the Bayesian neural network can quantify the uncertainty of its predicted results, which a traditional neural network cannot. In unmanned aerial vehicle situation prediction, factors such as environmental change and sensor noise introduce uncertainty; the Bayesian neural network can provide more accurate predictions and better quantify the reliability of the prediction.
(2) The method performs well on limited data. In unmanned aerial vehicle situation prediction, insufficient data is a common challenge. Whereas conventional neural networks are prone to overfitting in this case, Bayesian neural networks reduce the risk of overfitting by introducing prior probabilities, making them more robust on small data sets.
(3) The Bayesian neural network offers high interpretability, which is particularly suitable for explaining predictions of the enemy unmanned aerial vehicle's situation. Unlike traditional neural networks, Bayesian neural networks more easily expose their prediction logic, helping decision makers understand the basis of the model's predictions.
(4) The Bayesian neural network in the invention shows stronger generalization capability: it handles uncertainty more effectively, reduces the risk of overfitting, adapts better to unseen data, copes with instability in the environment and the data, and performs well in complex tasks.
(5) The Bayesian neural network can still provide effective predictions when only limited data on the enemy unmanned aerial vehicle have been collected, and can provide relatively reliable situation information of the enemy unmanned aerial vehicle over a future time period.
Drawings
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate the invention and together with the embodiments of the invention, serve to explain the invention.
Fig. 1 is a flowchart of the situation prediction method of a fixed-wing unmanned aerial vehicle in an uncertain environment based on a Bayesian neural network.
Fig. 2 is a flow chart of post-processing of collected situation data of an enemy fixed wing unmanned aerial vehicle.
Fig. 3 is a network training flow chart of a situation prediction method of a fixed wing unmanned aerial vehicle in an uncertain environment based on a bayesian neural network.
Fig. 4 is a histogram of MSE comparison between a predicted trajectory of each step and an actual trajectory in five-step prediction of a method for predicting a situation of a fixed-wing unmanned aerial vehicle in an uncertain environment based on a bayesian neural network; among them, fig. 4 (a) is a first step, fig. 4 (b) is a second step, fig. 4 (c) is a third step, fig. 4 (d) is a fourth step, and fig. 4 (e) is a fifth step.
Fig. 5 is a flowchart of a situation prediction method of a fixed wing unmanned aerial vehicle under an uncertain environment based on a bayesian neural network, when predicting again by using data obtained by network prediction.
Fig. 6 is a graph comparing a predicted track and an actual track obtained by the situation prediction method of the fixed wing unmanned aerial vehicle under the uncertain environment based on the Bayesian neural network.
Fig. 7 is a graph comparing a predicted trajectory and an actual trajectory obtained after replacing a bayesian neural network with a long-short-term memory neural network by the situation prediction method of a fixed-wing unmanned aerial vehicle under an uncertain environment based on the bayesian neural network.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. Of course, the specific embodiments described herein are for purposes of illustration only and are not intended to limit the invention.
Example 1
The embodiment provides a situation prediction method of a fixed wing unmanned aerial vehicle based on a Bayesian neural network, which comprises the following steps:
Step 1), establishing a Bayesian network suitable for time sequence prediction, and storing trained network parameters and structures;
Step 2) acquiring the latest situation information of the enemy unmanned aerial vehicle through the sensor system of our unmanned aerial vehicle, and, after organizing it, transmitting the enemy situation information to the Bayesian neural network;
and 3) predicting the situation of the enemy unmanned aerial vehicle at the next moment by using the established Bayesian neural network, and finally transmitting the predicted situation information of the enemy unmanned aerial vehicle back to our unmanned aerial vehicle.
Step 1), establishing a Bayesian network suitable for time sequence prediction, and storing trained network parameters and structures, wherein the specific steps are as follows:
1-1), collecting a sufficient amount of mutually independent, non-repeated situation information data of the enemy unmanned aerial vehicle for the subsequent processing flow; the data have the shape (tracks_num, time_long, features_num), where tracks_num represents the number of collected pieces of enemy situation information, time_long represents the time step of each collected piece of enemy situation information data, and features_num represents the number of enemy situation information features input to the network.
1-2) Randomly selecting training data and test data for training the Bayesian neural network from the data set collected in step 1-1), using a double-loop function and a slicing operation;
1-2-1), selecting indices idex over the first dimension tracks_num and the second dimension time_long of the data collected in step 1-1) using a double-loop function, the total number selected being batch_size ∈ (1000, tracks_num × time_long);
1-2-2), taking each index idex selected in step 1-2-1) as the starting end of a slice and selecting 6 consecutive time steps backwards as one piece of data; batch_size pieces of data are selected in total, each of length 6, and each piece of data is denoted α;
1-2-3), the data selected in steps 1-2-1) and 1-2-2) form a three-dimensional array of shape (batch_size, time_step, features_num), where batch_size = 3000 is the total batch for training and testing, time_step = 6 is the time step of a single segment of unmanned aerial vehicle situation information, and features_num represents the number of situation information features of the enemy unmanned aerial vehicle;
1-2-4), after shuffling the collected data set, selecting 80% of batch_size as the training set train_data, of shape (2400, 6, features_num), and the remaining 20% as the test set test_data, of shape (800, 6, features_num);
1-2-5), for each data segment α in the training set train_data and the test set test_data, the first 5 seconds are sliced out as the data values and the 6th second is taken as the label; 4 data sets are obtained at this point, respectively:
Input data input_train for training, shape (2400, 5, features_num);
the label data set output_train corresponding to the training data set is in the shape of (2400, 1, features_num);
Input data input_test for test, shape (800, 5, features_num);
The label data set output_test corresponding to the test data set has the shape (800, 1, features_num); the collection and division of the data set is shown in fig. 2.
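The following is a minimal Python sketch of the double-loop index selection, slicing and data-set division described in steps 1-2-1) to 1-2-5). The array names (input_train, output_train, input_test, output_test) follow the text; the placeholder raw data, the NumPy implementation and the way the random selection is realised are assumptions.

```python
import numpy as np

# raw_data: (tracks_num, time_long, features_num) database from step 1-1)
tracks_num, time_long, features_num = 100, 60, 6
raw_data = np.random.randn(tracks_num, time_long, features_num)  # placeholder data

batch_size, time_step = 3000, 6
# double loop over the two leading dimensions enumerates candidate slice starts,
# from which batch_size starts are picked at random
candidates = [(i, j) for i in range(tracks_num)
                     for j in range(time_long - time_step + 1)]
chosen = np.random.permutation(len(candidates))[:batch_size]
segments = [raw_data[i, j:j + time_step] for i, j in (candidates[k] for k in chosen)]
data_set = np.stack(segments)                        # (3000, 6, features_num)

np.random.shuffle(data_set)                          # shuffle along the first dimension
split = int(0.8 * batch_size)
train_data, test_data = data_set[:split], data_set[split:]

# first 5 seconds are the input values, the 6th second is the label
input_train, output_train = train_data[:, :5], train_data[:, 5:6]  # (2400,5,6), (2400,1,6)
input_test, output_test = test_data[:, :5], test_data[:, 5:6]      # (800,5,6), (800,1,6)
```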
1-3), the third dimension of the data set obtained in step 1-2-3) has features_num = 6 features, recorded as s = (Δx, Δy, Δz, Δv, Δψ, Δθ): the position increments of the unmanned aerial vehicle per unit time in the x, y and z directions, the speed increment per unit time, and the yaw-angle and pitch-angle increments per unit time. In the kinematic relations that produce these increments, n_x is the tangential overload of the unmanned aerial vehicle, n_z is the normal overload, φ is the roll angle about the velocity vector, and g is the gravitational acceleration. The output feature size output_size is 6×1, indicating that the predicted data are the 6 kinematic features of the enemy unmanned aerial vehicle at the 6th second;
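For reference, the standard three-degree-of-freedom point-mass model that uses exactly these variables (an assumption; the patent's own formulas may differ in detail) is

$$\dot{x} = v\cos\theta\cos\psi,\qquad \dot{y} = v\cos\theta\sin\psi,\qquad \dot{z} = v\sin\theta$$
$$\dot{v} = g\,(n_x - \sin\theta),\qquad \dot{\theta} = \frac{g}{v}\,(n_z\cos\phi - \cos\theta),\qquad \dot{\psi} = \frac{g\,n_z\sin\phi}{v\cos\theta}$$

with the per-second increments Δx, Δy, Δz, Δv, Δθ, Δψ obtained by integrating these rates over one time step.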
1-4), establishing a neural network layer linear_bbb with Bayesian characteristics, in which the weight w and bias b of each node are defined as random variables obtained by sampling from a normal distribution with mean 0 and variance 1; the mean w_mu and variance w_rho of the weights in the layer are matrices of shape (input_size, output_size), and the mean b_mu and variance b_rho of the bias are matrices of shape (output_size);
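A minimal PyTorch sketch of such a layer is given below. The parameter names (w_mu, w_rho, b_mu, b_rho) and the N(0, 1) prior follow the description above; the softplus transform from rho to a standard deviation, the initial values and the class structure are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LinearBBB(nn.Module):
    """Bayesian linear layer in the spirit of linear_bbb: weights and biases are
    sampled anew on every forward pass from their variational distributions."""
    def __init__(self, input_size, output_size, prior_std=1.0):
        super().__init__()
        # variational means and (pre-softplus) spreads of weights and biases
        self.w_mu = nn.Parameter(torch.zeros(input_size, output_size))
        self.w_rho = nn.Parameter(torch.full((input_size, output_size), -3.0))
        self.b_mu = nn.Parameter(torch.zeros(output_size))
        self.b_rho = nn.Parameter(torch.full((output_size,), -3.0))
        self.prior = torch.distributions.Normal(0.0, prior_std)  # N(0, 1) prior

    def forward(self, x):
        # reparameterised sampling of weights and biases
        w_sigma = F.softplus(self.w_rho)
        b_sigma = F.softplus(self.b_rho)
        w = self.w_mu + w_sigma * torch.randn_like(self.w_mu)
        b = self.b_mu + b_sigma * torch.randn_like(self.b_mu)
        # log prior and log variational posterior of the sampled parameters,
        # accumulated during forward propagation as step 1-5) prescribes
        self.log_prior = self.prior.log_prob(w).sum() + self.prior.log_prob(b).sum()
        self.log_post = (torch.distributions.Normal(self.w_mu, w_sigma).log_prob(w).sum()
                         + torch.distributions.Normal(self.b_mu, b_sigma).log_prob(b).sum())
        return x @ w + b
```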
1-5), constructing a network mlp_bbb that instantiates the linear_bbb layer defined above; the Bayesian neural network uses the evidence lower bound ELBO (Evidence Lower Bound) from variational inference as its loss function, and it is specified that the log prior distribution log_prior, the log posterior distribution log_post and the log likelihood log_like of the linear_bbb layers are calculated in the forward propagation method; the network loss function sample_elbo is defined as follows:
loss=log_post-log_prior-log_like (2)
in the network's loss function, log_post − log_prior is the complexity cost and log_like is the error cost;
1-6), training the constructed Bayesian neural network with the collected enemy situation information data: the number of training epochs epoch_num is set before training, the network is trained according to the training flow chart, and the network parameters and structure are saved once training is complete; the change curve of the loss function is shown in figure 4, and the change curves of the complexity cost and the error cost of the loss function during training and testing of the network are shown in figure 5.
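Continuing the layer sketch above, the following shows one way the mlp_bbb network, the sample_elbo loss of formula (2) and the training loop could look. The hidden width, the Gaussian likelihood noise, the number of weight samples per loss evaluation, the optimizer, epoch_num = 500 and the flattening of the 5-second window are all assumptions; the placeholder tensors stand in for the input_train/output_train sets of step 1-2).

```python
class MLP_BBB(nn.Module):
    """Sketch of mlp_bbb: a small network built from LinearBBB layers."""
    def __init__(self, input_size, output_size, hidden=64, noise_std=0.1):
        super().__init__()
        self.l1 = LinearBBB(input_size, hidden)
        self.l2 = LinearBBB(hidden, output_size)
        self.noise_std = noise_std  # assumed observation noise of the Gaussian likelihood

    def forward(self, x):
        return self.l2(torch.relu(self.l1(x)))

    def sample_elbo(self, x, y, n_samples=3):
        # loss = log_post - log_prior - log_like  (formula (2)), averaged over samples
        losses = []
        for _ in range(n_samples):
            pred = self(x)  # forward pass also refreshes each layer's log_prior/log_post
            log_prior = self.l1.log_prior + self.l2.log_prior
            log_post = self.l1.log_post + self.l2.log_post
            log_like = torch.distributions.Normal(pred, self.noise_std).log_prob(y).sum()
            losses.append(log_post - log_prior - log_like)
        return torch.stack(losses).mean()

# placeholder data standing in for the preprocessed sets of step 1-2)
input_train = torch.randn(2400, 5, 6)
output_train = torch.randn(2400, 1, 6)
epoch_num = 500

model = MLP_BBB(input_size=5 * 6, output_size=6)   # 5 s window flattened to 30 inputs
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
x = input_train.reshape(len(input_train), -1)      # (2400, 5, 6) -> (2400, 30)
y = output_train.reshape(len(output_train), -1)    # (2400, 1, 6) -> (2400, 6)
for epoch in range(epoch_num):
    optimizer.zero_grad()
    loss = model.sample_elbo(x, y)
    loss.backward()
    optimizer.step()
torch.save(model.state_dict(), "bbb_situation_net.pt")  # save parameters (step 1-6)
```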
Step 2), situation information data S b of the enemy unmanned aerial vehicle are sensed in real time through the sensor system of our unmanned aerial vehicle; these data are the latest 5 seconds of enemy situation information, i.e. S b has the shape (5, 6), where the first dimension 5 represents the last five seconds of the enemy unmanned aerial vehicle and the second dimension 6 represents the 6 kinematic features of the enemy unmanned aerial vehicle per second; the sensed data are transmitted to the Bayesian neural network trained in step 1-6).
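Using the model sketched above, single-step prediction with an uncertainty estimate could look as follows; drawing repeated stochastic forward passes to obtain a predictive mean and spread is an assumed (but common) way of realising the uncertainty output the patent describes.

```python
S_b = torch.randn(5, 6)                       # placeholder for the latest 5 s of enemy features
x_live = S_b.reshape(1, -1)                   # -> (1, 30), matching the training layout
samples = torch.stack([model(x_live) for _ in range(100)])  # 100 weight samples
next_state = samples.mean(dim=0)              # predicted 6 features at the next second
uncertainty = samples.std(dim=0)              # per-feature predictive standard deviation
```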
Step 3),
3-1) Arranging the single-moment situation information of the enemy unmanned aerial vehicle into a two-dimensional array S h matching the trailing two-dimensional shape of the three-dimensional array S b;
3-2) splicing the two-dimensional array S T0 onto the end of the three-dimensional array S b along the first dimension to form a three-dimensional array S X, and cutting off the starting two-dimensional array of S X along the first dimension to form a new three-dimensional array S T1;
3-3) inputting the obtained three-dimensional array S T1 into the Bayesian neural network in a loop to obtain the situation information of the enemy unmanned aerial vehicle at the next moment, and repeating the operation of step 3-2) to obtain a three-dimensional array S T2;
3-4) cycling through steps 3-1) to 3-3) to obtain the situation information data S T of the enemy unmanned aerial vehicle over a future time period, the situation data consisting entirely of single-moment enemy situation data predicted by the Bayesian neural network; the program loop flow chart is shown in figure 5;
3-5) transmitting the situation information data of the enemy over the future time period back to our unmanned aerial vehicle, which uses this situation information to seize a favorable position in air combat.
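A compact sketch of this feedback loop, reusing the model and S_b from the sketches above (window length, horizon and the number of weight samples are assumptions):

```python
def predict_future(model, S_b, horizon=5, n_samples=100):
    """Recursive multi-step prediction: splice each prediction onto the window
    (S_X), drop the oldest step (S_T1), and collect the future sequence S_T."""
    window = S_b.clone()                          # (5, 6) latest observed window
    future = []
    for _ in range(horizon):
        x_step = window.reshape(1, -1)
        pred = torch.stack([model(x_step) for _ in range(n_samples)]).mean(dim=0)  # (1, 6)
        future.append(pred.squeeze(0))
        window = torch.cat([window[1:], pred], dim=0)   # slide the window forward
    return torch.stack(future)                     # S_T: (horizon, 6) predicted situation

S_T = predict_future(model, S_b)                   # transmitted back to our own UAV
```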
Example 2
Referring to fig. 7, this embodiment provides, for comparison, a fixed-wing situation prediction method based on a long short-term memory neural network, with the following specific steps:
Step 1), a long-short-time memory neural network suitable for time sequence prediction is established, and network parameters and structures are saved.
Step 2) acquiring the latest situation information of the enemy unmanned aerial vehicle through the sensor system of our unmanned aerial vehicle, and, after organizing it, transmitting the enemy situation information to the long short-term memory neural network;
and 3) predicting the situation of the enemy unmanned aerial vehicle at the next moment by using the established long short-term memory neural network, and finally transmitting the predicted situation information of the enemy unmanned aerial vehicle back to our unmanned aerial vehicle.
Step 1), establishing a long short-term memory neural network suitable for time-series prediction, and saving the trained network parameters and structure; the specific steps are as follows:
steps 1-1) to 1-3) are the same as those of example 1 described above.
1-4) Establishing a 3-layer long short-term memory neural network, using the mean square error MSE as the loss function, specifying that the loss function is computed in each training step, and saving the parameters and structure of the long short-term memory neural network after training is complete.
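A minimal sketch of such a baseline (the hidden width and the linear output head are assumptions):

```python
import torch.nn as nn

class LSTMBaseline(nn.Module):
    """3-layer LSTM baseline of Example 2, trained with MSE on the same data."""
    def __init__(self, features=6, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(features, hidden, num_layers=3, batch_first=True)
        self.head = nn.Linear(hidden, features)

    def forward(self, x):                          # x: (batch, 5, 6)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])               # 6 predicted features of the next second

lstm_model = LSTMBaseline()
criterion = nn.MSELoss()                            # loss function used in this embodiment
```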
Step 2) and step 3) are similar to those of embodiment 1; a comparison of the trajectory predicted with the long short-term memory neural network and the actual trajectory of the unmanned aerial vehicle is shown in fig. 7.
Comparing fig. 6 with fig. 7, the Bayesian neural network used in fig. 6 obtains a more accurate prediction result that is closer to the actual track of the enemy unmanned aerial vehicle. This also shows that, with less training data, the proposed Bayesian neural network has stronger generalization ability.
The foregoing description of the preferred embodiments of the invention is not intended to limit the invention to the precise form disclosed, and any such modifications, equivalents, and alternatives falling within the spirit and scope of the invention are intended to be included within the scope of the invention.
Claims (1)
1. A situation prediction method of a fixed wing unmanned aerial vehicle based on a Bayesian neural network is characterized by comprising the following steps:
1) Establishing a database according to the situation information of the enemy fixed wing unmanned aerial vehicle, and constructing a Bayesian neural network for time sequence prediction;
2) The situation information of the enemy unmanned aerial vehicle is taken as input, and the situation information of the enemy unmanned aerial vehicle at the next moment is predicted through a network;
3) Predicting again by taking the single-moment predicted value obtained by network output as input to form situation information of future time periods of the enemy unmanned aerial vehicle, and transmitting back to the unmanned aerial vehicle;
Establishing a database according to situation information of the enemy fixed wing unmanned aerial vehicle, wherein the method comprises the following steps of:
1) Collecting sufficient situation information data of the enemy unmanned aerial vehicle, arranging the collected data information into a three-dimensional array to form a database, wherein the first dimension is the quantity of the collected situation information data of the enemy unmanned aerial vehicle, the second dimension is the time step of each piece of the collected situation information data, and the third dimension is the characteristic quantity of the situation information of the input neural network;
2) Randomly selecting the collected data by using double-circulation and slicing operation and dividing a training set and a testing set;
The method for randomly selecting and dividing the collected data by using the double-circulation and slicing operation comprises the following steps:
1) Selecting an index for the first dimension and the second dimension of the collected data using a double loop;
2) Taking the selected index as the start end of slicing, slicing backwards, and selecting enough data segments with the same data length by cyclic operation;
3) The data segments collected through slicing are arranged to form a three-dimensional array as a data set, wherein the first dimension represents the total number of the collected data segments, the second dimension represents the time step of each segment of data segment, and the third dimension represents the situation information feature quantity of the enemy unmanned aerial vehicle;
4) After the collected data sets are disordered, 75 to 90 percent of the first dimension of the data sets are selected as training sets, and the rest of the data sets are selected as test sets;
5) For each data segment of the training set and the test set in the data set, slicing out the first half of the second dimension as the data values and the remainder as the label, thereby obtaining four sub-data sets, namely an input data set for training, the label data set corresponding to the training set, an input data set for testing, and the label data set corresponding to the test set, wherein the third dimension of the four sub-data sets contains 6 elements, the per-second situation feature information of the unmanned aerial vehicle, recorded as s = (Δx, Δy, Δz, Δv, Δψ, Δθ), wherein Δx, Δy, Δz are respectively the position increments of the unmanned aerial vehicle per unit time in the x, y and z directions, Δv is the speed increment of the unmanned aerial vehicle per unit time, and Δψ, Δθ are respectively the yaw angle increment and the pitch angle increment of the unmanned aerial vehicle per unit time;
the construction of the Bayesian neural network for the time sequence comprises the following steps:
1) Establishing a neural network layer with Bayesian characteristics, in which the weight and bias of each node are defined as random variables obtained by sampling from a normal distribution; the mean and variance of the weights in the layer are two-dimensional matrices whose first dimension is the number of input features and whose second dimension is the number of output features, and the mean and variance of the bias are one-dimensional arrays whose length is the number of output features;
2) Constructing a Bayesian neural network according to the defined neural network layer with Bayesian characteristics, wherein the Bayesian neural network uses the evidence lower bound in variation inference as a loss function, and provides that the log prior distribution, the log posterior distribution and the log likelihood in the neural network layer with Bayesian characteristics are calculated in a forward propagation method, and the calculation mode of the network loss function is defined as the log posterior distribution minus the log prior distribution minus the log likelihood;
3) Training on the training set by using the network, and storing the trained network model structure and parameters;
The method takes enemy unmanned aerial vehicle situation information as input, predicts the situation information of the enemy at the next moment through a network, and comprises the following steps:
1) The situation information of the enemy unmanned aerial vehicle collected in real time by the unmanned aerial vehicle is arranged into an array conforming to an input network, and is recorded as S b;
2) S b is input into a Bayesian neural network after training, and the network output obtains situation information of the enemy unmanned aerial vehicle at the next moment;
The predicted value of the single moment is used as input to predict again to form situation information of future time periods of the enemy unmanned aerial vehicle, and the method comprises the following steps:
1) The situation information of the enemy unmanned aerial vehicle at a single moment is arranged into a one-dimensional array S h whose length equals the second (feature) dimension of the two-dimensional array S b;
2) The one-dimensional array S T0 is spliced onto the end of the two-dimensional array S b along the first dimension to form a two-dimensional array S X, and the starting one-dimensional array of S X is cut off along the first dimension to form a new two-dimensional array S T1;
3) The obtained two-dimensional array S T1 is input into the Bayesian neural network in a loop to obtain the situation information of the enemy unmanned aerial vehicle at the next moment, and the operation of step 2) is repeated to obtain a two-dimensional array S T2;
4) Steps 1) to 3) are cycled through to obtain the situation information data S T of the enemy unmanned aerial vehicle over a future time period, wherein S T is composed entirely of single-moment enemy situation data predicted by the Bayesian neural network;
5) The situation information data of the enemy over the future time period are transmitted to the unmanned aerial vehicle, which uses this situation information to seize a favorable position in air combat.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311428417.3A CN117540626B (en) | 2023-10-30 | 2023-10-30 | Fixed wing unmanned aerial vehicle situation prediction method based on Bayesian neural network |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311428417.3A CN117540626B (en) | 2023-10-30 | 2023-10-30 | Fixed wing unmanned aerial vehicle situation prediction method based on Bayesian neural network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117540626A CN117540626A (en) | 2024-02-09 |
CN117540626B true CN117540626B (en) | 2024-05-14 |
Family
ID=89794957
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311428417.3A Active CN117540626B (en) | 2023-10-30 | 2023-10-30 | Fixed wing unmanned aerial vehicle situation prediction method based on Bayesian neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117540626B (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020046213A1 (en) * | 2018-08-31 | 2020-03-05 | Agency For Science, Technology And Research | A method and apparatus for training a neural network to identify cracks |
CN111240350A (en) * | 2020-02-13 | 2020-06-05 | 西安爱生无人机技术有限公司 | Unmanned aerial vehicle pilot dynamic behavior evaluation system |
CN114510078A (en) * | 2022-02-16 | 2022-05-17 | 南通大学 | Unmanned aerial vehicle maneuver evasion decision-making method based on deep reinforcement learning |
CN115993835A (en) * | 2022-12-27 | 2023-04-21 | 西北工业大学 | Target maneuver intention prediction-based short-distance air combat maneuver decision method and system |
CN116069056A (en) * | 2022-12-15 | 2023-05-05 | 南通大学 | Unmanned plane battlefield target tracking control method based on deep reinforcement learning |
CN116187169A (en) * | 2022-12-30 | 2023-05-30 | 中国人民解放军国防科技大学 | Unmanned aerial vehicle cluster intention inference algorithm and system based on dynamic Bayesian network |
CN116700079A (en) * | 2023-06-04 | 2023-09-05 | 西北工业大学 | Unmanned aerial vehicle countermeasure occupation maneuver control method based on AC-NFSP |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3722894B1 (en) * | 2019-04-09 | 2022-08-10 | Robert Bosch GmbH | Control and monitoring of physical system based on trained bayesian neural network |
DE102019209457A1 (en) * | 2019-06-28 | 2020-12-31 | Robert Bosch Gmbh | Method for training an artificial neural network, artificial neural network, use of an artificial neural network and a corresponding computer program, machine-readable storage medium and corresponding device |
CN112529144B (en) * | 2019-09-17 | 2023-10-13 | 中国科学院分子细胞科学卓越创新中心 | Predictive learning method and system for short-term time sequence prediction |
Non-Patent Citations (2)
Title |
---|
Active Disturbance Rejection Generalized Predictive Control of a Quadrotor UAV via Quantitative Feedback Theory; YUN CHENG et al.; Digital Object Identifier; 2022-04-13; 37912-37923 *
Fault Tolerant Control of a Quadrotor Unmanned Aerial Vehicle Based on Active Disturbance Rejection Control and Two-Stage Kalman Filter; YUSHENG DU et al.; Digital Object Identifier; 2023-07-10; 67556-67566 *
Also Published As
Publication number | Publication date |
---|---|
CN117540626A (en) | 2024-02-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Chen et al. | Scept: Scene-consistent, policy-based trajectory predictions for planning | |
EP3940604A1 (en) | Federated teacher-student machine learning | |
CN112596515B (en) | Multi-logistics robot movement control method and device | |
Zhang et al. | Velc: A new variational autoencoder based model for time series anomaly detection | |
CN112988723A (en) | Traffic data restoration method based on space self-attention-diagram convolution cyclic neural network | |
CN108596327B (en) | Seismic velocity spectrum artificial intelligence picking method based on deep learning | |
US20150254554A1 (en) | Information processing device and learning method | |
CN112115998B (en) | Method for overcoming catastrophic forgetting based on anti-incremental clustering dynamic routing network | |
US20220164660A1 (en) | Method for determining a sensor configuration | |
CN114548591A (en) | Time sequence data prediction method and system based on hybrid deep learning model and Stacking | |
CN111047078B (en) | Traffic characteristic prediction method, system and storage medium | |
CN104900063A (en) | Short distance driving time prediction method | |
CN114386466B (en) | Parallel hybrid clustering method for candidate signal mining in pulsar search | |
CN114004383A (en) | Training method of time series prediction model, time series prediction method and device | |
CN116777068A (en) | Causal transducer-based networked data prediction method | |
CN115454988A (en) | Satellite power supply system missing data completion method based on random forest network | |
CN118193978A (en) | Automobile roadblock avoiding method based on DQN deep reinforcement learning algorithm | |
CN117875037B (en) | BOPP film production line digital simulation modeling method and system | |
CN117540626B (en) | Fixed wing unmanned aerial vehicle situation prediction method based on Bayesian neural network | |
CN112527547B (en) | Mechanical intelligent fault prediction method based on automatic convolution neural network | |
Denham et al. | HDSM: A distributed data mining approach to classifying vertically distributed data streams | |
CN112651499A (en) | Structural model pruning method based on ant colony optimization algorithm and interlayer information | |
CN117319232A (en) | Multi-agent cluster consistency cooperative control method based on behavior prediction | |
CN117671938A (en) | Expressway on-road vehicle position prediction method | |
CN116628570A (en) | Fan blade icing failure detection method, device, storage medium and equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |