CN114266355A - Tactical intention identification method based on BiLSTM-Attention - Google Patents

Tactical intention identification method based on BiLSTM-Attention

Info

Publication number
CN114266355A
CN114266355A · Application CN202111496364.XA
Authority
CN
China
Prior art keywords
intention
layer
attention
bilstm
tactical
Prior art date
Legal status (assumption, not a legal conclusion)
Pending
Application number
CN202111496364.XA
Other languages
Chinese (zh)
Inventor
宋亚飞 (Song Yafei)
滕飞 (Teng Fei)
王坚 (Wang Jian)
王刚 (Wang Gang)
雷蕾 (Lei Lei)
Current Assignee
Air Force Engineering University of PLA
Original Assignee
Air Force Engineering University of PLA
Priority date
Filing date
Publication date
Application filed by Air Force Engineering University of PLA filed Critical Air Force Engineering University of PLA
Publication of CN114266355A

Landscapes

  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a tactical intention identification method based on BiLSTM-Attention, which comprises the following steps: step S1: describing aerial-target intention identification as a mapping from air-combat intention identification features to air-combat intention types; step S2: establishing a BiLSTM-Attention-based air-combat intention identification model. The beneficial effects of the invention are: a bidirectional recurrence mechanism and an attention mechanism are introduced on the basis of an LSTM network, simulating the reasoning process of decision-makers about the air-combat situation. The encoded air-combat feature vectors are input into a BiLSTM layer which, compared with an LSTM layer, makes a comprehensive judgment by fully using information from both historical and future moments; the output vectors of the BiLSTM layer are fed into the Attention mechanism layer, which further highlights the key information influencing the intention and improves the accuracy of intention identification.

Description

Tactical intention identification method based on BiLSTM-Attention
Technical Field
The invention belongs to the field of tactical intention identification methods, and particularly relates to a tactical intention identification method based on BiLSTM-Attention.
Background
In modern informationized air combat, the rapid development of aviation science and military technology has made the threat posed by aerial targets increasingly serious. Meanwhile, with the continued application of high technology, battlefield characteristics such as environmental complexity and information asymmetry keep emerging, and it is difficult to accurately identify enemy target intentions in real time from the complex air-combat environment by expert experience alone. Therefore, an intelligent reasoning method is urgently needed to overcome the shortcomings of the traditional manual approach and help our side seize control of the air and even victory in the war.
In recent years, a great deal of intention-recognition research has been carried out in the military field to meet the needs of combat decision systems. Existing methods for identifying enemy target intention in complex battlefield environments mainly include evidence theory, template matching, expert systems, Bayesian networks and neural networks.
For example, one approach measures the characteristic information of an aerial target with shipborne sensors, establishes a belief rule base, and identifies the target intention by fusing multi-source information with evidential reasoning.
Another designs an intention-recognition reasoning model on templates built from a situation database, yielding a template-matching method for intention recognition based on D-S evidence theory.
Expert-system approaches construct a knowledge base from domain expert knowledge, express the correspondence between battlefield situation and combat intention in the form of rules, and obtain the inference result with an inference engine.
Bayesian-network approaches determine the network parameters from military expert knowledge, representing features by nodes, transfer relations by directed arcs, and relation strength by conditional probabilities; the influence of new events is propagated backward to update the network parameters until some intention exceeds a threshold, which is then taken as the identified intention.
Neural-network approaches collect information from the actual battlefield, select suitable features, preprocess the data into a data set, and input it into a neural network; using the network's adaptive and self-learning capabilities, a combat-intention identification rule is obtained and then used to infer the enemy target's combat intention.
As mentioned above, the existing methods analyze and compute on feature information at a single moment, and it is difficult for them to effectively discover the deep information hidden in time-varying target state features. In fact, a target's intention is realized through a series of tactical actions on the battlefield, so the target and the battlefield environment exhibit dynamic, time-varying characteristics, and the enemy target's combat actions may be deceptive and concealed to a certain extent; inferring the enemy target's intention from feature information at a single moment is therefore not sufficiently scientific.
A tactical-intention intelligent identification model based on the long short-term memory (LSTM) network has been proposed for such time-series characteristics. It identifies the target combat intention well and conforms to the temporal characteristics and before-and-after logical relations in battlefield situation information, but it can only judge the current information with historical information, cannot use information from future moments, and leaves considerable room for improving accuracy.
Disclosure of Invention
The invention aims to provide a tactical intention identification method based on BiLSTM-Attention that accurately identifies the tactical intention of aerial targets.
The technical scheme of the invention is as follows: a tactical intention identification method based on BiLSTM-Attention comprises the following steps:
step S1: describing aerial-target intention identification as a mapping from air-combat intention identification features to air-combat intention types;
step S2: establishing a BiLSTM-Attention-based air-combat intention identification model.
In step S1, the mapping from the time-series feature set V_T to the tactical intention space I is determined, as in equation (1):

I = f(V_T)    (1)

where I = (i_1, i_2, …, i_n) is the aerial-target tactical intention space, i.e. the seven intention types {penetration, feint, attack, reconnaissance, retreat, surveillance, electronic interference}; V_t is the real-time battlefield feature information at time t; V_T is the time-series feature set formed by the features at the T consecutive moments t_1 to t_T; and f is the mapping function between the intention type and the real-time battlefield feature information collected at each moment.
The step S1 further includes:
step S11: spatial description of tactical intent of a target
Establishing the tactical intention space of the enemy target, comprising the seven intention types {penetration, feint, attack, reconnaissance, retreat, surveillance, electronic interference};
step S12: airborne target tactical intent recognition feature description
The air-combat capability factor is also an important factor determining the degree of target threat. For the air-combat capability of a warplane, a single-aircraft air-combat capability threat function C is constructed:

C = [ln ε_1 + ln(ε_2 + 1) + ln(Σε_3 + 1)] · ε_4·ε_5·ε_6·ε_7    (2)

where ε_1 is the maneuverability of the warplane, ε_2 the performance of its airborne weapons, ε_3 the detection capability of its airborne equipment, ε_4 its basic flight performance, ε_5 its operational performance, ε_6 its operational survivability, and ε_7 its electronic-information countermeasure performance.
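As a quick sanity check, the threat function of formula (2) can be sketched directly; the function and argument names below are illustrative, not from the patent, and ε_3 is passed as a list so its summation is explicit.

```python
import math

def air_combat_capability(eps1, eps2, eps3_list, eps4, eps5, eps6, eps7):
    """Single-aircraft air-combat capability threat function C of eq. (2).

    eps1: maneuverability; eps2: airborne-weapon performance;
    eps3_list: detection capabilities of the airborne equipment (summed);
    eps4..eps7: basic flight, operational, survivability and
    electronic-countermeasure performance factors.
    """
    return ((math.log(eps1) + math.log(eps2 + 1.0)
             + math.log(sum(eps3_list) + 1.0))
            * eps4 * eps5 * eps6 * eps7)
```

Note that the logarithmic terms damp the weapon and sensor contributions, while ε_4 to ε_7 scale the whole expression multiplicatively.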
The step S2 includes: the BilSTM-Attention model is divided into three parts: the air combat characteristic vector input layer, the hidden layer and the output layer, wherein the hidden layer comprises a BilSTM layer, an Attention layer and a Dense layer.
The step S2 includes:
step S21: calculating an air combat characteristic vector input layer;
step S22: the hidden layer is computed.
The step S21 includes:
step S211: reading the acquired data and cleaning the data;
step S212: carrying out normalization processing on the numerical air combat characteristic data;
carrying out normalization on the 11 kinds of numerical air-combat feature data: enemy aircraft acceleration, enemy aircraft altitude, enemy aircraft speed, enemy air-combat capability factor, heading angle, azimuth angle, own-aircraft acceleration, own-aircraft altitude, own-aircraft speed, own air-combat capability factor, and the distance between the two sides;
for the x-th numerical feature F_x = [f_x1, f_x2, …, f_xi, …, f_xn] (x = 1, 2, …, 11), where n is the total number of data, the i-th original data value f_xi is mapped to the interval [0, 1] as f'_xi by the formula:

f'_xi = (f_xi - MinF_x) / (MaxF_x - MinF_x)    (3)

where MinF_x is the minimum value of the x-th feature F_x and MaxF_x its maximum value;
step S213: encoding non-numerical air combat characteristic data
Encoding the 4 state attributes (air-to-air radar state, air-to-sea radar state, jamming state and jammed state) as 0 and 1; for the two attributes of maneuver type and enemy aircraft type, obtaining the coded data of each non-numerical feature and then carrying out normalization;
step S214: encoding 7 enemy target fighting intention types into category labels;
step S215: randomly initializing the data and dividing the training set and test set at a ratio of 8:2;
after the operations of steps S211 to S215, the collected air combat feature data is changed into a feature vector form that the hidden layer can directly accept and process.
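Steps S212 and S215 can be sketched as follows; the helper names and the fixed shuffle seed are illustrative assumptions, not part of the patent.

```python
import random

def min_max_normalize(values):
    """Map each raw value into [0, 1] as in eq. (3) (step S212)."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def train_test_split(samples, ratio=0.8, seed=0):
    """Shuffle and split the data set 8:2 (step S215)."""
    data = list(samples)
    random.Random(seed).shuffle(data)   # reproducible random initialization
    cut = int(len(data) * ratio)
    return data[:cut], data[cut:]
```

A data set of 10 samples thus yields 8 training samples and 2 test samples.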
The step S22 includes:
step S221: computing the BiLSTM layer
The calculation is performed by the following formula,
Γ_f = σ(W_f·[h_{t-1}, x_t] + b_f)    (4)
Γ_u = σ(W_u·[h_{t-1}, x_t] + b_u)    (5)
C̃_t = tanh(W_c·[h_{t-1}, x_t] + b_c)    (6)
C_t = Γ_f * C_{t-1} + Γ_u * C̃_t    (7)
Γ_o = σ(W_o·[h_{t-1}, x_t] + b_o)    (8)
h_t = Γ_o * tanh(C_t)    (9)
Equation (4) computes the value of the forget gate; its form shows that the forget-gate value at time t is determined jointly by x_t and h_{t-1}. Equation (5) computes the value of the update gate through the sigmoid activation of (W_u·[h_{t-1}, x_t] + b_u). Equation (6) computes the value of the candidate memory cell C̃_t, determined by h_{t-1} and x_t. Equation (7) computes the new cell state C_t from C_{t-1} and C̃_t as regulated by Γ_f and Γ_u. Equations (8) and (9) compute the output h_t of the final LSTM hidden state at time t, determined by h_{t-1} and x_t after the inner-loop update.
Here x_t denotes the input feature at time t; C_{t-1} the cell state before updating; C_t the updated cell state; h_{t-1} and h_t the output features of the previous and current moments respectively; Γ_f, Γ_u and Γ_o the forget, update and output gates respectively; C̃_t the candidate cell state; and σ the sigmoid function. W_f, W_u, W_c, W_o and b_f, b_u, b_c, b_o are the weight-coefficient matrices and bias vectors of the respective parts;
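A minimal scalar sketch of one LSTM time step, following equations (4) to (9); real implementations use weight matrices over the concatenation [h_{t-1}, x_t], which is collapsed here to the scalar sum h_prev + x_t purely for readability.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x_t, h_prev, c_prev, w, b):
    """One LSTM time step, eqs. (4)-(9), with scalar weights for clarity.

    w and b are dicts keyed 'f', 'u', 'c', 'o' for the forget, update,
    candidate and output parts.
    """
    g_f = sigmoid(w["f"] * (h_prev + x_t) + b["f"])       # forget gate, eq. (4)
    g_u = sigmoid(w["u"] * (h_prev + x_t) + b["u"])       # update gate, eq. (5)
    c_cand = math.tanh(w["c"] * (h_prev + x_t) + b["c"])  # candidate cell, eq. (6)
    c_t = g_f * c_prev + g_u * c_cand                     # new cell state, eq. (7)
    g_o = sigmoid(w["o"] * (h_prev + x_t) + b["o"])       # output gate, eq. (8)
    h_t = g_o * math.tanh(c_t)                            # hidden output, eq. (9)
    return h_t, c_t
```

With all weights and biases at zero, every gate opens halfway (σ(0) = 0.5) and the cell state simply halves at each step, which makes the gating arithmetic easy to verify by hand.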
The hidden-layer state of the BiLSTM at the current moment is determined jointly by three parts: the current input x_t, the forward-propagated hidden-state output h→_{t-1} of the previous moment, and the backward-propagated hidden-state output h←_{t+1} of the following moment. The BiLSTM thus consists of two unidirectional LSTMs, and its hidden-layer state at time t is obtained from the forward-propagated hidden state h→_t and the backward-propagated hidden state h←_t. The calculation formulas are given in (10), (11) and (12), where w_i (i = 1, 2, …, 6) denotes the weight from one cell layer to another.
h→_t = f(w_1·x_t + w_2·h→_{t-1})    (10)
h←_t = f(w_3·x_t + w_5·h←_{t+1})    (11)
O_t = g(w_4·h→_t + w_6·h←_t)    (12)
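Equations (10) to (12) can be sketched with scalar weights; the activations f and g are left generic in the text and are taken as tanh here as an assumption.

```python
import math

def bilstm_hidden(x_t, h_fwd_prev, h_bwd_next, w):
    """Combine forward and backward hidden states per eqs. (10)-(12).

    Scalar sketch: w maps 1..6 to the six weights w_1..w_6.
    """
    h_fwd = math.tanh(w[1] * x_t + w[2] * h_fwd_prev)  # eq. (10), forward state
    h_bwd = math.tanh(w[3] * x_t + w[5] * h_bwd_next)  # eq. (11), backward state
    o_t = math.tanh(w[4] * h_fwd + w[6] * h_bwd)       # eq. (12), merged output
    return o_t
```

The forward state sees the past (h→_{t-1}), the backward state sees the future (h←_{t+1}), and the output merges both, which is exactly what lets the model use information from both historical and future moments.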
Step S222: computing the Attention layer
The hidden state s_t of each initial input is multiplied by its hidden-layer weight coefficient α_t, and the products are accumulated and summed to obtain the final output state vector Y. The calculation formulas are as follows:

e_t = tanh(w_t·s_t + b_t)    (13)
α_t = exp(e_t) / Σ_i exp(e_i)    (14)
Y = Σ_t α_t·s_t    (15)
where e_t denotes the energy value determined by the state vector s_t of the t-th feature vector, w_t the weight-coefficient matrix of the t-th feature vector, b_t the bias corresponding to the t-th feature vector, and e_i the same quantity as e_t for index i. Equation (14) realizes the conversion from the input initial states to the new attention states, and equation (15) then gives the final output state vector Y; finally, Y is integrated with the Dense layer and passed as the output value to the final output layer;
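A scalar sketch of equations (13) to (15); a single shared weight and bias stand in for the per-feature w_t and b_t, which is a simplifying assumption.

```python
import math

def attention_pool(states, w_t, b_t):
    """Attention layer of eqs. (13)-(15) over scalar hidden states s_t."""
    energies = [math.tanh(w_t * s + b_t) for s in states]  # eq. (13)
    exp_e = [math.exp(e) for e in energies]
    total = sum(exp_e)
    alphas = [e / total for e in exp_e]                    # eq. (14), softmax weights
    return sum(a * s for a, s in zip(alphas, states))      # eq. (15), weighted sum Y
```

Because the α_t sum to 1, identical states pool to themselves, and a larger hidden state receives a larger weight, which is how key moments are highlighted.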
step S223: compute output layer
The input of the output layer is the output of the Attention mechanism layer in the hidden layer; it is processed with the multi-class Softmax function to obtain the classification of the aerial-target tactical intention. The specific formula is as follows:
y_k = softmax(w_1·Y + b_1)    (16)
where w_1 denotes the weight-coefficient matrix, trained from the Attention mechanism layer to the output layer; b_1 the corresponding bias to be trained; and y_k the predicted label output by the output layer.
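Equation (16) can be sketched as follows; the per-class scalar weights and the ordering of the seven intention names are illustrative assumptions.

```python
import math

INTENTS = ["penetration", "feint", "attack", "reconnaissance",
           "retreat", "surveillance", "electronic interference"]

def softmax(logits):
    m = max(logits)                        # subtract max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def predict_intention(Y, w1, b1):
    """Output layer, eq. (16): y_k = softmax(w_1*Y + b_1) over 7 classes."""
    probs = softmax([w * Y + b for w, b in zip(w1, b1)])
    return INTENTS[probs.index(max(probs))]
```

The class with the largest score w_k·Y + b_k receives the highest softmax probability and is reported as the identified intention.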
The beneficial effects of the invention are: a bidirectional recurrence mechanism and an attention mechanism are introduced on the basis of an LSTM network, simulating the reasoning process of decision-makers about the air-combat situation. The encoded air-combat feature vectors are input into a BiLSTM layer which, compared with an LSTM layer, makes a comprehensive judgment by fully using information from both historical and future moments; the output vectors of the BiLSTM layer are fed into the Attention mechanism layer, which further highlights the key information influencing the intention and improves the accuracy of intention identification.
Drawings
FIG. 1 is an aerial target tactical intent inference process;
FIG. 2 is an aerial target tactical intent identification process;
FIG. 3 illustrates the operational intention encoding and pattern parsing;
FIG. 4 is an aerial target tactical intent characterization;
FIG. 5 is a BiLSTM-Attention model;
FIG. 6 is an LSTM structure;
FIG. 7 is a BiLSTM structure;
FIG. 8 is an Attention mechanism model;
FIG. 9 shows the accuracy change of each model in the ablation experiment;
FIG. 10 shows the variation of the loss values in the ablation experiment.
Detailed Description
To accurately describe the aerial-target tactical intention recognition model, the following assumptions are made: (1) the battlefield environmental conditions of the two sides, such as terrain, atmosphere and climate, are approximately the same; (2) the enemy aerial target's tactical intent does not change within the extracted time series.
The invention is described in further detail below with reference to the figures and the embodiments.
A tactical intention identification method based on BiLSTM-Attention comprises the following steps:
step S1: describing air target intent recognition as a mapping of air war intent recognition features to air war intent types
Aerial-target tactical intention recognition is the process of inferring the enemy target's tactical intention by extracting, from the real-time and adversarial environment, the battlefield environment information, static attributes and real-time dynamic information of the air-combat targets of both sides in the corresponding space-time domain, combined with corresponding military domain knowledge; the reasoning process for aerial-target tactical intention is shown in FIG. 1.
Specifically, since aerial-target tactical intention identification is carried out under complex, highly adversarial battlefield conditions, the identified target will deceive our decision-makers as much as possible and force wrong judgments, so identifying the enemy's combat intention from single-moment features can differ greatly from the actual situation. It is therefore more scientific to estimate the combat intention from the identified target's feature information at several consecutive moments.
The mapping from the time-series feature set V_T to the tactical intention space I is determined as in equation (1):

I = f(V_T)    (1)

where I = (i_1, i_2, …, i_n) is the aerial-target tactical intention space, i.e. the seven intention types {penetration, feint, attack, reconnaissance, retreat, surveillance, electronic interference}; V_t is the real-time battlefield feature information at time t; V_T is the time-series feature set formed by the feature sets at the T consecutive moments t_1 to t_T.
due to high antagonism, uncertainty, complexity and the like of the air combat, the mapping relation from the tactical intention type to the time sequence characteristic set is difficult to be induced and deduced through a mathematical formula. The invention trains the BilSTM-Attention network structure by using the air war data set, thereby establishing the mapping relation from the tactical intention type to the time sequence characteristic set, and the whole air war intention identification process is shown as figure 2.
As shown in FIG. 2, in the process of identifying the tactical intention of an aerial target, the intention types of the historical data are first labelled to obtain a complete training data set; the preprocessed data set is then input into the BiLSTM-Attention network and trained to obtain the mapping relation between aerial-target intention types and the time-series feature set. In an actual air battle, the target state information at N consecutive moments (T_n to T_{n+N}) is collected in real time by sensors, integrated and encoded, and input into the trained target-intention recognition model to obtain the target-intention recognition result. The specific process comprises the following steps:
step S11: spatial description of tactical intent of a target
The target tactical intention space has different intention spaces for different combat forms, different scenes and different enemy entities. Therefore, it is necessary to define an appropriate tactical intention space according to the corresponding operational situation.
The invention takes the combat of unmanned aerial vehicles in a certain airspace as the research object, and establishes the enemy target tactical intention space as comprising the seven intention types {penetration, feint, attack, reconnaissance, retreat, surveillance, electronic interference}.
After the intention space is established, the key to applying the BiLSTM-Attention model to tactical intention identification is how to convert the human cognitive mode into training labels corresponding to the intention types in the tactical intention space. Consider how a decision-maker infers the tactical intention of an enemy target: the decision-maker obtains battlefield situation information and then judges the enemy target's intention by combining it with personal experience. This judgment is difficult to express explicitly, but human cognitive experience is implicit in the process of reasoning about the enemy target's combat intention. Therefore, the decision-maker's cognitive experience can be packaged into labels to train the BiLSTM-Attention model.
The coding of the 7 enemy-target combat-intention types established in the invention, together with the corresponding pattern-parsing mechanism, is shown in FIG. 3. For example, if the intention recognition result output by the BiLSTM-Attention model is 4, the combat intention of the enemy target is taken to be the surveillance intention. Coding the enemy combat intentions thus expresses the decision-maker's cognitive experience simply and clearly and makes the model easier to train.
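The coding table itself appears only in FIG. 3, so the numeric order below is an assumption, chosen so that code 4 maps to the surveillance intention as in the example above.

```python
# Hypothetical coding of the 7 intention types (the patent's own table is
# in FIG. 3; only code 4 = surveillance is anchored by the text's example).
INTENTION_CODES = {
    0: "penetration",
    1: "feint",
    2: "attack",
    3: "reconnaissance",
    4: "surveillance",
    5: "retreat",
    6: "electronic interference",
}

def decode_intention(code):
    """Pattern parsing: map a model output code back to an intention type."""
    return INTENTION_CODES[code]

def encode_intention(name):
    """Labelling: map an intention type to its category code."""
    return {v: k for k, v in INTENTION_CODES.items()}[name]
```

Round-tripping a label through encode and decode recovers the original intention type, which is all the training pipeline needs.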
Step S12: airborne target tactical intent recognition feature description
The tactical intention of an enemy target is highly related to its threat degree and to the combat missions of both sides; for example, when our threat to the enemy is far greater than the enemy's threat to us, the possibility that the enemy target's tactical intention is "attack" drops greatly. Therefore, to identify the different threat degrees and combat missions of the two sides, different air-combat features need to be extracted.
From the perspective of threat degree, many factors influence the magnitude of the target threat; the invention mainly considers the distance, speed, angle and flight acceleration of the two sides. The air-combat capability factor is also an important factor determining the target threat degree; for the air-combat capability of a warplane, the single-aircraft air-combat capability threat function C is constructed:

C = [ln ε_1 + ln(ε_2 + 1) + ln(Σε_3 + 1)] · ε_4·ε_5·ε_6·ε_7    (2)

where ε_1 to ε_7 respectively denote the warplane's maneuverability, airborne-weapon performance, airborne-equipment detection capability, basic flight performance, operational performance, operational survivability, and electronic-information countermeasure performance.
The air-combat capability threat is an inherent property of the warplane, so the air-combat capability factors of the various warplane types of both sides are calculated according to formula (2) and stored in a database, whose data is updated in real time as our knowledge of equipment developments grows.
From the perspective of the combat mission, when an enemy warplane executes a given mission, certain of its feature information must satisfy certain conditions; for example, in an air-combat attack a warplane generally closes on the enemy target at high speed, with a flight speed generally of 735 to 1470 km/h, while the penetration mission divides into low-altitude and high-altitude penetration, with corresponding altitudes of 50 to 200 m and 1000 to 11000 m. The state of the target's radar signals is also linked to the mission: the air-to-air radar is usually kept on during air combat, and both the air-to-air and air-to-sea radars are kept on during a reconnaissance mission. Different warplane types have different application values and tactical meanings: a fighter is more aggressive, while a reconnaissance aircraft has strong reconnaissance capability, so the enemy aircraft type can also serve as a tactical intention identification feature.
Furthermore, the achievement of an aerial target's tactical intent is closely related to the warplane's maneuvers. There are two common designs of maneuver library: a typical tactical action library designed from typical air-combat tactical maneuvers, and a basic control action library designed from the basic control actions of air combat. Because the invention studies time-series features and performs tactical intention identification on 12 frames of collected target feature information, while the control algorithms of a typical tactical action library are complex to solve and the exit and transition instants of actions are hard to determine, the invention adopts a basic control action library. Such a library was proposed by scholars of NASA (the U.S. National Aeronautics and Space Administration) from the most common maneuver modes in air combat and mainly comprises the 7 maneuvers {maximum acceleration, maximum deceleration, maximum-overload climb, maximum-overload dive, maximum-overload right turn, maximum-overload left turn, steady flight}; however, the maneuver modes combined from these seven are insufficient, and such extreme maneuvers obviously do not accord with actual air combat. The invention therefore selects 11 improved basic control actions: {constant-speed forward flight, decelerating forward flight, accelerating forward flight, climb, right climb, left climb, dive, right dive, left dive, right turn, left turn}.
In summary, the feature set for aerial-target tactical intention identification is a 17-dimensional feature vector {air-to-air radar state, air-to-sea radar state, jamming state, jammed state, maneuver type, enemy aircraft type, enemy aircraft acceleration, enemy aircraft altitude, enemy aircraft speed, enemy air-combat capability factor, heading angle, azimuth angle, own-aircraft acceleration, own-aircraft altitude, own-aircraft speed, own air-combat capability factor, distance between the two sides}; the feature description diagram is shown in FIG. 4, and the features can be divided into numerical and non-numerical features.
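The 17-dimensional feature vector can be assembled as follows; the English field names are paraphrases of the feature list above, not identifiers from the patent.

```python
# Sketch of assembling the 17-dimensional feature vector of FIG. 4.
FEATURE_NAMES = [
    "air_radar_state", "sea_radar_state", "jamming_state", "jammed_state",
    "maneuver_type", "enemy_aircraft_type",
    "enemy_acceleration", "enemy_altitude", "enemy_speed",
    "enemy_capability_factor",
    "heading_angle", "azimuth_angle",
    "own_acceleration", "own_altitude", "own_speed",
    "own_capability_factor",
    "distance",
]

def make_feature_vector(sample):
    """Order a per-moment sample dict into the fixed 17-dimensional vector."""
    missing = set(FEATURE_NAMES) - set(sample)
    if missing:
        raise ValueError("missing features: %s" % sorted(missing))
    return [sample[name] for name in FEATURE_NAMES]
```

Fixing the field order once means every moment in the time series is encoded consistently before being stacked into the sequence the network consumes.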
Step S2: establishing a BiLSTM-Attention-based air war intention identification model
The BiLSTM-Attention model is divided into three parts: the air-combat feature-vector input layer, the hidden layer and the output layer. The hidden layer comprises a BiLSTM layer, an Attention layer and a Dense layer. The structure of the BiLSTM-Attention model is shown in FIG. 5.
Step S21: input layer for calculating air combat characteristic vector
The air-combat feature-vector input layer of the invention mainly preprocesses the collected air-combat feature data set, i.e. processes it into a feature-vector form that the BiLSTM layer can directly accept and process. Air-combat feature vectorization comprises the following operation steps:
step S211: reading collected data and performing data cleaning
Step S212: normalizing the numerical air combat characteristic data
Data normalization can eliminate the influence of data dimensions and improve network convergence efficiency. Normalization is applied to the 11 kinds of numerical air-combat feature data: enemy aircraft acceleration, enemy aircraft altitude, enemy aircraft speed, enemy air-combat capability factor, heading angle, azimuth angle, own-aircraft acceleration, own-aircraft altitude, own-aircraft speed, own air-combat capability factor, and the distance between the two sides.
For the x-th numerical feature F_x = [f_x1, f_x2, …, f_xi, …, f_xn] (x = 1, 2, …, 11), where n is the total number of data, the i-th original data value f_xi is mapped to the interval [0, 1] as f'_xi by the formula:

f'_xi = (f_xi - MinF_x) / (MaxF_x - MinF_x)    (3)

where MinF_x is the minimum value of the x-th feature F_x and MaxF_x its maximum value.
Step S213: encoding non-numerical air combat characteristic data
The 4 state attributes (air-to-air radar state, air-to-sea radar state, jamming state and jammed state) are coded as 0 and 1. For example, 0 in the air-to-air radar state indicates that the radar is off, and 1 that it is on. For the two attributes of maneuver type and enemy aircraft type, the coded data of each non-numerical feature is obtained and then normalized.
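Step S213 can be sketched as follows; the integer-then-normalize treatment of the two categorical attributes is one reasonable reading of the text, not a procedure the patent spells out.

```python
def encode_radar_state(is_on):
    """Binary state attributes of step S213: 1 = on, 0 = off."""
    return 1 if is_on else 0

def encode_category(value, categories):
    """Maneuver type / enemy aircraft type: integer-code the category,
    then min-max normalize the code into [0, 1]."""
    idx = categories.index(value)
    return idx / (len(categories) - 1)
```

With three maneuver categories, for instance, the middle category lands at 0.5, keeping all features on the same [0, 1] scale as the normalized numerical data.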
Step S214: the 7 types of enemy target engagement intention are encoded into category labels.
Step S215: data was randomly initialized, training set and test set were divided by 8: 2.
After the operations of steps S211 to S215, the collected air combat feature data is changed into a feature vector form that the hidden layer can directly accept and process.
Step S22: computing hidden layers
Step S221: computing BilsTM layers
The long short-term memory network is a special recurrent neural network (RNN) with a recursive structure similar to the RNN's; but unlike a simple RNN, the LSTM simulates the forgetting and memory mechanisms of the human brain by introducing the idea of gating switches, thereby alleviating the problems of gradient vanishing and gradient explosion during long-sequence training.
The input and output of information are realized by a forgetting gate, an updating gate, an output gate and a memory unit in the LSTM architecture, and the single-neuron architecture of the LSTM is shown in FIG. 6.
In the figure, x_t denotes the input feature at time t; C_{t-1} the cell state before updating; C_t the updated cell state; h_{t-1} and h_t the output features of the previous and current moments respectively; Γ_f, Γ_u and Γ_o the forget, update and output gates respectively; C̃_t the candidate cell state; and σ the sigmoid function. In the operations below, W_f, W_u, W_c, W_o and b_f, b_u, b_c, b_o are the weight-coefficient matrices and bias vectors of the respective parts. Equation (4) computes the value of the forget gate, which controls what information is retained; its form shows that the forget-gate value at time t is determined jointly by x_t and h_{t-1}. Equation (5) computes the value of the update gate through the sigmoid activation of (W_u·[h_{t-1}, x_t] + b_u). Equation (6) computes the value of the candidate memory cell C̃_t, determined by h_{t-1} and x_t. Equation (7) computes the new cell state C_t from C_{t-1} and C̃_t as regulated by Γ_f and Γ_u. Equations (8) and (9) compute the output h_t of the final LSTM hidden state at time t, determined by h_{t-1} and x_t after the inner-loop update.
Γf=σ(Wf[ht-1,xt]+bf) (4)
Γu=σ(Wu[ht-1,xt]+bu) (5)
Figure BDA0003400852640000132
Figure BDA0003400852640000133
Γo=σ(Wo[ht-1,xt+bo]) (8)
ht=Γo*tanh Ct (9)
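The per-step computation of formulas (4) to (9) can be sketched in NumPy; the parameter names, toy dimensions and random initial values below are illustrative only and do not reproduce the patent's Keras implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM step following formulas (4)-(9).

    W and b hold parameters for the forget (f), update (u),
    candidate (c) and output (o) parts; names are illustrative."""
    z = np.concatenate([h_prev, x_t])           # [h_{t-1}, x_t]
    gamma_f = sigmoid(W["f"] @ z + b["f"])      # (4): forgetting gate
    gamma_u = sigmoid(W["u"] @ z + b["u"])      # (5): updating gate
    c_cand = np.tanh(W["c"] @ z + b["c"])       # (6): candidate memory cell
    c_t = gamma_f * c_prev + gamma_u * c_cand   # (7): new cell state
    gamma_o = sigmoid(W["o"] @ z + b["o"])      # (8): output gate
    h_t = gamma_o * np.tanh(c_t)                # (9): hidden output
    return h_t, c_t

# toy dimensions: 17-dimensional feature input (as in the text), 4 hidden units
rng = np.random.default_rng(0)
n_in, n_h = 17, 4
W = {k: rng.standard_normal((n_h, n_h + n_in)) * 0.1 for k in "fuco"}
b = {k: np.zeros(n_h) for k in "fuco"}
h, c = np.zeros(n_h), np.zeros(n_h)
h, c = lstm_step(rng.standard_normal(n_in), h, c, W, b)
```

Because the output gate lies in (0, 1) and tanh in (-1, 1), every component of h_t is strictly bounded by 1 in magnitude.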
The traditional LSTM network has a unidirectional structure: the information it acquires is only the historical information before the current moment, so future information is ignored. The BiLSTM network consists of a forward LSTM network and a backward LSTM network and can therefore capture features from both the preceding and the following context; its model structure is shown in FIG. 7.
As can be seen from FIG. 7, the hidden-layer state O_t of the BiLSTM at time t is obtained from two parts: the forward hidden state →h_t and the backward hidden state ←h_t. The forward hidden state →h_t is determined by the currently input x_t and the forward hidden state →h_{t-1} at time (t-1); the backward hidden state ←h_t is determined by the currently input x_t and the backward hidden state ←h_{t+1} at time (t+1). The calculation formulas are shown as (10), (11) and (12), where w_i (i = 1, 2, …, 6) denotes the weight from one cell layer to another.

→h_t = f(w_1 x_t + w_2 →h_{t-1})    (10)
←h_t = f(w_3 x_t + w_5 ←h_{t+1})    (11)
O_t = g(w_4 →h_t + w_6 ←h_t)    (12)
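The two-pass structure of formulas (10) to (12) can be illustrated with a minimal NumPy sketch in which a simple tanh recurrence stands in for the full LSTM cell, so it shows only how the forward and backward passes are merged; all weights and inputs are toy values.

```python
import numpy as np

def bilstm_outputs(X, w1, w2, w3, w4, w5, w6):
    """Merge a forward and a backward recurrent pass as in
    formulas (10)-(12); tanh stands in for the cell recurrences
    f and g, so this is a structural sketch, not a full BiLSTM."""
    T = len(X)
    fwd = [np.zeros_like(X[0])] * T
    bwd = [np.zeros_like(X[0])] * T
    h = np.zeros_like(X[0])
    for t in range(T):                        # (10): forward pass
        h = np.tanh(w1 * X[t] + w2 * h)
        fwd[t] = h
    h = np.zeros_like(X[0])
    for t in reversed(range(T)):              # (11): backward pass
        h = np.tanh(w3 * X[t] + w5 * h)
        bwd[t] = h
    # (12): combine both directions into the layer output O_t
    return [np.tanh(w4 * f + w6 * b) for f, b in zip(fwd, bwd)]

X = [np.array([0.5, -0.2]), np.array([0.1, 0.3]), np.array([0.7, 0.0])]
O = bilstm_outputs(X, 0.8, 0.3, 0.8, 1.0, 0.3, 1.0)
```

Each O_t thus depends on the whole sequence, which is exactly the "information of historical time and future time" the text attributes to the BiLSTM layer.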
Step S222: computing the Attention layer
The Attention mechanism is analogous to the signal-processing mechanism peculiar to human vision: by computing weights for the feature vectors that the BiLSTM network outputs at different moments, it highlights the features that contribute most to the prediction result, so that the whole neural network model performs better. The Attention mechanism excels on sequential data in tasks such as machine translation and speech recognition, also works well in classification and prediction, and can be used either on its own or as one layer of a hybrid model. In aerial-target tactical intention recognition, the Attention mechanism lets the neural network focus on key features during training. Its core is the weight coefficients: the importance of each feature is learned first, and a corresponding weight is then assigned to each feature according to that importance. For example, when the enemy's intention is attack, the Attention mechanism assigns larger weights to features such as heading angle and maneuver type to reinforce the model's memory. The basic structure of the Attention mechanism model is shown in FIG. 8. The t-th feature vector O_t output by the BiLSTM network is fed into the attention hidden layer to obtain the initial state vector s_t, which is then multiplied by the corresponding weight coefficient α_t; the products are accumulated and summed to obtain the final output state vector Y. The calculation formulas are as follows:

e_t = tanh(w_t s_t + b_t)    (13)
α_t = exp(e_t) / Σ_i exp(e_i)    (14)
Y = Σ_t α_t s_t    (15)

In the formulas: e_t denotes the energy value determined by the state vector s_t of the t-th feature vector; w_t denotes the weight coefficient matrix of the t-th feature vector; b_t denotes the bias corresponding to the t-th feature vector. The conversion from the input initial states to the new attention states is realized by formula (14), the final output state vector Y is then obtained through formula (15), and finally Y is integrated with the Dense layer and fed as the output value to the final output layer.
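Formulas (13) to (15) amount to a softmax-weighted pooling of the BiLSTM states; the following NumPy sketch uses illustrative shapes (12 frames, 8-dimensional states, matching the 12-frame sequences in the text) and random parameters.

```python
import numpy as np

def attention_pool(S, w, b):
    """Attention pooling over T state vectors per formulas (13)-(15).

    S: (T, d) state vectors from the BiLSTM layer;
    w: (T, d) and b: (T,) per-step score parameters (illustrative shapes)."""
    e = np.tanh(np.sum(w * S, axis=1) + b)    # (13): energy value e_t
    alpha = np.exp(e) / np.exp(e).sum()       # (14): softmax weights alpha_t
    Y = (alpha[:, None] * S).sum(axis=0)      # (15): weighted sum Y
    return Y, alpha

rng = np.random.default_rng(1)
S = rng.standard_normal((12, 8))              # 12 frames, 8-dim states
Y, alpha = attention_pool(S, rng.standard_normal((12, 8)) * 0.1, np.zeros(12))
```

The weights α_t are positive and sum to 1, so Y is a convex combination of the per-frame states, with the frames deemed most informative weighted most heavily.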
Step S223: compute output layer
The input of the output layer is the output of the Attention mechanism layer in the hidden layer. A multi-class Softmax function is applied to this input to obtain the classification of the aerial target's tactical intention; the specific formula is as follows:
y_k = softmax(w_1 Y + b_1)    (16)
wherein: w_1 denotes the weight coefficient matrix to be trained from the Attention mechanism layer to the output layer; b_1 denotes the corresponding bias to be trained; y_k is the predicted label output by the output layer.
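Formula (16) is a standard softmax classification layer; a minimal NumPy sketch with toy values for Y, w_1 and b_1 (and the 7 intention classes from the text) is:

```python
import numpy as np

def softmax(z):
    z = z - z.max()          # subtract max for numeric stability
    ez = np.exp(z)
    return ez / ez.sum()

# Formula (16): map the attention output Y to 7 intention probabilities.
# w1, b1 and Y are random toy values, not trained parameters.
rng = np.random.default_rng(2)
Y = rng.standard_normal(8)
w1 = rng.standard_normal((7, 8)) * 0.1
b1 = np.zeros(7)
y_k = softmax(w1 @ Y + b1)
predicted_intention = int(np.argmax(y_k))    # index into the 7 intention types
```

The argmax of y_k selects the predicted tactical intention class.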
Various models are compared with the model provided by the invention, with accuracy, loss value, recall, precision and F1-score as evaluation indexes, to demonstrate the effectiveness of the proposed model in identifying aerial-target tactical intentions. The specifics are as follows:
Selection of the experimental data set: the experiment takes unmanned aerial vehicle combat in a certain airspace as the research background, and the experimental data come from a certain combat simulation system. The simulation system was run many times to obtain multiple air-combat intention patterns, from which 10000 air-combat intention samples were randomly extracted; for each sample, 12 consecutive frames of information were collected (each frame containing 17-dimensional feature information such as heading angle, flight altitude, interference state and radar state). Finally, experts in the air-combat field revised, according to their air-combat experience, the sample data whose intention classification was ambiguous. The data set contains 7 target tactical intentions, in the following proportions: attack 21.6%, defense 20.0%, reconnaissance 19.8%, surveillance 12.9%, feint attack 10.0%, electronic interference 9.25% and withdrawal 6.45%. The sample size is 10000; the training set and test set are divided 8:2, so the training set contains 8000 samples and the test set 2000 samples.
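The shapes and split described above can be sketched as follows; the arrays here are random placeholders standing in for the simulation-system samples.

```python
import numpy as np

# Hypothetical shapes mirroring the text: 10000 samples, each a
# 12-frame sequence of 17 features, with 7 intention class labels.
rng = np.random.default_rng(3)
X = rng.standard_normal((10000, 12, 17))
y = rng.integers(0, 7, size=10000)

# random shuffle, then an 8:2 train/test split as described
idx = rng.permutation(len(X))
cut = int(0.8 * len(X))
X_train, X_test = X[idx[:cut]], X[idx[cut:]]
y_train, y_test = y[idx[:cut]], y[idx[cut:]]
```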
Experimental setup
Accuracy and loss value serve as the evaluation indexes of the experiment. The experiment was implemented in Python in a GPU-accelerated environment using the Keras deep-learning framework, on a computer with a Win10 system, a GTX 960M graphics card and 8 GB of memory. Several hyper-parameters had to be set and tuned; after many experiments they were adjusted according to the accuracy and loss value, yielding the experimental parameter settings shown in Table 1.
TABLE 1 model Experimental parameters
Analysis of Experimental results
BiLSTM-Attention model recognition result analysis
After the BiLSTM-Attention model was trained, the remaining 20% of the samples were used for testing. Experiments show that the accuracy of the proposed network model reaches 97.3%. To further observe the relations among the recognized intentions, a confusion matrix of the data set was produced, in which the diagonal entries are the numbers of correctly recognized samples, as shown in Table 2.
TABLE 2 intention recognition confusion matrix
Table 2 shows the confusion matrix of test-set intention recognition obtained by training the samples with the model of the invention. As the table shows, the model achieves high recognition accuracy for all 7 intention types; in particular, recognition accuracy for the withdrawal intention reaches 100%. A small number of attack intentions are misrecognized as feint attack intentions, and a small number of reconnaissance and surveillance intentions are misrecognized as each other. The air-combat features corresponding to these intention pairs are highly similar and strongly deceptive, so the BiLSTM neural network cannot guarantee that the trained model weights differ markedly between these tactical intentions; the final Attention mechanism layer therefore cannot accurately perceive the weight difference between the two intentions, leading to a small number of mutual misrecognitions, which accords with the actual situation.
Comparative experiment with LSTM, SAE, BP, MLP and SVM models
In the experiment, over 100 iterations, the highest accuracy reached on the test set is taken as the model's accuracy, and the corresponding loss value as the model's loss value. The proposed BiLSTM-Attention model is compared against: a stacked autoencoder (SAE) intelligent tactical intention recognition model; an LSTM-based model for recognizing the tactical intention of enemy battlefield targets; a BP neural network aerial-target combat intention recognition model optimized with the ReLU function and the Adam algorithm; and the traditional multi-class models Support Vector Machine (SVM) and multilayer perceptron (MLP). The specific results of the comparative experiments are shown in Table 3:
TABLE 3 recognition results of different model intentions
From Table 3 it can be seen that the proposed BiLSTM-Attention model outperforms the other five models in both accuracy and loss value: its accuracy is roughly 30% higher than the two traditional machine-learning methods SVM and MLP, about 20% higher than the traditional neural-network method, and 2.8% higher than the basic LSTM model, verifying the effectiveness of the proposed model for identifying aerial-target tactical intentions. Further analysis shows that the RNN-based time-series network models, namely LSTM and BiLSTM-Attention, are better suited to aerial-target tactical intention recognition than the other models, which further indicates that judging intention according to changes in time-series features is more scientific.
Model ablation experiment
Although the comparison of the BiLSTM-Attention model with the LSTM, SAE, BP, MLP and SVM models fully shows that it offers high accuracy and a low loss value and can accurately identify aerial-target tactical intentions, those comparisons involve models of different types rather than controlled variants of one model, and so carry limited experimental persuasiveness on their own. Therefore, model ablation experiments were conducted on the same data set; the experimental results are shown in Table 4, FIG. 9 and FIG. 10:
It can be seen from Table 4 that the accuracy of the proposed model reaches 97.3%, an improvement of 2.8%, 1.5% and 1.1% over the LSTM, LSTM-Attention and BiLSTM models respectively. The loss value of the proposed model is also lower than those of the other three models. Analysis of the accuracy and loss-value curves of the ablation experiments in FIG. 9 and FIG. 10 shows that, as training epochs increase, accuracy generally improves and the loss value keeps falling for all four models, with the BiLSTM-Attention model consistently ahead of the other three. Shortly after training begins, the accuracy and loss values of BiLSTM-Attention and BiLSTM are already clearly better than those of the other two models, indicating that the bidirectional propagation mechanism effectively improves training: with the same batch size, learning rate and number of training epochs, the neural network model learns faster. The curves of the BiLSTM and LSTM-Attention models are close to each other and clearly better than that of the LSTM model, showing that introducing either the bidirectional propagation mechanism or the Attention mechanism markedly improves the LSTM model.
Three further model evaluation indexes are introduced alongside accuracy (the proportion of correctly recognized samples among all samples): precision (the proportion of correctly recognized positive samples among the samples the intention recognizer judged positive), recall (the proportion of correctly recognized positive samples among the actual positive samples) and F1-score (the harmonic mean of precision and recall). These further verify the superiority of the model.
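These indexes can be computed directly from a confusion matrix such as Table 2; the sketch below uses a toy 3-class matrix, not the patent's results.

```python
import numpy as np

def per_class_metrics(cm):
    """Precision, recall and F1 per class from a confusion matrix cm,
    with rows = true intention and columns = predicted intention."""
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)                      # correctly recognized per class
    precision = tp / cm.sum(axis=0)       # correct / predicted-as-class
    recall = tp / cm.sum(axis=1)          # correct / actually-in-class
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = tp.sum() / cm.sum()        # correct / all samples
    return precision, recall, f1, accuracy

# toy 3-class confusion matrix (illustrative counts)
cm = [[50, 2, 0],
      [3, 40, 5],
      [0, 1, 60]]
precision, recall, f1, accuracy = per_class_metrics(cm)
```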
Table 4 model ablation experimental results
TABLE 5 intent identification Performance metrics
In Table 5, ①, ②, ③ and ④ denote the BiLSTM-Attention, BiLSTM, LSTM-Attention and LSTM aerial-target tactical intention recognition models respectively. The table shows that all four models have relatively low recognition rates for the feint attack and surveillance intentions and a high recognition rate for the withdrawal intention. The BiLSTM and LSTM-Attention models differ little on the three evaluation indexes but are clearly better than LSTM, while the BiLSTM-Attention model leads on every evaluation index, so the model provided by the invention identifies aerial-target tactical intentions with higher precision.
The invention analyzes the characteristics of the aerial-target tactical intention recognition problem, adopts a layered strategy to select 17-dimensional air-combat features in total from the two aspects of threat degree and combat mission, divides the features into numerical and non-numerical data for separate encoding and normalization, and encapsulates the cognitive experience of decision-makers as labels. The invention further provides a BiLSTM-Attention aerial-target tactical intention recognition model that exploits changes in the target's time-series features: the BiLSTM neural network fully learns the 12 consecutive frames of air-combat feature information and extracts deeper features, after which the Attention mechanism assigns different weights to the features so as to recognize intention more accurately. Compared with other models, the model learns quickly and recognizes with high accuracy. Its shortcoming lies in recognizing intentions whose air-combat features are highly similar and strongly deceptive; how intention should be recognized when changes occur within the detected time series is the focus of the next research.

Claims (7)

1. A tactical intention identification method based on BiLSTM-Attention, characterized by comprising the following steps:
step S1: describing the air target intention identification as a mapping of air war intention identification characteristics to air war intention types;
step S2: and establishing a BiLSTM-Attention-based air war intention identification model.
2. The BiLSTM-Attention-based tactical intention identification method of claim 1, wherein: said step S1 determines the mapping from the time-series feature set V^T to the tactical intention space I by the following equation (1):

I = f(V^T)    (1)

wherein I = (i_1, i_2, …, i_n) is the aerial-target tactical intention space, namely the seven intention types {defense, feint attack, attack, reconnaissance, withdrawal, surveillance, electronic interference}; V^T is the set of time-series features from t_1 to t_T; and the function f is the mapping function between the intention type and the real-time battlefield feature information collected at each moment.
3. The BiLSTM-Attention-based tactical intention identification method of claim 2, wherein said step S1 further comprises:
step S11: spatial description of the target's tactical intention
establishing a tactical intention space of the enemy target, comprising the seven intention types {penetration, feint attack, attack, reconnaissance, withdrawal, surveillance, electronic interference};
step S12: description of the aerial-target tactical intention recognition features
the air-combat capability factor is also an important factor in determining the target's threat degree; for the air-combat capability of a warplane, a single-aircraft air-combat capability threat function C is constructed:

C = [ln ε_1 + ln(ε_2 + 1) + ln(Σε_3 + 1)] ε_4 ε_5 ε_6 ε_7    (2)

wherein ε_2 is the performance of the airborne weapons, ε_3 the detection capability of the airborne equipment, ε_4 the basic flight performance of the warplane, ε_5 the operational performance of the warplane, ε_6 the operational survivability of the warplane, and ε_7 the electronic-information countermeasure performance.
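A direct transcription of equation (2) is sketched below; all ε values are illustrative placeholders, since the patent does not specify numerical ranges for them.

```python
import math

def air_combat_capability(eps1, eps2, eps3_list, eps4, eps5, eps6, eps7):
    """Single-aircraft air-combat capability threat function C per
    equation (2). eps3_list gathers the detection-capability terms
    summed inside the logarithm; all values here are placeholders."""
    return (math.log(eps1)
            + math.log(eps2 + 1)
            + math.log(sum(eps3_list) + 1)) * eps4 * eps5 * eps6 * eps7

# toy inputs: each epsilon is an illustrative positive score
C = air_combat_capability(2.0, 3.0, [1.0, 0.5], 1.2, 1.1, 0.9, 1.0)
```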
4. The BiLSTM-Attention-based tactical intention identification method of claim 1, wherein said step S2 comprises: the BiLSTM-Attention model is divided into three parts: an air-combat feature-vector input layer, a hidden layer and an output layer, wherein the hidden layer comprises a BiLSTM layer, an Attention layer and a Dense layer.
5. The BiLSTM-Attention-based tactical intention identification method of claim 2, wherein said step S2 comprises:
step S21: calculating an air combat characteristic vector input layer;
step S22: the hidden layer is computed.
6. The BiLSTM-Attention-based tactical intention identification method of claim 5, wherein said step S21 comprises:
step S211: reading the acquired data and cleaning the data;
step S212: normalizing the numerical air-combat feature data;
normalization is applied to the 11 numerical air-combat features: enemy-aircraft acceleration, enemy-aircraft altitude, enemy-aircraft speed, enemy-aircraft air-combat capability factor, heading angle, azimuth angle, own-aircraft acceleration, own-aircraft altitude, own-aircraft speed, own-aircraft air-combat capability factor, and the distance between the two sides;
for the x-th numerical feature F_x = [f_{x1}, f_{x2}, …, f_{xi}, …, f_{xn}] (x = 1, 2, …, 11), where n is the total number of data, the i-th raw data value f_{xi} of the x-th feature is mapped to the interval [0, 1], with result f'_{xi}, by the formula:

f'_{xi} = (f_{xi} - MinF_x) / (MaxF_x - MinF_x)    (3)

wherein: MinF_x is the minimum value of the x-th feature F_x; MaxF_x is the maximum value of the x-th feature F_x;
step S213: encoding the non-numerical air-combat feature data
the attribute data of the 4 states (air radar state, sea radar state, interference state and interfered state) are encoded as 0 and 1; for the two attribute data of maneuver type and enemy-aircraft type, encoded data are obtained for each non-numerical feature and then normalized;
step S214: encoding the 7 enemy-target combat intention types into category labels;
step S215: randomly shuffling the data and dividing it into a training set and a test set in the ratio 8:2;
after the operations of steps S211 to S215, the collected air-combat feature data has been transformed into a feature-vector form that the hidden layer can directly accept and process.
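The normalization of formula (3) and the 0/1 state encoding of step S213 can be sketched as follows; the feature values are toy placeholders.

```python
import numpy as np

def min_max_normalize(F):
    """Map each raw value f_xi of a numeric feature column F into
    [0, 1] per formula (3): (f - MinF_x) / (MaxF_x - MinF_x)."""
    F = np.asarray(F, dtype=float)
    return (F - F.min()) / (F.max() - F.min())

# numeric feature, e.g. enemy-aircraft altitude (toy values)
alt = min_max_normalize([1000.0, 3000.0, 2000.0, 5000.0])

# non-numeric state features encoded as 0/1, e.g. a radar on/off state
radar_state = np.array([1, 0, 1, 1])
```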
7. The BiLSTM-Attention-based tactical intention identification method of claim 5, wherein said step S22 comprises:
step S221: computing the BiLSTM layer
The calculation is performed by the following formula,
Γ_f = σ(W_f[h_{t-1}, x_t] + b_f)    (4)
Γ_u = σ(W_u[h_{t-1}, x_t] + b_u)    (5)
c̃_t = tanh(W_c[h_{t-1}, x_t] + b_c)    (6)
c_t = Γ_f * c_{t-1} + Γ_u * c̃_t    (7)
Γ_o = σ(W_o[h_{t-1}, x_t] + b_o)    (8)
h_t = Γ_o * tanh(c_t)    (9)

the value of the forgetting gate is calculated by formula (4), whose form shows that the gate value at time t is determined jointly by x_t and h_{t-1}; formula (5) calculates the value of the updating gate as the Sigmoid activation of (W_u[h_{t-1}, x_t] + b_u); formula (6) calculates the value of the candidate memory cell, determined by h_{t-1} and x_t; formula (7) combines c_{t-1} and c̃_t, regulated by Γ_f and Γ_u, into the new cell-state value c_t; formulas (8) and (9) calculate the output h_t of the final LSTM hidden state at time t, determined by h_{t-1} and x_t after the internal loop and update;
wherein x_t denotes the input feature at time t; c_{t-1} denotes the cell state before updating and c_t the updated cell state; h_{t-1} and h_t denote the output features of the previous and current moments respectively; Γ_f, Γ_u and Γ_o denote the forgetting gate, the updating gate and the output gate respectively; c̃_t is the candidate memory cell; σ is the Sigmoid function; W_f, W_u, W_c, W_o and b_f, b_u, b_c, b_o are the weight coefficient matrices and bias vectors of the respective parts;
the hidden-layer state of the BiLSTM at the current moment is determined jointly by three parts: the currently input x_t, the forward-propagated hidden state →h_{t-1} output at the previous moment, and the backward-propagated hidden state ←h_{t+1} output at the following moment; the BiLSTM can thus be seen to consist of two unidirectional LSTMs, so its hidden-layer state at time t is obtained from the forward hidden state →h_t and the backward hidden state ←h_t, calculated by formulas (10), (11) and (12), where w_i (i = 1, 2, …, 6) denotes the weight from one cell layer to another:

→h_t = f(w_1 x_t + w_2 →h_{t-1})    (10)
←h_t = f(w_3 x_t + w_5 ←h_{t+1})    (11)
O_t = g(w_4 →h_t + w_6 ←h_t)    (12)

step S222: computing the Attention layer
the hidden state s_t of each initial input is multiplied by the corresponding hidden-layer weight coefficient α_t, and the products are accumulated and summed to obtain the final output state vector Y; the calculation formulas are as follows:

e_t = tanh(w_t s_t + b_t)    (13)
α_t = exp(e_t) / Σ_i exp(e_i)    (14)
Y = Σ_t α_t s_t    (15)

wherein e_t denotes the energy value determined by the state vector s_t of the t-th feature vector, w_t denotes the weight coefficient matrix of the t-th feature vector, b_t denotes the bias corresponding to the t-th feature vector, and e_i has the same meaning as e_t; the conversion from the input initial state to the new attention state is realized by formula (14), the finally output state vector Y is then obtained through formula (15), and finally Y is integrated with the Dense layer and fed as the output value to the final output layer;
step S223: compute output layer
The input of the output layer is the output of the Attention mechanism layer in the hidden layer; a multi-class Softmax function is applied to this input to obtain the classification of the aerial target's tactical intention, by the following specific formula:

y_k = softmax(w_1 Y + b_1)    (16)

wherein: w_1 denotes the weight coefficient matrix to be trained from the Attention mechanism layer to the output layer; b_1 denotes the corresponding bias to be trained; y_k is the predicted label output by the output layer.
CN202111496364.XA 2021-03-29 2021-12-09 Tactical intention identification method based on BilSTM-Attention Pending CN114266355A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110335495 2021-03-29
CN2021103354953 2021-03-29

Publications (1)

Publication Number Publication Date
CN114266355A true CN114266355A (en) 2022-04-01

Family

ID=80826609

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111496364.XA Pending CN114266355A (en) 2021-03-29 2021-12-09 Tactical intention identification method based on BilSTM-Attention

Country Status (1)

Country Link
CN (1) CN114266355A (en)


Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115659229A (en) * 2022-12-27 2023-01-31 四川迪晟新达类脑智能技术有限公司 Low, small and slow target threat degree evaluation method and device
CN115952909A (en) * 2022-12-31 2023-04-11 中国电子科技集团公司信息科学研究院 Target threat estimation method and device based on combined empowerment and LSTM
CN116029379A (en) * 2022-12-31 2023-04-28 中国电子科技集团公司信息科学研究院 Method for constructing air target intention recognition model
CN116029379B (en) * 2022-12-31 2024-01-02 中国电子科技集团公司信息科学研究院 Method for constructing air target intention recognition model
CN116383731A (en) * 2023-03-06 2023-07-04 南京航空航天大学 Tactical maneuver identification method, tactical maneuver identification system, electronic equipment and storage medium
CN116383731B (en) * 2023-03-06 2023-11-14 南京航空航天大学 Tactical maneuver identification method, tactical maneuver identification system, electronic equipment and storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination