CN114297529A - Mobile cluster trajectory prediction method based on a spatial attention network

- Publication number: CN114297529A (application CN202111629230.0A)
- Authority: CN (China)
- Legal status: Pending (assumed status; not a legal conclusion)
Abstract
The invention provides a mobile cluster trajectory prediction method based on a spatial attention network, in the technical field of trajectory prediction. The method studies the trajectory and interactions of each entity in all combat units from both the temporal and the spatial perspective. In the spatial layer, the motion states of the surrounding combat entities are acquired by a spatial LSTM, and vectors describing local spatial features are extracted to capture the interactions between combat entities. These spatial feature vectors are then passed to an attention module, which computes how strongly the motion patterns of the surrounding combat entities influence the target entity's trajectory. An attention LSTM is then used to derive the motion feature of the surrounding combat entities. In the temporal layer, the motion feature of the surrounding combat entities and the state of the target combat entity are used as input to a temporal LSTM, which describes how the combat entity's trajectory varies over time.
Description
Technical Field
The invention relates to the technical field of trajectory prediction, and in particular to a method for predicting the trajectory of a mobile cluster based on a spatial attention network.
Background
In an integrated, intelligent cooperative combat network, ground combat clusters continuously change position according to the actual combat situation; to ensure accurate strikes, the dynamic movement of enemy combat clusters must be predicted accurately. In addition, unmanned aerial vehicle (UAV) relays can provide emergency communication links in areas without infrastructure coverage. To keep the ground combat cluster within communication coverage, the UAV cluster must adjust its aerial position according to the movement trajectories of the ground entities, enabling subsequent services such as data transmission. Meeting these requirements demands prediction of the future positions of both ground and aerial entities.
Existing trajectory prediction methods fall into two categories. In the first, the model outputs several possible trajectory sequences together with their probabilities; the predefined entity motion types agree with a driver's intuition and are interpretable. In the second, the model outputs a single trajectory sequence, which avoids the prediction errors caused by incorrect motion-type classification and is better suited to complex scenes.
In a combat scene, multiple kinds of entities coexist, such as UAVs, combat vehicles, and infantry, and different types of combat entities have different motion patterns, including size, speed, and turning radius. The scene therefore places high demands on the accuracy of each combat entity's future behavior planning.
Trajectory prediction is a classic time-series analysis problem that has attracted extensive research. Existing methods, however, have two shortcomings. First, because each mobile entity can change the movement behavior of the other mobile entities around it, effectively combining the spatial and temporal effects of mobile entities remains an open problem. Second, existing methods ignore that the strength of interaction between mobile entities differs with factors such as distance, which reduces the accuracy of trajectory prediction.
Disclosure of Invention
To address the shortcomings of the prior art, the invention provides a mobile cluster trajectory prediction method based on a spatial attention network. A real-time trajectory prediction model is built from the position and state information of the ground combat cluster and the UAV cluster. By processing the data of all entities, the model captures the movement behaviors of different entity types and mines heterogeneous motion features. It exploits LSTMs to mine the interactions between neighboring entities and introduces an attention mechanism to compute the strength of each interaction, aggregating the movement features of the different entities. From the historical state data of the ground combat cluster and the UAV cluster, the model predicts the position sequence over a future time window, providing a data basis for cooperative-combat communication and precision strikes.
In order to solve the technical problems, the technical scheme adopted by the invention is as follows:
a mobile cluster trajectory prediction method based on a spatial attention network comprises the following steps:
step 1: dividing all mobile units in the combat space into target combat entities and surrounding combat entities;
the target combat entity is an entity with a predicted movement track;
the surrounding combat entities are the other entities that travel around the target combat entity and can influence its heading;
step 2: construct the combat entity's motion state f_i^t and spatial relation r_ij^t from the motion states, namely the position, size, and type, of the target combat entity and the surrounding combat entities, and form the target combat entity's motion-state sequence {f_i^1, ..., f_i^T} and the spatial-relation sequence {r_ij^1, ..., r_ij^T} between it and each surrounding combat entity;
The motion state f_i^t of a target combat entity a_i at time t, including position, size, and type, is recorded as f_i^t = (x_i^t, y_i^t, l_i, w_i, c_i), where x_i^t and y_i^t are the x- and y-coordinates of the target combat entity at time t, l_i and w_i are its length and width, and c_i ∈ {1, 2, 3} is its category: 1 denotes a UAV, 2 a combat vehicle, and 3 an infantryman;
The spatial relation r_ij^t comprises the spatial features between the target combat entity a_i and a surrounding combat entity a_j at time t, defined as r_ij^t = (x_j^t − x_i^t, y_j^t − y_i^t, c_ij), where the relative position (x_j^t − x_i^t, y_j^t − y_i^t) indicates the positional relationship between the two combat entities and c_ij = [c_i; c_j] is a unique code corresponding to this spatial relationship, c_i being the category of the target combat entity a_i and c_j the category of the surrounding combat entity a_j;
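As a concrete illustration of step 2, the following minimal Python sketch builds f_i^t and r_ij^t for one pair of entities. The field layout, the relative-position encoding, and the category constants are illustrative assumptions, not the patent's exact encoding.

```python
import numpy as np

# Hypothetical sketch of the per-entity motion state f_i^t and the pairwise
# spatial relation r_ij^t. Field names and layout are illustrative.

UAV, VEHICLE, INFANTRY = 1, 2, 3

def motion_state(x, y, length, width, category):
    """f_i^t = (x, y, l, w, c): position, size, and type of entity a_i at time t."""
    return np.array([x, y, length, width, category], dtype=float)

def spatial_relation(f_i, f_j):
    """r_ij^t: relative position of a_j w.r.t. a_i plus the category code
    c_ij = [c_i; c_j] identifying this pair of entity types."""
    dx, dy = f_j[0] - f_i[0], f_j[1] - f_i[1]
    c_ij = np.array([f_i[4], f_j[4]])
    return np.concatenate([[dx, dy], c_ij])
```

A vehicle at the origin and an infantryman at (3, 4), for example, yield the relation vector (3, 4, 2, 3).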
step 3: establish a spatial LSTM model, treat the spatial interactions between different combat entities as a time series processed by the spatial LSTM model, and extract the combat entities' spatial interaction features h_ij^t based on the spatial LSTM model;
Step 3.1: create a spatial-relation embedding vector e_ij^t from the spatial relation r_ij^t, as shown in equation (1):

e_ij^t = φ(W_spa1 · r_ij^t + b_spa1)    (1)

where W_spa1 and b_spa1 are the weight matrix and bias vector of the embedding function of equation (1), and φ(·) is a nonlinear activation function;
Establish the spatial LSTM model: use the long short-term memory LSTM model to mine how the spatial interaction between the target combat entity and the surrounding combat entities changes over time and to extract spatial interaction features; memory units are added to every neural unit in the hidden layer of the LSTM model, each memory unit containing 1 cell and 3 gates. The long short-term memory (LSTM) model is a neural network for processing sequence data: conventional LSTM models are used to analyze time series, while the spatial LSTM model performs better on longer sequences.
Step 3.2: take the embedding vector e_ij^t as the input of the spatial LSTM model; the output vector of the spatial LSTM model is shown in equation (2):

h_ij^t = LSTM(h_ij^{t−1}, e_ij^t; W_spa2, b_spa2)    (2)

where the hidden state vector h_ij^t is a latent representation containing useful spatial information, describing the interaction between the target and the surrounding combat entities over time, and W_spa2 and b_spa2 are the weight matrix and bias vector of the spatial LSTM model;
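Steps 3.1 and 3.2 can be sketched with a plain-numpy LSTM cell. The cell is a generic LSTM (1 cell state plus input/forget/output gates, matching the memory-unit description above); the dimensions, the ReLU choice for φ, and the random initialisation are assumptions for illustration.

```python
import numpy as np

# Minimal numpy sketch of step 3: embed r_ij^t (eq. 1) and feed it through an
# LSTM cell (eq. 2) to obtain the spatial interaction feature h_ij^t.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class LSTMCell:
    """One LSTM memory unit: 1 cell state plus input/forget/output gates."""
    def __init__(self, in_dim, hid_dim):
        self.W = rng.standard_normal((4 * hid_dim, in_dim + hid_dim)) * 0.1
        self.b = np.zeros(4 * hid_dim)

    def step(self, x, h, c):
        z = self.W @ np.concatenate([x, h]) + self.b
        i, f, o, g = np.split(z, 4)
        c_new = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
        h_new = sigmoid(o) * np.tanh(c_new)
        return h_new, c_new

def embed(r, W_spa1, b_spa1):
    """Eq. (1): e_ij^t = phi(W_spa1 r_ij^t + b_spa1), with phi = ReLU here."""
    return np.maximum(0.0, W_spa1 @ r + b_spa1)

# Toy run: 5 time steps of a 4-dim spatial relation -> 8-dim hidden feature.
W_spa1, b_spa1 = rng.standard_normal((6, 4)) * 0.1, np.zeros(6)
cell = LSTMCell(in_dim=6, hid_dim=8)
h, c = np.zeros(8), np.zeros(8)
for t in range(5):
    r_t = rng.standard_normal(4)
    h, c = cell.step(embed(r_t, W_spa1, b_spa1), h, c)
```

The final h plays the role of h_ij^t, the latent spatial representation carried forward to the attention module.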
Step 4: establish an attention module and, based on it, extract the composite influence q_i^t of the surrounding combat entities on the target combat entity;
The attention module's input at time t is the set of spatial interaction features {h_ij^t} between the target combat entity and each surrounding combat entity, together with the global motion feature d_i^t of the surrounding combat entities at the current moment; its output is the composite influence q_i^t of all surrounding combat entities on the target combat entity at the current moment. Through the attention mechanism, the dynamic correlations among different entities at time t are captured adaptively;
Step 4.1: define the computation strategy for the attention scores;
The attention score of each spatial interaction feature is computed as shown in equation (3):

o_ij^t = V_l^T · tanh(W_l · d_i^t + U_l · h_ij^t + b_l)    (3)

where o_ij^t is the attention score of each spatial interaction feature at time t, d_i^t is the global motion feature of the surrounding combat entities at the current moment, h_ij^t is the spatial interaction feature, V_l, W_l, and U_l are the weight matrices of the attention function of equation (3), and b_l is a bias vector;
Step 4.2: normalize the attention scores numerically with the softmax function, so that the normalized element values sum to 1, as shown in equation (4):

α_ij^t = exp(o_ij^t) / Σ_j exp(o_ij^t)    (4)

where α_ij^t represents the degree to which the motion state of surrounding combat entity a_j influences the future trajectory of the target combat entity a_i;
Step 4.3: compute the composite influence q_i^t of the surrounding combat entities on the target combat entity;
According to the differing degrees of interaction with the target combat entity a_i, the hidden state vectors of all surrounding combat entities are weight-summed, as shown in equation (5), yielding the composite influence:

q_i^t = Σ_j α_ij^t · h_ij^t    (5)

The composite influence q_i^t evaluates the spatial interaction between the target combat entity a_i and its surrounding combat entities, i.e., the combined effect of all other entities on a_i;
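Equations (3)-(5) together form one additive-attention step; the numpy sketch below assumes the tanh scoring form and illustrative shapes, and is not the patent's exact parameterisation.

```python
import numpy as np

# Sketch of the attention module in step 4: score each spatial feature h_ij^t
# against the global motion feature d_i^t (eq. 3), normalise with softmax
# (eq. 4), and weight-sum the features into the composite influence q_i^t (eq. 5).

rng = np.random.default_rng(1)

def attention(d_i, H, V_l, W_l, U_l, b_l):
    """H: (n, hid) matrix stacking h_ij^t for all n surrounding entities."""
    scores = np.array([V_l @ np.tanh(W_l @ d_i + U_l @ h + b_l) for h in H])  # eq. (3)
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                                                      # eq. (4)
    q_i = alpha @ H                                                           # eq. (5)
    return alpha, q_i

hid, n = 8, 4
d_i = rng.standard_normal(hid)
H = rng.standard_normal((n, hid))
V_l = rng.standard_normal(hid)
W_l, U_l = rng.standard_normal((hid, hid)), rng.standard_normal((hid, hid))
alpha, q_i = attention(d_i, H, V_l, W_l, U_l, np.zeros(hid))
```

Subtracting the maximum score before exponentiation is a standard numerical-stability trick and does not change the softmax result.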
Step 5: design a dynamic attention mechanism based on an attention LSTM model to update the global motion feature d_i^t of the surrounding combat entities;
The attention LSTM model is a recurrent neural network: it combines the long short-term memory LSTM model with the computed attention scores, so that within the attention LSTM the global motion feature d_i^t of the surrounding combat entities is continuously updated over time;
Specifically, the update corrects the global motion feature d_i^t of the surrounding entities using the composite influence q_i^t of the surrounding combat entities on the target combat entity at the current moment: q_i^t and d_i^t are fed jointly into the attention LSTM model to produce the surrounding entities' motion feature at the next moment, d_i^{t+1}, as shown in equation (6):

d_i^{t+1} = LSTM(d_i^t, q_i^t; W_attn, b_attn)    (6)

where W_attn and b_attn are the weight matrix and bias vector of the attention LSTM model.
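A minimal sketch of the attention LSTM update of equation (6). Treating the concatenation [q_i^t; d_i^t] as the cell input and the specific gate layout are assumptions for illustration.

```python
import numpy as np

# Sketch of step 5 (eq. 6): the attention LSTM corrects the global motion
# feature d_i^t with the composite influence q_i^t and emits d_i^{t+1}.

rng = np.random.default_rng(2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attn_lstm_step(q, d, c, W_attn, b_attn):
    """d_i^{t+1}, c^{t+1} = LSTM(d_i^t, q_i^t; W_attn, b_attn)."""
    z = W_attn @ np.concatenate([q, d]) + b_attn
    i, f, o, g = np.split(z, 4)
    c_new = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
    d_new = sigmoid(o) * np.tanh(c_new)
    return d_new, c_new

hid = 8
W_attn = rng.standard_normal((4 * hid, 2 * hid)) * 0.1
q, d, c = rng.standard_normal(hid), np.zeros(hid), np.zeros(hid)
d_next, c_next = attn_lstm_step(q, d, c, W_attn, np.zeros(4 * hid))
```

Calling this step once per time instant keeps d_i^t evolving with the surrounding entities' real-time state, as the text describes.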
Step 6: extracting the motion characteristics of the target combat entity along with the time change based on the time LSTM model;
step 6.1: extracting heterogeneous characteristics of the target combat entity;
the heterogeneous characteristics are the size, the speed, the turning radius and the type of the target combat entity;
as shown in equation (7), when the angular velocity ω is constant, the linear velocity v increases as the radius r increases.
v=ωr (7)
Step 6.2: aggregating movement characteristics of the target combat entities;
From the motion state f_i^t of the target entity and the global motion feature d_i^t of the surrounding entities output by the attention LSTM module, create the motion-state embedding vector e_i^t as the input of the temporal LSTM module. The temporal LSTM model is an LSTM that combines the motion state of the target combat entity with that of the surrounding combat entities to analyze the entity's trajectory time series. The embedding is given by equation (8):

e_i^t = φ(W_tem1 · [f_i^t; d_i^t] + b_tem1)    (8)

where W_tem1 and b_tem1 are the weight matrix and bias vector of the embedding function of equation (8), and φ(·) is a nonlinear activation function;
The target combat entity's trajectory time series is analyzed with the temporal LSTM model: the embedding vector e_i^t is fed into the temporal LSTM, as shown in equation (9):

h_i^{t+1} = LSTM(h_i^t, e_i^t; W_tem2, b_tem2)    (9)

where W_tem2 and b_tem2 are the weight matrix and bias vector of the temporal LSTM model, and the entity's motion feature h_i^{t+1}, the output of the temporal LSTM model, represents the target combat entity's motion feature at the next moment;
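Step 6.2's equations (8)-(9) can be sketched as an embedding followed by one temporal LSTM step. The ReLU choice for φ, the concatenation layout, and all dimensions are illustrative assumptions.

```python
import numpy as np

# Sketch of step 6.2: fuse the target's own state f_i^t with the surrounding
# entities' global feature d_i^t into e_i^t (eq. 8), then advance the temporal
# LSTM one step to get the target's motion feature h_i^{t+1} (eq. 9).

rng = np.random.default_rng(3)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def embed_state(f, d, W_tem1, b_tem1):
    """Eq. (8): e_i^t = phi(W_tem1 [f_i^t; d_i^t] + b_tem1)."""
    return np.maximum(0.0, W_tem1 @ np.concatenate([f, d]) + b_tem1)

def temporal_lstm_step(e, h, c, W_tem2, b_tem2):
    """Eq. (9): one step of the temporal LSTM, h_i^{t+1} = LSTM(h_i^t, e_i^t)."""
    z = W_tem2 @ np.concatenate([e, h]) + b_tem2
    i, f, o, g = np.split(z, 4)
    c_new = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
    return sigmoid(o) * np.tanh(c_new), c_new

f_dim, hid = 5, 8
W_tem1 = rng.standard_normal((hid, f_dim + hid)) * 0.1
W_tem2 = rng.standard_normal((4 * hid, 2 * hid)) * 0.1
f_i, d_i = rng.standard_normal(f_dim), rng.standard_normal(hid)
e_i = embed_state(f_i, d_i, W_tem1, np.zeros(hid))
h_next, c_next = temporal_lstm_step(e_i, np.zeros(hid), np.zeros(hid), W_tem2, np.zeros(4 * hid))
```

The resulting h_next is the hidden vector from which step 7 derives the distribution parameters of the next position.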
Step 7: predict the position of the target combat entity at the next moment using a probability density function;
Step 7.1: assume the position of the combat entity at the next moment follows a two-dimensional normal distribution, whose probability density function is shown in equation (10); its parameters, the expectations (μ_x, μ_y), standard deviations (σ_x, σ_y), and correlation coefficient ρ, are determined from the hidden vector h_i^t at time t:

f(x, y | μ, σ, ρ) = 1 / (2π σ_x σ_y √(1 − ρ²)) · exp{ −[ (x − μ_x)²/σ_x² − 2ρ(x − μ_x)(y − μ_y)/(σ_x σ_y) + (y − μ_y)²/σ_y² ] / (2(1 − ρ²)) }    (10)

where μ_x is the expectation of the lateral dimension of the two-dimensional normal distribution, μ_y the expectation of the longitudinal dimension, σ_x the standard deviation of the lateral dimension, σ_y the standard deviation of the longitudinal dimension, i indexes the target combat entity a_i, and x and y are the lateral and longitudinal predicted positions of the target combat entity. The parameters of the two-dimensional normal distribution are obtained by a linear transformation of h_i^t, as shown in equation (11):

(μ_i^{t+1}, σ_i^{t+1}, ρ_i^{t+1}) = W_p · h_i^t + b_p    (11)

where W_p is a weight matrix and b_p a bias vector.
Step 7.2: the parameter mu to be obtainedt+1 i,σt+1 i,The probability density of different positions in the designated area is calculated by substituting into formula (10), and the predicted position of the combat entity at the next moment is shown by formula (12):
Step 8: train the spatial LSTM, attention LSTM, and temporal LSTM models by minimizing the negative log-likelihood loss function of the i-th target combat entity;
The negative log-likelihood loss function is shown in equation (13):

L_i = − Σ_{t = T_obs+1}^{T_pred} log f(x_i^t, y_i^t | μ_i^t, σ_i^t, ρ_i^t)    (13)

where L_i is the negative log-likelihood loss function of the target combat entity a_i, T_pred is the prediction period, T_obs is the history period, f(·) is equation (10), and x_i^t and y_i^t are the lateral and longitudinal positions of the target combat entity a_i at time t.
Back-propagation and gradient descent are iterated to optimize the whole model, comprising the spatial LSTM, attention LSTM, and temporal LSTM, against the negative log-likelihood loss function, minimizing the position error at each time step.
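The loss of equation (13) can be sketched directly by summing negative log-densities over the predicted window; the trajectory and parameter values below are illustrative.

```python
import numpy as np

# Sketch of step 8 (eq. 13): negative log-likelihood of observed future
# positions under the predicted bivariate normals, the quantity minimised
# by back-propagation and gradient descent.

def binormal_pdf(x, y, mu_x, mu_y, sx, sy, rho):
    """Eq. (10): bivariate normal density."""
    zx, zy = (x - mu_x) / sx, (y - mu_y) / sy
    z = zx**2 - 2 * rho * zx * zy + zy**2
    return np.exp(-z / (2 * (1 - rho**2))) / (2 * np.pi * sx * sy * np.sqrt(1 - rho**2))

def nll_loss(traj, params):
    """Eq. (13): L_i = -sum_t log f(x_i^t, y_i^t | mu^t, sigma^t, rho^t)."""
    return -sum(np.log(binormal_pdf(x, y, *p)) for (x, y), p in zip(traj, params))

# Illustrative two-step prediction window.
traj = [(1.0, 0.5), (1.5, 1.0)]
params = [(1.0, 0.5, 0.4, 0.4, 0.0), (1.4, 1.1, 0.4, 0.4, 0.0)]
loss = nll_loss(traj, params)
```

The closer the predicted means track the observed positions, the smaller the loss, which is exactly the signal the gradient-descent training exploits.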
The beneficial effects of the above technical scheme are as follows:
The invention provides a mobile cluster trajectory prediction method based on a spatial attention network. A dynamic spatial attention mechanism is designed on top of the LSTM model to adaptively assign weights to all spatial interactions and capture the differing influences of the surrounding entities. In particular, the invention implicitly accounts for the influence of a combat entity's size, speed, and turning radius on its own movement behavior, so as to accurately predict its future trajectory. By considering as many influencing factors as possible, the method improves the accuracy of combat-entity trajectory prediction and ensures accurate strikes on enemy combat clusters in the battlefield environment.
Drawings
FIG. 1 is a flow chart of a method for predicting a trajectory of a mobile cluster based on a spatial attention network according to the present invention;
FIG. 2 is a schematic diagram of the original movement behavior of different combat entities in the battlefield environment of the present invention;
FIG. 3 is a schematic diagram of the movement behavior of different combat entities in the battlefield environment adjusted by spatial interaction;
FIG. 4 is a block diagram of an attention module of the present invention;
FIG. 5 is a schematic diagram of the dynamic attention mechanism of the present invention;
FIG. 6 is a block diagram of a mobile feature aggregation module according to the present invention;
FIG. 7 is a schematic diagram of the probability density of trajectory prediction in the present invention.
Detailed Description
The following detailed description of embodiments of the present invention is provided in connection with the accompanying drawings and examples. The following examples are intended to illustrate the invention but are not intended to limit the scope of the invention.
A method for predicting a moving cluster trajectory based on a spatial attention network, as shown in FIG. 1, includes the following steps:
step 1: dividing all mobile units in the combat space into target combat entities and surrounding combat entities;
the target combat entity is an entity with a predicted movement track;
the surrounding combat entities are the other entities that travel around the target combat entity and can influence its heading;
the embodiment of the invention solves the problem of track prediction of heterogeneous combat entities in a battlefield environment. This scene mainly relates to two aspects, at first, has a plurality of entities of fighting such as unmanned aerial vehicle, war chariot, infantry in the scene of cooperative operation, and secondly, in actual operation environment, each unit of fighting distributes densely, and the motion trail of every entity all can be influenced by other entities on every side. In order to distinguish different entities in the track prediction process, an entity of which the movement track is to be predicted is called a target combat entity, and other entities which travel around the target combat entity and can influence the advancing direction of the target combat entity become surrounding combat entities.
Step 2: construct the combat entity's motion state f_i^t and spatial relation r_ij^t from the motion states, namely the position, size, and type, of the target combat entity and the surrounding combat entities, and form the target combat entity's motion-state sequence {f_i^1, ..., f_i^T} and the spatial-relation sequence {r_ij^1, ..., r_ij^T} between it and each surrounding combat entity;
The motion state f_i^t of a target combat entity a_i at time t, including position, size, and type, is recorded as f_i^t = (x_i^t, y_i^t, l_i, w_i, c_i), where x_i^t and y_i^t are the x- and y-coordinates of the target combat entity at time t, l_i and w_i are its length and width, and c_i ∈ {1, 2, 3} is its category: 1 denotes a UAV, 2 a combat vehicle, and 3 an infantryman;
The spatial relation r_ij^t accounts for the interaction between the target combat entity and different surrounding combat entities. The spatial features between the target combat entity a_i and a surrounding combat entity a_j at time t are defined as r_ij^t = (x_j^t − x_i^t, y_j^t − y_i^t, c_ij), where the relative position (x_j^t − x_i^t, y_j^t − y_i^t) indicates the positional relationship between the two combat entities and c_ij = [c_i; c_j] is a unique code corresponding to this spatial relationship, c_i being the category of the target combat entity a_i and c_j the category of the surrounding combat entity a_j. This definition expresses the spatial dependence of the two combat entities to a certain extent.
f_i^t and r_ij^t include the position, displacement, size, and category of each combat entity. The velocity of a combat entity can be represented by its displacement between adjacent frames: different displacements imply different speeds. Combat entities of different sizes and categories differ significantly in turning radius, and speed is closely related to turning radius: the faster the speed, the larger the turning radius. For example, a vehicle that turns at an inappropriate speed may lose control, such as tipping over. Thus the motion state f_i^t and the spatial relation r_ij^t are defined with two factors implicitly taken into account: the combat entity's speed and its turning radius.
Step 3: establish a spatial LSTM model, treat the spatial interactions between different combat entities as a time series processed by the spatial LSTM model, and extract the combat entities' spatial interaction features h_ij^t based on the spatial LSTM model;
In a battlefield environment, several other entities surround each target combat entity, whose future trajectory may be influenced by the movements of the surrounding combat entities; that is, spatial interactions may exist between different combat entities, as shown in FIGS. 2 and 3. In FIG. 2, entity a_s moves normally along its originally planned path when entity a_1 suddenly moves toward a_s. To ensure driving safety, a_s must change its original motion trajectory, for example by accelerating or changing its direction of movement. At the same time, a_s must also move reasonably with respect to the combat entities ahead of it, a_2, a_4, and a_6, to avoid collisions, as shown in FIG. 3.
Over time, the spatial relationships between combat entities also change as their motion states change. For example, if two adjacent slow-moving vehicles are on the battlefield and one of them suddenly speeds up at the next moment, their spatial correlation tightens as the speed increases, because the change in speed alters the entities' relative positions and thereby the spatial relationship between the two. For this reason, the spatial interaction between different combat entities is treated as a time series that the LSTM model can process, so that changes in the interaction features over time can be captured.
Step 3.1: create a spatial-relation embedding vector e_ij^t from the spatial relation r_ij^t, as shown in equation (1):

e_ij^t = φ(W_spa1 · r_ij^t + b_spa1)    (1)

where W_spa1 and b_spa1 are the weight matrix and bias vector of the embedding function of equation (1), and φ(·) is a nonlinear activation function;
Establish the spatial LSTM model: use the long short-term memory LSTM model to mine how the spatial interaction between the target combat entity and the surrounding combat entities changes over time and to extract spatial interaction features; memory units are added to every neural unit in the hidden layer of the LSTM model, each memory unit containing 1 cell and 3 gates.
The long short-term memory (LSTM) model is a neural network for processing sequence data. Conventional LSTM models are used to analyze time series, while the spatial LSTM model performs better on longer sequences.
Step 3.2: take the embedding vector e_ij^t as the input of the spatial LSTM model; the output vector of the spatial LSTM model is shown in equation (2):

h_ij^t = LSTM(h_ij^{t−1}, e_ij^t; W_spa2, b_spa2)    (2)

where the hidden state vector h_ij^t is a latent representation containing useful spatial information, describing the interaction between the target and the surrounding combat entities over time, and W_spa2 and b_spa2 are the weight matrix and bias vector of the spatial LSTM model;
Step 4: establish an attention module and, based on it, extract the composite influence q_i^t of the surrounding combat entities on the target combat entity;
The movements of the surrounding combat entities can indirectly change the behavior trajectory of the target combat entity, so the different spatial interaction feature vectors must be aggregated. For different combat entities on the same battlefield, the degree to which other entities influence the target combat entity differs with factors such as position and speed. For example, the effects on the target combat entity of movements by entities in its direction of travel should be prioritized, including trajectory changes caused by sudden deceleration of a preceding entity or sudden acceleration of a following entity. Moreover, an entity with a large speed change is clearly more dangerous. The degrees of interaction between combat entities are therefore also different. If changes in the interaction degree between entities can be captured in time, the possibility of future accidents is greatly reduced, which plays a key role in the trajectory prediction algorithm.
The attention mechanism is a data-processing method in machine learning and is widely applied to different types of tasks. Modeled on how human attention works, it can focus on important information, lets the neural network select which input features to concentrate on, and alleviates the information-overload problem to some extent. The idea of the attention mechanism is well suited to the problem the invention addresses: selectively capturing useful interaction features while selectively ignoring useless ones.
The invention designs a new attention module that selectively focuses on some important features and selectively ignores others. As mentioned above, spatial interactions exist between combat entities, but not all interactions have a significant impact on the target entity. Moreover, for each entity, the degree of interaction between the other entities and the target combat entity differs with factors such as position and speed; for example, the effects of movements in the target combat entity's direction of travel, such as sudden deceleration of the front entity or sudden acceleration of the rear entity, should be prioritized, and an entity with a large speed change is clearly more dangerous. The spatial interaction effects between combat entities are thus also different.
The attention module is shown in FIG. 4. Its input at time t is the set of spatial interaction features {h_ij^t} between the target combat entity and each surrounding combat entity, together with the global motion feature d_i^t of the surrounding combat entities at the current moment; its output is the composite influence q_i^t of all surrounding combat entities on the target combat entity at the current moment. Through the attention mechanism, the dynamic correlations among different entities at time t are captured adaptively;
step 4.1: defining a calculation strategy of the attention score;
The attention score of each spatial interaction feature is computed as shown in equation (3):

o_ij^t = V_l^T · tanh(W_l · d_i^t + U_l · h_ij^t + b_l)    (3)

where o_ij^t is the attention score of each spatial interaction feature at time t, d_i^t is the global motion feature of the surrounding combat entities at the current moment, h_ij^t is the spatial interaction feature obtained from the spatial LSTM model, V_l, W_l, and U_l are the weight matrices of the attention function of equation (3), and b_l is a bias vector;
Step 4.2: normalize the attention scores numerically with the softmax function; after normalization the element values sum to 1, which also highlights the proportion of the important elements, as shown in equation (4):

α_ij^t = exp(o_ij^t) / Σ_j exp(o_ij^t)    (4)

where α_ij^t represents the degree to which the motion state of surrounding combat entity a_j influences the future trajectory of the target combat entity a_i;
Step 4.3: compute the composite influence q_i^t of the surrounding combat entities on the target combat entity;
According to the differing degrees of interaction with the target combat entity a_i, the hidden state vectors of all surrounding combat entities are weight-summed, as shown in equation (5), yielding the composite influence:

q_i^t = Σ_j α_ij^t · h_ij^t    (5)

The composite influence q_i^t evaluates the spatial interaction between the target combat entity a_i and its surrounding combat entities, i.e., the combined effect of all other entities on a_i;
Step 5: design a dynamic attention mechanism based on an attention LSTM model to update the global motion feature d_i^t of the surrounding combat entities;
The motion trajectory of an actual combat entity is clearly a time-varying sequence, but the global motion state of the surrounding combat entities also changes dynamically because of the complex spatial relationships among entities. Over time, changes in the positions and speeds of combat entities alter the interaction effects between entities. If the same d_i^t were used at different times to compute attention scores against each spatial interaction feature, part of the latent temporal dependency would be lost, making the feature-mining process insufficiently consistent: earlier scenes could not be used for subsequent prediction, degrading model performance.
To capture this time dependence accurately, a dynamic attention mechanism is used: a long short-term memory (LSTM) artificial neural network updates d_i^t, computing weights that carry the implicit temporal information at each moment.
The attention LSTM model is a recurrent neural network that combines the long short-term memory LSTM model with the computed attention scores, so that within the attention LSTM the global motion feature d_i^t of the surrounding combat entities is continuously updated over time. This structure ties valuable historical information to the current task, allowing information to persist across the timeline; the state information of the previous moment helps the understanding of the current moment so that the current task is processed accurately, which fits the problem the invention addresses.
The focus of the dynamic attention mechanism is the global motion feature d_i^t of the surrounding combat entities at the current moment, and the invention updates d_i^t based on an LSTM. Since LSTMs are used in different modules, the LSTM in this module is called the attention LSTM for ease of distinction. The structure of the dynamic attention mechanism is shown in FIG. 5, in which the attention module is implemented by step 4.
The updating specifically uses the comprehensive influence of the surrounding combat entities on the target combat entity at the current moment to correct the global motion characteristics of the surrounding entities, and takes the corrected value together with the motion characteristics of the surrounding combat entities at the current moment as the input of the attention LSTM model, so as to update the motion characteristics of the surrounding entities at the next moment, as shown in formula (6), where W_attn and b_attn respectively represent the weight matrix and the bias vector of the attention LSTM model.
In this way, the information is further screened after passing through the LSTM model: useful information is retained and transmitted, useless information is discarded, and the retained information changes continuously according to the real-time state of the surrounding entities. The output of the attention LSTM is thereby given a richer meaning: it integrates the effect of all surrounding combat entity movements on the target combat entity's future trajectory and is the corrected global motion characteristic of the surrounding entities.
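The gated update described above can be sketched as a single attention-LSTM step. The following NumPy code is an illustrative sketch, not the patented implementation: the names `a_t` (the comprehensive influence from step 4), `H_t` (the overall motion characteristic of the surrounding entities), and the stacked gate parameters `W`, `b` (standing in for W_attn and b_attn) are hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def attention_lstm_step(a_t, H_t, c_t, W, b):
    """One attention-LSTM step in the spirit of formula (6).

    a_t  : comprehensive influence of surrounding entities at time t (hypothetical name)
    H_t  : overall motion characteristic of surrounding entities at time t
    c_t  : LSTM cell state carrying the retained historical information
    W, b : stacked gate parameters, standing in for W_attn and b_attn
    """
    x = np.concatenate([a_t, H_t])        # correct H_t with the attention output
    z = W @ np.concatenate([x, H_t]) + b  # all four gates in one affine map
    n = H_t.shape[0]
    i, f, o = sigmoid(z[:n]), sigmoid(z[n:2*n]), sigmoid(z[2*n:3*n])
    g = np.tanh(z[3*n:])
    c_next = f * c_t + i * g              # forget useless info, keep useful info
    H_next = o * np.tanh(c_next)          # updated feature for the next moment
    return H_next, c_next
```

Repeating this step along the timeline makes the surrounding-entity feature depend on the whole history of attention outputs, which is exactly the time dependence the dynamic attention mechanism is designed to capture.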
Step 6: extracting the motion characteristics of the target combat entity along with the time change based on the time LSTM model;
step 6.1: extracting heterogeneous characteristics of the target combat entity;
different combat entities on a battlefield have different movement patterns. For example, the positions of these entities change with time due to changes in the motion states of the entities, such as speed, direction of rotation, etc. It is readily apparent that the trajectory of the combat entity is time dependent.
Different types of mobile entities exist in a battle scene, including unmanned aerial vehicles, combat vehicles, infantry and the like. Different types of combat entities have different movement patterns in terms of size, speed, turning radius, etc. These factors also affect the maneuverability of a combat entity, especially when the entity interacts with other surrounding entities within a certain distance range. For example, a tank with a large turning radius cannot change its direction in a short time to avoid a collision, while an infantryman can adjust flexibly. The accuracy requirement for planning the future behavior of each entity in the battle scene is therefore high.
The heterogeneous characteristics are the size, the speed, the turning radius and the type of the target combat entity;
The motion state of a combat entity includes only position, size and category attributes, whereas the aforementioned speed and turning radius can be determined from the position and size of the combat entity. The displacement between adjacent time instants can be obtained from the position of the entity; different displacements correspond to different speeds, so the speed variation of the combat entity can be obtained from the displacement. Furthermore, there are significant differences in turning radius between combat entities of different sizes and categories. When the body is turned to the left or right of the direction of travel to its limit, the body performs a circular motion, and the radius of the circle formed by the body during the turn is the turning radius. The speed is closely related to the turning radius: as shown in equation (7), when the angular velocity ω is constant, the linear velocity v increases as the radius r increases. For example, if a vehicle does not turn at the correct speed, it may escape the driver's control, causing dangerous accidents such as a rollover. From the definition of the motion state of the entity, it contains other information besides the position, namely the size and the type, which indicates that the combat entities are heterogeneous.
v=ωr (7)
From the above analysis it can be concluded that the model proposed by the present invention implicitly takes into account the speed and turning radius of the combat entities, since the invention includes both the position and the size parameters of the different entities in the definition of the motion state. For heterogeneous combat entities, the method can make corresponding judgments: by dividing the combat entities according to category, it summarizes the similarity of the movement patterns of combat entities of the same category, highlights the differences between different categories, and improves the accuracy of trajectory prediction.
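The two relations above, displacement-derived speed and equation (7), can be checked with a short sketch; the helper names are illustrative only.

```python
import numpy as np

def speed_from_positions(p_prev, p_curr, dt=1.0):
    # the displacement between adjacent time instants, divided by the
    # time step, gives the speed of the combat entity
    return np.linalg.norm(np.asarray(p_curr, float) - np.asarray(p_prev, float)) / dt

def linear_speed(omega, r):
    # equation (7): v = omega * r; with omega constant, the linear
    # velocity v grows with the turning radius r
    return omega * r
```

For instance, an entity moving from (0, 0) to (3, 4) in one time step has speed 5, and at a fixed angular velocity an entity with twice the turning radius moves twice as fast along its arc.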
Step 6.2: aggregating movement characteristics of the target combat entities;
in order to improve the accuracy of the trajectory prediction of the combat entity, the type diversity of the combat entity, the time correlation of the motion mode of the target entity and the space correlation among the entities need to be comprehensively considered. In this step, LSTM is used to extract the motion characteristics of the target entity itself over time, and the specific structure of the temporal LSTM module is shown in fig. 6.
According to the motion state of the target entity and the global motion state of the surrounding entities output by the attention LSTM module, a motion state embedded vector is created as the input of the temporal LSTM module, where the temporal LSTM model is an LSTM model that combines the motion state of the target combat entity and the motion states of the surrounding combat entities to analyze the entity trajectory time series, as shown in formula (8), where W_tem1 and b_tem1 respectively represent the weight matrix and the bias vector of the embedding function of formula (8), and φ(·) is a nonlinear activation function;
Because f_i^t contains entity heterogeneity information, the diversity of the combat entities is taken into account. In addition, because the behavior of a combat entity is influenced by other combat entities, the overall motion state of the surrounding entities is taken as the other part of the input so as to account for the spatial correlation between entities.
The trajectory time series of the target combat entity is analyzed with the temporal LSTM model: the embedded feature vector is input into the temporal LSTM model, as shown in equation (9), where W_tem2 and b_tem2 respectively represent the weight matrix and the bias vector of the temporal LSTM model.
The motion characteristic of the entity is the output of the temporal LSTM model and represents the motion characteristic of the target combat entity at the next moment, which is the key feature for predicting the next position of the target entity. Owing to the introduction of the temporal LSTM model, this part also takes into account the temporal correlation of the target entity's own motion pattern. It should be noted that, in order to embody the heterogeneous characteristics of the combat entities, the present invention stipulates that only entities of the same type share the same model parameters.
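A minimal sketch of the embedding of formula (8) and of the per-category parameter sharing follows; the names `embed_motion_state`, `params_by_category` and all vector dimensions are assumptions made for illustration, and tanh stands in for the unspecified activation φ(·).

```python
import numpy as np

def embed_motion_state(f_t, H_t, W_tem1, b_tem1, phi=np.tanh):
    """Formula (8) sketch: embed the target entity's motion state f_t together
    with the surrounding entities' global motion state H_t into one vector."""
    return phi(W_tem1 @ np.concatenate([f_t, H_t]) + b_tem1)

def make_params(rng, d_in, d_out):
    # hypothetical initializer for a (W_tem1, b_tem1) pair
    return rng.standard_normal((d_out, d_in)) * 0.1, np.zeros(d_out)

# heterogeneity: only entities of the same category share parameters,
# so one parameter set is kept per category (1: UAV, 2: tank, 3: infantry)
rng = np.random.default_rng(1)
params_by_category = {c: make_params(rng, 8, 6) for c in (1, 2, 3)}
```

Feeding the embedded vector into a standard LSTM cell (as in formula (9)) then yields the motion characteristic used to predict the next position.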
And 7: predicting the position of the target combat entity at the next moment by utilizing a probability density function;
The position of the combat entity at the next moment is assumed to follow a two-dimensional normal distribution whose probability density function is shown in formula (10); its parameters, namely the expectations, the standard deviations and the correlation coefficient, are determined by the hidden vector at time t:
μ_x is the expectation of the lateral dimension of the two-dimensional normal distribution, μ_y is the expectation of the longitudinal dimension, σ_x is the standard deviation of the lateral dimension, σ_y is the standard deviation of the longitudinal dimension, i corresponds to the target combat entity a_i, x is the lateral predicted position of the target combat entity, and y is the longitudinal predicted position of the target combat entity. The parameters of the two-dimensional normal distribution are obtained by linearly transforming the hidden vector, where W_p is a weight matrix and b_p is a bias vector, as shown in equation (11). The obtained parameters μ_i^{t+1}, σ_i^{t+1} and ρ_i^{t+1} are substituted into formula (10) to calculate the probability density of different positions in the designated area, and the predicted position of the combat entity at the next moment is given by formula (12). From the definition of probability density, if the probability density of a certain interval is higher, the entity is considered more likely to move to that interval at the next moment. Fig. 7 visualizes the predicted probability density of a trajectory, with the x-axis and y-axis representing the lateral and longitudinal position of the entity and the z-axis representing the predicted probability density of a position. As can be seen from Fig. 7, the probability that the entity's position at the next moment falls within the middle zone is higher.
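The linear mapping of formula (11) and the density of formula (10) can be sketched as follows. The exponential and tanh squashings that keep the standard deviations positive and the correlation coefficient in (−1, 1) are a common modelling convention assumed here, not taken from the patent, and the function names are illustrative.

```python
import numpy as np

def normal2d_params(h_t, W_p, b_p):
    """Formula (11) sketch: linearly transform the hidden vector h_t into the
    five parameters (mu_x, mu_y, sigma_x, sigma_y, rho) of the distribution.
    exp(.) keeps the standard deviations positive; tanh(.) keeps rho in (-1, 1)."""
    raw = W_p @ h_t + b_p
    return raw[0], raw[1], np.exp(raw[2]), np.exp(raw[3]), np.tanh(raw[4])

def normal2d_pdf(x, y, mu_x, mu_y, sigma_x, sigma_y, rho):
    """Formula (10): probability density of the two-dimensional normal distribution."""
    zx = (x - mu_x) / sigma_x
    zy = (y - mu_y) / sigma_y
    q = (zx * zx - 2.0 * rho * zx * zy + zy * zy) / (1.0 - rho * rho)
    return np.exp(-0.5 * q) / (2.0 * np.pi * sigma_x * sigma_y * np.sqrt(1.0 - rho * rho))
```

Evaluating `normal2d_pdf` on a grid around (μ_x, μ_y) reproduces the bell surface of Fig. 7, and the grid cell with the largest density is the predicted position in the sense of formula (12).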
And 8: training a space LSTM model, an attention LSTM model and a time LSTM model by minimizing a negative log likelihood loss function of an ith target combat entity;
the objective of the present invention is to make the trajectory prediction result closer to the true trajectory, so the negative log-likelihood loss function is shown in equation (13):
L_i is the negative log-likelihood loss function of the target combat entity a_i, T_pred is the prediction time period, T_obs is the historical time period, f(·) is equation (10), x_i^t is the lateral predicted position of the target combat entity a_i at time t+1, and y_i^t is the longitudinal predicted position of the target combat entity a_i at time t+1.
Iteration is performed using backpropagation and gradient descent to optimize the entire model of spatial LSTM, attention LSTM, and temporal LSTM based on a negative log-likelihood loss function to minimize the position error at each time step.
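Per target entity, the loss of formula (13) is the sum over the prediction period of the negative log-density of the true position under the predicted distribution. The NumPy sketch below illustrates this; the function names and the parameter-tuple convention are assumptions, not the patent's notation.

```python
import numpy as np

def normal2d_logpdf(x, y, mu_x, mu_y, sigma_x, sigma_y, rho):
    # log of the bivariate normal density of formula (10)
    zx, zy = (x - mu_x) / sigma_x, (y - mu_y) / sigma_y
    q = (zx * zx - 2.0 * rho * zx * zy + zy * zy) / (1.0 - rho * rho)
    return -0.5 * q - np.log(2.0 * np.pi * sigma_x * sigma_y * np.sqrt(1.0 - rho * rho))

def nll_loss(true_positions, params_seq):
    """Formula (13) sketch: negative log-likelihood of the true (x, y)
    positions over the prediction period, given one
    (mu_x, mu_y, sigma_x, sigma_y, rho) tuple per predicted time step."""
    return -sum(normal2d_logpdf(x, y, *p)
                for (x, y), p in zip(true_positions, params_seq))
```

Minimizing this quantity with back propagation and gradient descent jointly trains the spatial, attention and temporal LSTM parameters.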
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; such modifications and substitutions do not depart from the spirit of the corresponding technical solutions and scope of the present invention as defined in the appended claims.
Claims (8)
1. A method for predicting a moving cluster track based on a spatial attention network is characterized by comprising the following steps:
step 1: dividing all mobile units in the combat space into target combat entities and surrounding combat entities;
wherein the target combat entity is the entity whose movement track is to be predicted, and the surrounding combat entities are the other entities that move around the target combat entity and can influence its direction of travel;
step 2: constructing the motion state f_i^t of the combat entity and the spatial relationship according to the motion states, namely the position, the size and the type, of the target combat entity and the surrounding combat entities, and forming the motion state sequence of the target combat entity and the spatial relationship sequence between the entity and the surrounding combat entities;
And step 3: establishing a spatial LSTM model, regarding the spatial interaction between different combat entities as a time sequence processed by the spatial LSTM model, and extracting the spatial interaction characteristics of the combat entities based on the spatial LSTM model;
And step 4: establishing an attention module, and extracting the comprehensive influence of the surrounding combat entities on the target combat entity based on the attention module;
And step 5: designing a dynamic attention mechanism based on the attention LSTM model, and updating the overall motion characteristics of the surrounding combat entities;
step 6: extracting the motion characteristics of the target combat entity along with the time change based on the time LSTM model;
and 7: predicting the position of the target combat entity at the next moment by utilizing a probability density function;
and 8: the space LSTM model, the attention LSTM model and the time LSTM model are trained by minimizing the negative log likelihood loss function of the ith target combat entity, and the prediction of the moving cluster track is realized.
2. The method according to claim 1, wherein the motion state f_i^t in step 2 is the motion state of the target combat entity a_i at time t, including position, size and type, recorded as f_i^t = (x_i^t, y_i^t, l_i, w_i, c_i), wherein x_i^t and y_i^t respectively represent the x and y coordinates of the target combat entity at time t, l_i and w_i respectively represent the length and width of the target combat entity, c_i represents the category of the target combat entity, c_i ∈ {1, 2, 3}, wherein 1 represents an unmanned aerial vehicle, 2 represents a combat tank, and 3 represents an infantryman;
the spatial relationship is the spatial characteristic between the target combat entity a_i and the surrounding combat entity a_j at time t, comprising a position term indicating the positional relationship between the two combat entities and a unique code c_ij corresponding to this spatial relationship, c_ij = [c_i; c_j], wherein c_i is the category of the target combat entity a_i and c_j is the category of the surrounding combat entity a_j.
3. The method according to claim 1, wherein the step 3 specifically includes the following steps:
step 3.1: creating a spatial relationship embedded vector from the spatial relationship, as shown in equation (1):
wherein W_spa1 and b_spa1 respectively represent the weight matrix and the bias vector of the embedding function of formula (1), and φ(·) is a nonlinear activation function;
establishing a spatial LSTM model: the long short-term memory LSTM model is used to mine the spatial interaction between the target combat entity and the surrounding combat entities as it changes over time and to extract the spatial interaction characteristics, a memory unit being added to each neural unit of the hidden layer of the long short-term memory LSTM model, each memory unit comprising 1 cell and 3 gates, wherein the long short-term memory LSTM model is a neural network for processing sequence data to analyze time series;
step 3.2: taking the embedded vector as the input of the spatial LSTM model; the output vector of the spatial LSTM model, shown in equation (2), is a hidden state vector, a latent representation containing spatially useful information that describes the interaction between the target and the surrounding combat entities over time, wherein W_spa2 and b_spa2 respectively represent the weight matrix and the bias vector of the spatial LSTM model;
4. the method according to claim 1, wherein the step 4 specifically includes the following steps:
step 4.1: defining a calculation strategy of the attention score;
defining an attention score calculation strategy and calculating the attention score of each spatial interaction feature, as shown in formula (3):
wherein o_ij^t is the attention score of each spatial interaction feature at time t, computed from the overall motion characteristics of the surrounding combat entities at the current moment and the spatial interaction feature, V_l, W_l and U_l respectively represent the weight matrices of the attention function of formula (3), and b_l is a bias vector;
establishing an attention module whose inputs are the set of spatial interaction characteristics between the target combat entity and each surrounding combat entity at time t and the overall motion characteristics of the surrounding combat entities at the current moment, and whose output is the comprehensive influence of all surrounding combat entities on the target combat entity at the current moment; the dynamic correlation among different entities at time t is captured adaptively through the attention mechanism;
step 4.2: normalizing the attention scores numerically with the softmax function so that all normalized element values sum to 1, as shown in formula (4):
wherein the normalized score represents the degree of influence of the motion state of the surrounding combat entity a_j on the future trajectory of the target combat entity a_i;
step 4.3: calculating the comprehensive influence of surrounding combat entities on a target combat entity
the hidden state vectors of all surrounding combat entities are weighted and summed according to their different degrees of interaction with the target combat entity a_i, as shown in equation (5), to obtain the comprehensive influence, which evaluates the spatial interaction between the target combat entity a_i and the surrounding combat entities, namely the combined effect of all other entities on the target combat entity a_i;
5. The method as claimed in claim 1, wherein the attention LSTM model in step 5 is a recurrent neural network in which the calculated attention scores are combined with the long short-term memory LSTM model, so that the overall motion characteristics of the surrounding combat entities in the attention LSTM model are continuously updated over time;
the updating specifically uses the comprehensive influence of the surrounding combat entities on the target combat entity at the current moment to correct the global motion characteristics of the surrounding entities, and takes the corrected value together with the motion characteristics of the surrounding combat entities at the current moment as the input of the attention LSTM model for updating the motion characteristics of the surrounding entities at the next moment, as shown in formula (6), wherein W_attn and b_attn respectively represent the weight matrix and the bias vector of the attention LSTM model:
6. the method according to claim 1, wherein the step 6 specifically includes the following steps:
step 6.1: extracting heterogeneous characteristics of the target combat entity;
the heterogeneous characteristics are the size, the speed, the turning radius and the type of the target combat entity;
as shown in equation (7), when the angular velocity ω is constant, the linear velocity v increases as the radius r increases;
v=ωr (7)
step 6.2: aggregating movement characteristics of the target combat entities;
according to the motion state f_i^t of the target entity and the global motion state of the surrounding entities output by the attention LSTM module, creating a motion state embedded vector e_i^t as the input of the temporal LSTM module, wherein the temporal LSTM model is an LSTM model that combines the motion state of the target combat entity and the motion states of the surrounding combat entities to analyze the entity trajectory time series, as shown in equation (8), where W_tem1 and b_tem1 respectively represent the weight matrix and the bias vector of the embedding function of formula (8), and φ(·) is a nonlinear activation function;
analyzing the trajectory time series of the target combat entity with the temporal LSTM model: the embedded feature vector is input into the temporal LSTM model, as shown in equation (9), where W_tem2 and b_tem2 respectively represent the weight matrix and the bias vector of the temporal LSTM model;
7. The method according to claim 1, wherein the step 7 specifically includes the following steps:
step 7.1: assuming that the position of the combat entity at the next moment follows a two-dimensional normal distribution whose probability density function is shown in formula (10), its parameters, namely the expectations, the standard deviations and the correlation coefficient, being determined by the hidden vector at time t:
μ_x is the expectation of the lateral dimension of the two-dimensional normal distribution, μ_y is the expectation of the longitudinal dimension, σ_x is the standard deviation of the lateral dimension, σ_y is the standard deviation of the longitudinal dimension, i corresponds to the target combat entity a_i, x is the lateral predicted position of the target combat entity, and y is the longitudinal predicted position of the target combat entity; the parameters of the two-dimensional normal distribution are obtained by linearly transforming the hidden vector, wherein W_p is a weight matrix and b_p is a bias vector, as shown in equation (11);
step 7.2: substituting the obtained parameters μ_i^{t+1}, σ_i^{t+1} and ρ_i^{t+1} into formula (10) to calculate the probability density of different positions in the designated area, the predicted position of the combat entity at the next moment being given by formula (12):
8. the method for predicting the trajectory of the mobile cluster based on the spatial attention network as claimed in claim 1, wherein the negative log likelihood loss function in step 8 is shown as formula (13):
L_i is the negative log-likelihood loss function of the target combat entity a_i, T_pred is the prediction time period, T_obs is the historical time period, f(·) is formula (10), x_i^t is the lateral predicted position of the target combat entity a_i at time t+1, and y_i^t is the longitudinal predicted position of the target combat entity a_i at time t+1;
and according to the negative log-likelihood loss function, iterating with back propagation and gradient descent to optimize the whole model of the spatial LSTM, the attention LSTM and the temporal LSTM, so that the position error at each time step is minimized and the moving-cluster trajectory prediction is realized.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111629230.0A CN114297529A (en) | 2021-12-28 | 2021-12-28 | Moving cluster trajectory prediction method based on space attention network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114297529A true CN114297529A (en) | 2022-04-08 |
Family
ID=80971542
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111629230.0A Pending CN114297529A (en) | 2021-12-28 | 2021-12-28 | Moving cluster trajectory prediction method based on space attention network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114297529A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114912719A (en) * | 2022-07-15 | 2022-08-16 | 北京航空航天大学 | Heterogeneous traffic individual trajectory collaborative prediction method based on graph neural network |
CN115038140A (en) * | 2022-04-27 | 2022-09-09 | 东北大学 | Multicast routing method based on air-to-ground cluster trajectory prediction |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110610271B (en) | Multi-vehicle track prediction method based on long and short memory network | |
CN113485380B (en) | AGV path planning method and system based on reinforcement learning | |
Lenz et al. | Deep neural networks for Markovian interactive scene prediction in highway scenarios | |
CN112099496B (en) | Automatic driving training method, device, equipment and medium | |
CN114297529A (en) | Moving cluster trajectory prediction method based on space attention network | |
CN112562328B (en) | Vehicle behavior prediction method and device | |
CN109460065B (en) | Unmanned aerial vehicle cluster formation characteristic identification method and system based on potential function | |
CN115257745A (en) | Automatic driving lane change decision control method based on rule fusion reinforcement learning | |
CN115147790B (en) | Future track prediction method of vehicle based on graph neural network | |
CN111695737A (en) | Group target advancing trend prediction method based on LSTM neural network | |
CN116360503B (en) | Unmanned plane game countermeasure strategy generation method and system and electronic equipment | |
CN110619340B (en) | Method for generating lane change rule of automatic driving automobile | |
CN114511999A (en) | Pedestrian behavior prediction method and device | |
CN113391633A (en) | Urban environment-oriented mobile robot fusion path planning method | |
Mukherjee et al. | Interacting vehicle trajectory prediction with convolutional recurrent neural networks | |
Mänttäri et al. | Learning to predict lane changes in highway scenarios using dynamic filters on a generic traffic representation | |
Ji et al. | Hierarchical and game-theoretic decision-making for connected and automated vehicles in overtaking scenarios | |
Chen et al. | Automatic overtaking on two-way roads with vehicle interactions based on proximal policy optimization | |
CN115270928A (en) | Joint tracking method for unmanned aerial vehicle cluster target | |
CN111121804B (en) | Intelligent vehicle path planning method and system with safety constraint | |
CN109752952A (en) | Method and device for acquiring multi-dimensional random distribution and strengthening controller | |
US20230162539A1 (en) | Driving decision-making method and apparatus and chip | |
Palacios-Morocho et al. | Multipath planning acceleration method with double deep r-learning based on a genetic algorithm | |
Bai et al. | Dynamic multi-UAVs formation reconfiguration based on hybrid diversity-PSO and time optimal control | |
CN114077242A (en) | Device and method for controlling a hardware agent in a control situation with a plurality of hardware agents |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||