CN115238832B - CNN-LSTM-based air formation target intention identification method and system - Google Patents


Info

Publication number
CN115238832B
CN115238832B (application CN202211154275.1A)
Authority
CN
China
Prior art keywords
target
formation
intention
attribute data
aerial
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211154275.1A
Other languages
Chinese (zh)
Other versions
CN115238832A (en)
Inventor
周焰
张晨浩
黎慧
毕钰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Air Force Early Warning Academy
Original Assignee
Air Force Early Warning Academy
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Air Force Early Warning Academy filed Critical Air Force Early Warning Academy
Priority to CN202211154275.1A priority Critical patent/CN115238832B/en
Publication of CN115238832A publication Critical patent/CN115238832A/en
Application granted granted Critical
Publication of CN115238832B publication Critical patent/CN115238832B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention provides a CNN-LSTM-based method and system for identifying the intention of an aerial formation target, belonging to the field of aerial target identification and comprising the following steps: preprocessing the attribute data of the aerial target and performing intention coding on the intention space; distributing all single-target attribute data to one-dimensional convolutional layers for deep feature extraction; inputting the deep features into an LSTM network to learn their dependency over time, and then acquiring primitive intents with a first Dense layer; consolidating the primitive intents according to the coded intention space, the formation form and the formation composition, and learning the features of the consolidated primitive intents through a second Dense layer; and taking a third Dense layer as the output layer, which receives the features output by the second Dense layer and identifies the overall intention of the formation target. The method for identifying the intention of an aerial formation target can meet the requirement for efficient command in the current complex and changeable battlefield environment.

Description

CNN-LSTM-based air formation target intention identification method and system
Technical Field
The invention belongs to the field of aerial target identification, and particularly relates to a CNN-LSTM-based aerial formation target intention identification method and system.
Background
Target intent refers to a target's preset plan for the task to be accomplished or the objective to be achieved. In general, target intent cannot be observed directly; it must be inferred from the target's observed behavior and state. Target intention identification is the process of analyzing and inferring, from the target data observed by various sensors, the target's operational plan, operational assumptions and the like.
At present, most research addresses the intention identification of a single target, generally in aircraft-versus-aircraft confrontation. A pilot judges the intention of an enemy aircraft mainly from its position, altitude, speed, acceleration, course angle, radar on/off state, release of electronic jamming and other conditions, and takes corresponding countermeasures. In actual combat, however, aircraft usually appear in formations rather than singly, and the various aircraft in a formation cooperate to accomplish combat tasks jointly. Intention identification of an aerial formation target is generally applied to the air-situation judgment of a command authority: the combat intention of an oncoming enemy formation is judged comprehensively from its aircraft-type composition, formation form, command relationships and the role of each target in the formation, thereby providing a basis for decision-making and command.
Common methods for identifying formation target intention include template matching, D-S evidence theory, Bayesian network reasoning and the like. Template matching requires constructing intention recognition templates in advance from empirical knowledge and inference rules, and then matching the sample to be identified against the existing templates. D-S evidence theory fuses different pieces of evidence using probability theory. Bayesian network reasoning obtains the target intention by constructing a Bayesian network and reasoning with probability theory, on the basis of prior probabilities determined from expert experience. These methods reason hierarchically: the intention of each single target is identified first, and the single-target intentions are then integrated to identify the intention of the whole formation.
A formation target is characterized by a large number of entities, a large data volume and complex relationships, and the traditional intention identification methods rely heavily on prior knowledge, meaning that prior knowledge plays a decisive role in the accuracy of intention identification; they therefore cannot meet the efficient, intelligent command and decision-making requirements of the current complex and changeable battlefield environment.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a CNN-LSTM-based method and system for identifying the intention of an aerial formation target, so as to solve the problem that, because a formation target has a large number of entities, a large data volume and complex relationships, the existing intention identification methods depend heavily on prior knowledge and cannot meet the efficient, intelligent command requirements of the current complex and changeable battlefield environment.
In order to achieve the above object, in one aspect, the present invention provides a CNN-LSTM-based method for identifying an intent of an air formation target, comprising the following steps:
preprocessing the attribute data of the aerial target, dividing the attribute data of the aerial target into numerical attribute data and non-numerical attribute data, carrying out normalization standard processing on the numerical attribute data, converting the non-numerical attribute data into the numerical attribute data, and carrying out intention coding on an intention space; wherein the aerial target attribute data comprises single target attribute data and an attribute space of a formation target;
distributing all the preprocessed single-target attribute data to one-dimensional convolution layers corresponding to all targets for deep feature extraction;
inputting the deep features into an LSTM network to learn the dependency on time, and then acquiring primitive intents corresponding to the single targets by adopting a first Dense layer;
according to the coded intention space, the formation form and the formation composition, the primitive intentions of the single targets are consolidated, and the features of the consolidated primitive intentions are learned with a second Dense layer;
adopting a third Dense layer, a fully connected layer, as the output layer, and calculating the probability of each intention class through a Softmax function to realize the intention identification of the formation target; wherein the one-dimensional convolutional layer, the first Dense layer, the second Dense layer and the third Dense layer are all trained network layers.
Further preferably, the single object attribute data includes: altitude, speed, acceleration, course angle, direction angle, distance, radar reflection area, friend or foe identification response, air radar state, sea radar state and interference state of the aerial target; the attribute space of the formation target includes: primitive intent, formation and formation composition.
Further preferably, the intention space is attack, feint, penetration, reconnaissance, early warning, defense and electronic jamming.
Further preferably, the intention space is encoded as follows: attack is encoded as 0; feint as 1; penetration as 2; reconnaissance as 3; early warning as 4; defense as 5; electronic jamming as 6.
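The integer encoding above can be sketched as a simple lookup table. This is an illustrative sketch only; the English class names are translations introduced here, not identifiers from the patent:

```python
# Hypothetical mapping of the seven intention classes to integer labels,
# following the encoding order given in the text (attack = 0 ... jamming = 6).
INTENT_CODES = {
    "attack": 0,
    "feint": 1,
    "penetration": 2,
    "reconnaissance": 3,
    "early_warning": 4,
    "defense": 5,
    "electronic_jamming": 6,
}

def encode_intent(name: str) -> int:
    """Return the integer label for an intention-class name."""
    return INTENT_CODES[name]

def decode_intent(code: int) -> str:
    """Return the intention-class name for an integer label."""
    return {v: k for k, v in INTENT_CODES.items()}[code]
```

Such labels can then serve directly as the numerical targets for training the classification network.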
Further preferably, a sliding window with a length of 10 and a step size of 5 is set in the LSTM network.
Further preferably, the formation form comprises: wedge, echelon, longitudinal, transverse or serpentine.
In another aspect, the present invention provides a CNN-LSTM-based system for identifying an intention of an air formation target, comprising:
the data preprocessing module is used for preprocessing the attribute data of the aerial target, dividing the attribute data of the aerial target into numerical attribute data and non-numerical attribute data, carrying out normalization standard processing on the numerical attribute data, and converting the non-numerical attribute data into the numerical attribute data, wherein the attribute data of the aerial target comprises single-target attribute data and an attribute space of a formation target;
an intent encoding module for intent encoding an intent space;
the data distribution module is used for distributing all the preprocessed single-target attribute data to the one-dimensional convolution layer corresponding to each target;
the CNN module is used for receiving the preprocessed single-target attribute data and then performing deep layer feature extraction;
the LSTM module is used for receiving the deep features and learning their dependency over time;
the Dense module comprises a first Dense layer, a second Dense layer and a third Dense layer;
the first Dense layer is used for acquiring primitive intents corresponding to the single targets;
the second Dense layer is used for consolidating the primitive intentions according to the coded intention space, the formation form and the formation composition, and extracting the features of the consolidated primitive intentions;
and the third Dense layer is used as the output layer, for obtaining the probability of each intention class through a Softmax function based on the features of the consolidated primitive intentions, so as to realize the intention identification of the formation target.
Further preferably, the single object attribute data includes: the altitude, the speed, the acceleration, the course angle, the direction angle, the distance, the radar reflection area, the friend or foe identification response, the state of an air radar, the state of a sea radar and the interference state of the aerial target; the attribute space of the formation target includes: primitive intent, formation and formation composition.
Further preferably, the intention space is attack, feint, penetration, reconnaissance, early warning, defense and electronic jamming.
Further preferably, the intention space is encoded as follows: attack is encoded as 0; feint as 1; penetration as 2; reconnaissance as 3; early warning as 4; defense as 5; electronic jamming as 6.
Further preferably, the formation form comprises: wedge, echelon, longitudinal, transverse or serpentine.
Generally, compared with the prior art, the above technical solution conceived by the present invention has the following beneficial effects:
the invention provides a multi-entity CNN-LSTM network aiming at the problem of identification of the intention of an aerial formation target, deep features of attribute data of a single target are extracted through the CNN, important information is stored in the sequence data during LSTM processing, and the intention of the formation target is identified by combining attributes such as formation and the like on the basis of realizing identification of the intention of a single target primitive. Different from the traditional target intention identification method, the method adopts a neural network data driving method, so that the dependence on experience knowledge is avoided; constructing a distributed parallel network structure to realize efficient recognition of single-target primitive intents; and identifying the intention of the formation target on the basis of the primitive intention by combining the formation shape, composition and other information. The invention has reasonable structural design and improves the efficiency and the accuracy of identifying the formation target intentions.
Drawings
FIG. 1 is a schematic diagram of a CNN-LSTM-based method for identifying an intent of an aerial formation target according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of aerial target attribute data provided by an embodiment of the present invention;
FIG. 3 is a schematic diagram of intent coding of an intent space provided by an embodiment of the present invention;
fig. 4 is a schematic diagram of the operation of the CNN module according to the embodiment of the present invention;
fig. 5 is a schematic diagram of an operation of an LSTM module provided in an embodiment of the present invention;
fig. 6 is a schematic diagram of a sliding window in an LSTM module provided by an embodiment of the present invention;
fig. 7 is a schematic diagram of the operation of the third Dense layer according to the embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Deep learning is an intelligent method capable of learning the internal rules and representation levels of data; it can identify the formation target intention by extracting deep features from the target data acquired by sensors, thereby providing efficient decision support for a commander. According to the characteristic that the multiple targets of an air formation cooperate with one another, the invention provides a CNN-LSTM-based aerial formation target intention identification method and system, shown in fig. 1. The overall method comprises the following steps: first, all target data are preprocessed, including determining and standardizing the feature space and determining and encoding the intention space; second, a multi-entity CNN-LSTM neural network model is constructed, in which deep features of the target data are extracted by a convolutional neural network (CNN); these deep features are time-series data and are input into the LSTM network to identify the intention of each single target; finally, the intention identification of the formation target is realized.
Example 1
The invention provides a CNN-LSTM-based method for identifying an intention of an aerial formation target, which comprises the following steps:
s1: data preparation
Generally, an aerial target realizes its intention according to the relevant combat rules, and these rules are embodied in the target's actual behavior. Considering that a target exhibits different states when executing different tasks, a mapping relationship exists between target state and target intention; determining the target features is therefore crucial to identifying the target intent. In the invention, the altitude, speed, acceleration, course angle, direction angle, distance, radar reflection area, identification friend-or-foe (IFF) response, air radar state, sea radar state and jamming state of the aerial target are used as the single-target attribute data for intention identification;
in order to accurately identify an aerial target intent, its intent space needs to be determined. The method fully considers various task requirements and combat styles of the target, and determines the intention space of the target as { attack, impersonation attack, sudden defense, reconnaissance, early warning, defense and electronic interference }; however, because the behavior states of the single target and the formation target are different, it is necessary to determine the attribute spaces of the single target and the formation target respectively to construct attribute data of the aerial target; the aerial target attribute data and the intention space are shown in fig. 2 and 3;
the aerial target attribute data can be divided into numerical attributes and non-numerical attributes, and the numerical attributes comprise height, speed, course angle, direction angle, distance and formation; the non-numerical attributes comprise an air radar state, a sea radar state, an interference state, a primitive intention and a formation form;
s1.1: aiming at the numerical attribute, carrying out data standardization processing on each characteristic dimension, and setting the mean value of all sample data of the characteristic X as
Figure 572369DEST_PATH_IMAGE001
Standard deviation of
Figure 20668DEST_PATH_IMAGE002
Then the first in the feature XiSample data according to formula
Figure 65984DEST_PATH_IMAGE003
Carrying out standardization treatment; the data after standardization meets the standard normal distribution, namely the mean value is 0 and the standard deviation is 1;
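As a minimal sketch, the z-score standardization of S1.1 can be written as follows; the helper function and the sample speeds are illustrative assumptions, not code or data from the patent:

```python
import numpy as np

def standardize(x: np.ndarray) -> np.ndarray:
    """Z-score standardization: subtract the feature mean, divide by the std."""
    mu = x.mean()
    sigma = x.std()
    return (x - mu) / sigma

# Example feature column (e.g. speeds); after standardization
# the data have mean 0 and standard deviation 1.
speeds = np.array([250.0, 300.0, 310.0, 280.0, 260.0])
z = standardize(speeds)
```

Each numerical feature dimension would be standardized independently in this way before being fed to the network.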
s1.2: in order to facilitate model training, the non-numerical attributes are converted into corresponding numerical attributes, so that model training is facilitated;
s1.3: the intention space of the aerial target is non-numerical data which is used as an identification frame, the intention coding is carried out on the data, a numerical label is formed on sample data, and the corresponding relation between the intention space and the intention coding is shown as the following figure 3;
s2: construction of a Multi-entity CNN-LSTM neural network model
The multi-entity CNN-LSTM is a neural network model capable of identifying a plurality of single-target primitive intents in parallel and identifying a formation target overall intention, and comprises the following steps: data Distribution Layer, CNN, LSTM and Dense; firstly, distributing all single target attribute Data in a formation to a CNN network corresponding to each target through a Data Distribution Layer; secondly, processing single-target attribute data by using a one-dimensional convolutional layer, and extracting deep features; inputting the deep features obtained by extraction into an LSTM network to learn dependence on time; finally, inputting all primitive intents into a second Dense layer, and identifying the intents of the formation targets; more specifically, the method comprises the following steps:
s2.1: single target data partitioning
The data input into the neural network model comprises data of all single targets in the formation; in order to facilitate intention identification, data of each single target is divided separately through a Data Distribution Layer, and preparation is made for subsequent primitive intention identification;
s2.2: deep layer feature extraction
The CNN module is used for extracting deep features by one-dimensional convolution; its working principle is shown in fig. 4: a convolution kernel of length 3 slides along a data sequence of length 10 and performs the convolution operation, finally producing an output of length 8. Deep feature extraction is performed on the data of each single target separately, realizing down-sampling of the target data;
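The length arithmetic in fig. 4 (a kernel of 3 over a sequence of 10 gives 8 outputs, i.e. 10 - 3 + 1) can be checked with a small sketch. The function below is an illustrative "valid-mode" sliding dot product, as used in deep-learning frameworks, not the patent's implementation:

```python
import numpy as np

def conv1d_valid(x: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Slide the kernel along x with stride 1 and no padding ('valid' mode).
    As in deep-learning frameworks, this is cross-correlation (no kernel flip)."""
    n, m = len(x), len(kernel)
    return np.array([np.dot(x[i:i + m], kernel) for i in range(n - m + 1)])

x = np.arange(10.0)             # data sequence of length 10
k = np.array([1.0, 0.0, -1.0])  # convolution kernel of length 3
y = conv1d_valid(x, k)          # output length: 10 - 3 + 1 = 8
```

With 16 such kernels, as set in S3, each sliding window of the input produces a 16-dimensional local feature.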
s2.3: processing time series data to identify primitive intents
The LSTM module is used for processing time-series data; its main structure is shown in fig. 5. Deep features of the target data are extracted by the CNN module and input into the LSTM, which decides through a forget gate which target information from the previous time step to discard, through an input gate which new target information to add to the cell state (updating the state), and through an output gate what to output at the current time step;
the circulation mechanism in the LSTM enables the LSTM to have a certain memory function, so that the LSTM can extract the front and back related information of the target data in the time dimension; by utilizing the characteristics of the LSTM, as shown in FIG. 6, the invention sets a sliding window with a length of 10 and a step length of 5, i.e. each time 10 continuous frames of data are input to the LSTM module, the next time the step length of 5 frames is slid and then 10 frames of data are continuously input;
s2.4 identifying formation intents
The primitive intent of each single target is identified from its 12 attributes. To enable the Dense layer to identify the intention of the formation target, however, the primitive intents need to be consolidated, that is, according to the intention space, the formation form and the formation composition, the number of primitive intents belonging to each intention class is counted for the formation;
The formation target can select a corresponding battle formation according to different battlefield environments and combat tasks, for example: wedge, echelon, longitudinal, transverse or serpentine. The primitive intents, the formation form and the formation composition are combined into a unified input for identifying the intention of the formation target; as shown in fig. 7, the consolidated primitive intents are input into a Dense layer. In this fully connected layer every neuron is connected with all neurons of the adjacent layers, and the overall intention of the formation target is identified by learning deep features of the per-class primitive intent counts and the formation form;
s3: network model setup and training
The multi-entity CNN-LSTM-based aerial formation target intention recognition network model is set as follows:
(1) The convolutional layer: the number of convolution kernels is set to 16, the kernel size is 3×1, the activation function is the ReLU function, and local features of the data are extracted through a sliding window;
(2) The LSTM layer: units is set to 16, and the activation function is the ReLU function;
(3) The Dropout layer prevents model overfitting by randomly dropping neurons; rate is set to 0.5;
(4) The Dense layers are fully connected layers, in which each neuron is connected with all neurons of the previous layer to comprehensively extract features; the numbers of neurons of the three Dense layers are set to 12, 8 and 7 respectively, and the activation function is the softmax function;
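The Softmax output of the final 7-neuron Dense layer, which turns raw scores into class probabilities, can be sketched as follows; the logit values are illustrative:

```python
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    """Numerically stable softmax over the intention classes."""
    z = logits - logits.max()  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Hypothetical raw scores for the 7 intention classes (attack ... jamming).
logits = np.array([2.0, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1])
probs = softmax(logits)
# probs sums to 1; the class with the largest logit gets the highest probability.
```

The predicted formation intention is then the class with the maximum probability.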
the data of the standardization process is divided into a training set and a testing set according to the proportion of 7.
Example 2
The invention provides a CNN-LSTM-based air formation target intention identification system, which comprises:
the data preprocessing module is used for preprocessing the attribute data of the aerial target, dividing the attribute data of the aerial target into numerical attribute data and non-numerical attribute data, carrying out normalization standard processing on the numerical attribute data, and converting the non-numerical attribute data into the numerical attribute data, wherein the attribute data of the aerial target comprises single-target attribute data and an attribute space of a formation target;
an intent encoding module for intent encoding an intent space;
the data distribution module is used for distributing all the preprocessed single-target attribute data to the one-dimensional convolution layer corresponding to each target;
the CNN module is used for receiving the preprocessed single-target attribute data and then performing deep feature extraction;
the LSTM module is used for receiving the deep features and learning their dependency over time;
the Dense module comprises a first Dense layer, a second Dense layer and a third Dense layer;
the first Dense layer is used for acquiring primitive intents corresponding to the single targets;
the second Dense layer is used for consolidating the primitive intentions according to the coded intention space, the formation form and the formation composition, and extracting the features of the consolidated primitive intentions;
and the third Dense layer is used as the output layer, for obtaining the probability of each intention class through a Softmax function based on the features of the consolidated primitive intentions, so as to realize the intention identification of the formation target.
Further preferably, the single object attribute data includes: the altitude, the speed, the acceleration, the course angle, the direction angle, the distance, the radar reflection area, the friend or foe identification response, the state of an air radar, the state of a sea radar and the interference state of the aerial target; the attribute space of the formation target includes: primitive intent, formation and formation.
Further preferably, the intention space is attack, feint, penetration, reconnaissance, early warning, defense and electronic jamming.
Further preferably, the intention space is encoded as follows: attack is encoded as 0; feint as 1; penetration as 2; reconnaissance as 3; early warning as 4; defense as 5; electronic jamming as 6.
Further preferably, the formation form comprises: wedge, echelon, longitudinal, transverse or serpentine.
In summary, compared with the prior art, the invention has the following advantages:
the invention provides a multi-entity CNN-LSTM network for identifying the intention of an aerial formation target, deep features of single target attribute data are extracted through CNN, important information is stored in sequence data during LSTM processing, and the intention of the formation target is identified by combining attributes such as formation and the like on the basis of realizing the intention identification of single target elements. Different from the traditional target intention identification method, the method adopts a neural network data driving method, so that the dependence on experience knowledge is avoided; constructing a distributed parallel network structure to realize efficient recognition of single-target primitive intents; by combining information of formation, composition and the like, the intention of the formation target is identified on the basis of primitive intention. The invention has reasonable structural design and improves the efficiency and the accuracy of identifying the formation target intentions.
It will be understood by those skilled in the art that the foregoing is only a preferred embodiment of the present invention, and is not intended to limit the invention, and that any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (10)

1. A CNN-LSTM-based method for identifying an intention of an aerial formation target is characterized by comprising the following steps:
preprocessing the attribute data of the aerial target, dividing the attribute data of the aerial target into numerical attribute data and non-numerical attribute data, carrying out normalization standard processing on the numerical attribute data, converting the non-numerical attribute data into the numerical attribute data, and carrying out intention coding on an intention space; wherein the aerial target attribute data comprises single target attribute data and an attribute space of a formation target;
distributing all the preprocessed single-target attribute data to a one-dimensional convolutional layer corresponding to each target for deep feature extraction;
inputting the deep features into an LSTM network to learn the dependency on time, and then acquiring primitive intents corresponding to the single targets by adopting a first Dense layer;
according to the coded intention space, the formation form and the formation composition, the primitive intentions are consolidated, and the features of the consolidated primitive intentions are learned through the second Dense layer and input into the third Dense layer;
taking the third Dense layer as the output layer, calculating the probability of each intention class through a Softmax function based on the features of the consolidated primitive intentions, and realizing the intention identification of the formation target; wherein the one-dimensional convolutional layer, the first Dense layer, the second Dense layer and the third Dense layer are all trained network layers.
2. The aerial formation target intent identification method of claim 1, wherein the single target attribute data comprises: the altitude, the speed, the acceleration, the course angle, the direction angle, the distance, the radar reflection area, the friend or foe identification response, the state of an air radar, the state of a sea radar and the interference state of the aerial target; the attribute space of the formation target includes: primitive intent, formation and formation composition.
3. The aerial formation target intention identification method according to claim 1 or 2, wherein the intention space is attack, feint, penetration, reconnaissance, early warning, defense and electronic jamming.
4. The aerial formation target intention identification method according to claim 3, wherein the intention space is encoded as follows: attack is encoded as 0; feint as 1; penetration as 2; reconnaissance as 3; early warning as 4; defense as 5; electronic jamming as 6.
5. The aerial formation target intention identification method according to claim 1 or 2, wherein the formation form comprises: a wedge, echelon, column, line-abreast or serpentine formation.
6. An aerial formation target intention recognition system based on CNN-LSTM, comprising:
a data preprocessing module, used for preprocessing the attribute data of the aerial target: dividing the attribute data into numerical attribute data and non-numerical attribute data, normalizing the numerical attribute data, and converting the non-numerical attribute data into numerical form, wherein the attribute data of the aerial target comprise single-target attribute data and the attribute space of the formation target;
an intention encoding module, used for encoding the intention space;
a data distribution module, used for distributing the preprocessed single-target attribute data to the one-dimensional convolutional layer corresponding to each target;
a CNN module, used for receiving the preprocessed single-target attribute data and performing deep feature extraction;
an LSTM module, used for receiving the deep features and learning their temporal dependencies;
a Dense module, comprising a first Dense layer, a second Dense layer and a third Dense layer;
the first Dense layer is used for obtaining the primitive intention corresponding to each single target;
the second Dense layer is used for assembling the primitive intentions according to the encoded intention space, the formation form and the formation composition, and extracting the features of the assembled primitive intentions;
the third Dense layer serves as the output layer and is used for obtaining the probability of each class of intention through a Softmax function based on the features of the assembled primitive intentions, thereby realizing intention recognition of the formation target.
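A minimal sketch of what the data preprocessing module might do, assuming min-max normalization for the numerical attributes and hand-made lookup tables (`IFF_RESPONSE`, `RADAR_STATE`) for the non-numerical ones; the claims specify neither the normalization scheme nor the category values, so both are assumptions here:

```python
import numpy as np

def minmax_normalize(col):
    """Scale a numerical attribute column into [0, 1] (one common choice;
    the claim only says the numerical data are normalized)."""
    col = np.asarray(col, dtype=float)
    lo, hi = col.min(), col.max()
    return np.zeros_like(col) if hi == lo else (col - lo) / (hi - lo)

# Illustrative mappings for non-numerical attributes; the actual
# categories and numeric values are not specified in the claims.
IFF_RESPONSE = {"friend": 0.0, "foe": 1.0, "unknown": 0.5}
RADAR_STATE = {"off": 0.0, "on": 1.0}

def preprocess(records):
    """records: dict of attribute name -> list of raw values (one per time step).
    Returns a (T, n_attributes) float matrix ready for the CNN module."""
    cols = []
    for name, values in records.items():
        if isinstance(values[0], str):                 # non-numerical attribute
            table = IFF_RESPONSE if name == "iff" else RADAR_STATE
            cols.append([table[v] for v in values])
        else:                                          # numerical attribute
            cols.append(minmax_normalize(values))
    return np.column_stack(cols)

raw = {
    "altitude": [8000, 8500, 9000],
    "speed": [250, 260, 255],
    "iff": ["unknown", "foe", "foe"],
    "air_radar": ["off", "on", "on"],
}
X = preprocess(raw)
print(X.shape)   # (3, 4)
```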
7. The aerial formation target intention recognition system of claim 6, wherein the single-target attribute data comprise: the altitude, speed, acceleration, course angle, azimuth angle, distance, radar cross-section, identification friend-or-foe (IFF) response, air-search radar state, sea-search radar state and jamming state of the aerial target; the attribute space of the formation target comprises: primitive intention, formation form and formation composition.
8. The aerial formation target intention recognition system according to claim 6 or 7, wherein the intention space consists of attack, feint, penetration, reconnaissance, early warning, defense and electronic jamming.
9. The aerial formation target intention recognition system of claim 8, wherein the intention space is encoded as follows: attack is encoded as 0; feint is encoded as 1; penetration is encoded as 2; reconnaissance is encoded as 3; early warning is encoded as 4; defense is encoded as 5; electronic jamming is encoded as 6.
10. The aerial formation target intention recognition system according to claim 6 or 7, wherein the formation form comprises: a wedge, echelon, column, line-abreast or serpentine formation.
CN202211154275.1A 2022-09-22 2022-09-22 CNN-LSTM-based air formation target intention identification method and system Active CN115238832B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211154275.1A CN115238832B (en) 2022-09-22 2022-09-22 CNN-LSTM-based air formation target intention identification method and system


Publications (2)

Publication Number Publication Date
CN115238832A CN115238832A (en) 2022-10-25
CN115238832B true CN115238832B (en) 2022-12-02

Family

ID=83667613

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211154275.1A Active CN115238832B (en) 2022-09-22 2022-09-22 CNN-LSTM-based air formation target intention identification method and system

Country Status (1)

Country Link
CN (1) CN115238832B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110188637A (en) * 2019-05-17 2019-08-30 西安电子科技大学 Behavior recognition method based on deep learning
CN112232396A (en) * 2020-10-08 2021-01-15 西北工业大学 Fusion identification method for ship formation intention based on LSTM and D-S evidence theory
CN112598046A (en) * 2020-12-17 2021-04-02 沈阳航空航天大学 Target tactical intention identification method in multi-machine collaborative air combat
CN112947581A (en) * 2021-03-25 2021-06-11 西北工业大学 Multi-unmanned aerial vehicle collaborative air combat maneuver decision method based on multi-agent reinforcement learning
CN114117073A (en) * 2021-11-29 2022-03-01 中国人民解放军国防科技大学 Method and device for identifying group intention, computer equipment and readable medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11034357B2 (en) * 2018-09-14 2021-06-15 Honda Motor Co., Ltd. Scene classification prediction


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
"BiLSTM-Attention: An Aerial Target Tactical Intention Recognition Model"; Teng Fei et al.; Aero Weaponry; vol. 28, no. 5, Oct. 2021; pp. 24-32 *
"Research on Air Mission Recognition Method Based on Deep Learning"; Yao Qingkai et al.; Journal of System Simulation; no. 9, Sep. 2017; pp. 2227-2231 *
"UAV Air Combat Target Intention Prediction under Incomplete Information"; Liu Zuandong et al.; Scientia Sinica; vol. 50, no. 5, Apr. 2020; pp. 704-717 *

Also Published As

Publication number Publication date
CN115238832A (en) 2022-10-25

Similar Documents

Publication Publication Date Title
WO2023231995A1 (en) Transfer-learning-based life prediction and health assessment method for aero-engine
CN111240353B (en) Unmanned aerial vehicle collaborative air combat decision method based on genetic fuzzy tree
CN108664924A (en) A kind of multi-tag object identification method based on convolutional neural networks
CN112036556B (en) Target intention inversion method based on LSTM neural network
CN112598046B (en) Target tactical intent recognition method in multi-machine cooperative air combat
CN113435108B (en) Battlefield target grouping method based on improved whale optimization algorithm
CN112052933B (en) Particle swarm optimization-based safety testing method and repairing method for deep learning model
CN114818853B (en) Intention recognition method based on bidirectional gating circulating unit and conditional random field
CN114330509A (en) Method for predicting activity rule of aerial target
CN113561995B (en) Automatic driving decision method based on multi-dimensional reward architecture deep Q learning
Qu et al. Intention recognition of aerial target based on deep learning
Wang et al. Tactical intention recognition method of air combat target based on BiLSTM network
CN115238832B (en) CNN-LSTM-based air formation target intention identification method and system
Wang et al. Learning embedding features based on multisense-scaled attention architecture to improve the predictive performance of air combat intention recognition
CN113065094A (en) Situation assessment method and system based on accumulated foreground value and three-branch decision
CN115964640B (en) Improved template matching-based secondary target grouping method
CN115757828B (en) Aerial target intention recognition method based on radiation source knowledge graph
CN117056738A (en) Battlefield key situation extraction method and system based on soldier chess deduction system
CN115661576A (en) Method for identifying airplane group intention under sample imbalance
CN113887807B (en) Robot game tactics prediction method based on machine learning and evidence theory
CN115204286A (en) Target tactical intention online identification method based on deep learning in simulation environment
CN115422404A (en) Communication radiation source threat assessment method based on knowledge graph representation learning
Hu et al. Research on pest and disease recognition algorithms based on convolutional neural network
CN116466736A (en) Air combat dog bucket rolling maneuver auxiliary decision making method based on decision tree
CN115563861B (en) Performance comprehensive evaluation and optimization method for intelligent tracking algorithm of radar seeker

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant