CN113743509A - Incomplete information-based online combat intention identification method and device - Google Patents
- Publication number
- CN113743509A (application CN202111041309.1A)
- Authority
- CN
- China
- Prior art keywords
- information
- time
- model
- data
- intention
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2413—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
- G06F18/24133—Distances to prototypes
- G06F18/24137—Distances to cluster centroïds
- G06F18/2414—Smoothing the distance, e.g. radial basis function networks [RBFN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/044—Recurrent networks, e.g. Hopfield networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/049—Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The invention provides an online combat intention identification method and device based on incomplete information. Intelligence data are obtained through various detection and sensing devices, yielding historical time-varying situation information formed by the continuous tracking signals of each target unit over a time period ΔT; the historical time-varying situation information is encoded, completed and compressed to obtain effective input data; these data are input into a deep learning model for training to obtain a trained deep learning model; and current intelligence data are then input into the trained model to obtain a target intention recognition result. A learner mines the global structure and learns representations of latent shared information, extracting more global structure from limited battlefield intelligence while discarding low-level detail and local noise; a variable-length time-series processing model that accounts for the temporal characteristics of target intelligence learns the intention classification, achieving online intention recognition under incomplete information.
Description
Technical Field
The invention belongs to the technical field of target intention identification, and particularly relates to a method and device for online combat intention identification based on incomplete information.
Background
Situation understanding is the process of interpreting the current situation, based on the situation feature vectors generated by situation awareness and the military knowledge of domain experts, and of identifying enemy intentions and battle plans. Identifying the combat intention of battlefield targets has always been a focus of commanders at all levels, is a hot problem in the field of situation assessment, and is an important basis for a commander's next combat action.
With the continuous development of information technology, a large number of reconnaissance, detection and sensing devices have been deployed on the battlefield, greatly improving the capacity to collect intelligence and battlefield data. However, all of it must be analyzed and used by decision makers; the intelligence world faces the challenge of information overload, and human cognitive speed and processing capacity can hardly keep pace with the growth and change of battlefield data, let alone quickly and accurately identify the tactical intention of enemy targets from an instantaneously changing battlefield situation. Target intention identification is a pattern recognition problem under dynamic, adversarial conditions: on the basis of military knowledge and combat experience, key information such as the battlefield environment, target attributes and target states is considered comprehensively, and accurate identification of target intention is achieved through a series of highly abstract, complex thinking activities such as key feature extraction, comparative analysis, association and reasoning. This process is difficult to describe or summarize explicitly with mathematical formulas, so an efficient intelligent recognition model is needed to assist the commander's decision-making, shortening decision time and improving decision quality. Existing target intention recognition research has focused mainly on template matching, expert systems, Bayesian networks, and neural networks.
Generally speaking, the template matching method conforms to human cognitive rules and is easy to implement, but building the template base depends on acquiring the prior knowledge of domain experts, its objectivity and credibility are hard to ensure, and updating the template base is also difficult. Expert systems have strong knowledge representation and knowledge inference capabilities, but they are difficult to implement and weak in fault tolerance and learning, because a complete knowledge base and inference rules must be abstracted. Bayesian networks have strong causal probabilistic reasoning capability, have attracted wide attention, and can address the uncertainty of intention reasoning, but the prior and conditional probabilities of each node event of a Bayesian network are difficult to determine. Neural networks have been applied successfully in many fields, and using their self-adaptive and self-learning capabilities to predict intention can better solve target intention identification when the prior knowledge of domain experts is insufficient. However, traditional shallow neural networks suffer from difficult training, difficult feature extraction, and low computational precision.
Most importantly, existing research results rarely discuss the influence of uncertain, incomplete and imperfect battlefield situation information on an intelligent model. War is a typical imperfect-information game: on an adversarial battlefield, the concealment of each party, mutual deception, and the fog of war make the battlefield situation information incomplete, untimely, and inaccurate. How to realize an efficient online intention identification task in the face of incomplete, untimely and inaccurate, even wrong or deceptive intelligence has therefore become an urgent problem to be solved.
Aiming at this problem, the deep learning model W-CPCLSTM is provided. Representations of latent shared information are learned by means of Contrastive Predictive Coding (CPC), which can mine more global structure from limited battlefield intelligence while discarding low-level detail and local noise; the temporal characteristics of target intelligence are considered comprehensively, and a variable-length time-series processing model LSTM is designed to learn the intention classification; the two are then effectively combined based on weight values representing training attention. The online intention recognition performance of the proposed model under incomplete information is discussed using intelligence of three different detection degrees as well as the perfect situation in an ideal state. In addition, the effect of different intelligence lengths on the model is discussed.
Disclosure of Invention
The invention provides a method and a device for online combat intention identification based on incomplete information, aiming to realize efficient online intention identification in the face of massive intelligence that is incomplete, untimely and inaccurate, or even wrong or deceptive.
In order to solve the technical problems, the technical scheme adopted by the invention is as follows:
An online combat intention identification method based on incomplete information comprises the following steps:
step 1: at the current time t, the continuous tracking signals of the i-th target unit collected by the various detection and sensing devices over the time period ΔT are integrated and encoded to obtain the original time-varying situation information TU_ΔT = {Tu_1, Tu_2, …, Tu_N}, where Tu_t denotes the original time-varying situation information at time t, N is the number of targets detected within the period ΔT, and T is the length of ΔT;
step 2: perform coding completion compression on the original time-varying situation information TU_ΔT to obtain effective deep learning model input data TU_ΔT,P, where P denotes the completion-compression processing;
step 3: input the effective deep learning model input data TU_ΔT,P into the deep learning model for intelligence characterization learning and intention classification;

step 4: obtain the target intention recognition result.
Further, the method is characterized in that the deep learning model comprises:

a learner, used to characterize the underlying shared information among the intelligence situation data acquired through the various detection and sensing devices;

a classifier, used to accurately identify, from the underlying shared information obtained by the learner over the current period, the combat intention of each detected target at the current time;

and a controller, used to equitably distribute training attention between the learner and the classifier.
Further, the main components of the learner CPC include:

a variable-length time-series processing model LSTM: the situation information TU_ΔT = {Tu_1, Tu_2, …, Tu_N} of the N detected target units extracted in the current period ΔT is completed and compressed to obtain TU_ΔT,P = {Tu_P,1, Tu_P,2, …, Tu_P,N}, where Tu_P,i is the completed-compressed model input sequence of the i-th target unit under incomplete information; this is input into the variable-length time-series processing model LSTM and fusion-encoded to obtain the latent characterization coding sequence EU = {Eu_t−T, …, Eu_t−1, Eu_t} of the target unit situation information, where Eu_t is the latent characterization code output at time t;
a GRU autoregressive model: used to summarize the characterizations EU_IL_r = {Eu_t−IL_r+1, …, Eu_t} of the effective time steps in the latent characterization coding sequence EU of the target unit situation information, where IL_r is the original intelligence length before completion;
a fully connected prediction layer: used to characterize, based on the summary feature obtained from the autoregressive model, the underlying shared information SI_t among the intelligence situation data at the current time t.
The three parts are jointly optimized through the InfoNCE loss, and the loss function L_CPC of the learner CPC model is defined as:

L_CPC = −E[ log( f(Tu_p,t, SI_t) / Σ_{Tu_j,t ∈ TU_ΔT,P} f(Tu_j,t, SI_t) ) ]

where (Tu_p,t, SI_t) can be regarded as a positive sample pair, Tu_p,t denotes the completed-compressed situation information at time t, (Tu_j,t, SI_t) with j ≠ p can be regarded as negative sample pairs, and f(Tu_p,t, SI_t) is the density ratio, f(Tu_p,t, SI_t) = exp(Eu_t · SI_t).
Further, the model structure of the classifier is as follows: a variable long-term data processing model LSTM connected with a linear output layer, wherein the loss function adopts a basic cross entropy loss function:
wherein the content of the first and second substances,the target intention labels of the current time t of the N target units under the 'god vision',and performing intent recognition on all the detected targets to obtain a final inference result based on the potential characterization coding sequence EU learned by the learner.
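The classifier's cross-entropy loss can be sketched with a small numpy helper (a hypothetical name for illustration; in the model the predicted probabilities would come from the LSTM's linear output layer):

```python
import numpy as np

def cross_entropy_loss(y_true, y_pred, eps=1e-12):
    # mean cross-entropy between one-hot intent labels y_true (N x C)
    # and predicted class probabilities y_pred (N x C)
    y_pred = np.clip(y_pred, eps, 1.0)
    return -np.mean(np.sum(y_true * np.log(y_pred), axis=1))
```

For two targets with one-hot labels, a uniform two-class prediction gives a loss of ln 2 ≈ 0.693, while a perfect prediction drives the loss to 0.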
Further, the model loss function of the controller is:

L_w = α·L_CPC + β·L_LSTM

where α and β respectively denote the weight of the learner's characterization learning and the weight of the classifier's classification learning.
Further, the completion-compression processing is: zero-pad the data of any target unit whose length is less than ΔT, record the original data length IL_r of the target unit, and then mark-compress the completed data based on IL_r, so that the deep learning model computes only on the non-padded data.
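The completion-compression step — zero-padding each track to length ΔT while recording IL_r — can be sketched as follows. This is a minimal numpy illustration; `pad_and_mark` is a hypothetical name, and a real pipeline might instead use a framework's packed-sequence utilities.

```python
import numpy as np

def pad_and_mark(sequences, T):
    # zero-pad each target's track to length T and record the original
    # length IL_r, so downstream models can skip the padded steps
    N = len(sequences)
    D = sequences[0].shape[1]
    padded = np.zeros((N, T, D))
    lengths = np.zeros(N, dtype=int)
    for i, seq in enumerate(sequences):
        padded[i, :seq.shape[0]] = seq
        lengths[i] = seq.shape[0]
    return padded, lengths  # TU in R^{N x T x D}, IL_r in R^N
```

Padding makes batch processing possible, while the recorded lengths let the model restrict computation to the valid steps.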
The invention also provides an online combat intention recognition device based on incomplete information, which comprises the following modules:
an intelligence data acquisition module: used, at the current time t, to integrate and encode the continuous tracking signals of the i-th target unit collected by the various detection and sensing devices over the time period ΔT, obtaining the original time-varying situation information TU_ΔT = {Tu_1, Tu_2, …, Tu_N}, where Tu_t denotes the original time-varying situation information at time t, N is the number of targets detected within the period ΔT, and T is the length of ΔT;
a data completion processing module: used to perform completion compression on the historical time-varying situation information TU_ΔT to obtain effective deep learning model input data TU_ΔT,P, where P denotes the completion-compression processing;
a learning and classification module: used to input the effective deep learning model input data TU_ΔT,P into the deep learning model for intelligence characterization learning and intention classification;

and an identification result output module: used to decode and output the intention classification result obtained by the learning and classification module.
By adopting the technical scheme, the invention has the following beneficial effects:
According to the online combat intention identification method and device based on incomplete information, incomplete battlefield intelligence is completed so that the time-series data processing model LSTM can be used in batch processing, which enhances the applicability of the model and saves computing resources and time; the completed data are mark-compressed to avoid result errors, so that the model computes only the effective information, i.e. the non-padded, marked data. By using the learner CPC to learn representations of latent shared information, the invention can mine more global structure from limited battlefield intelligence while discarding low-level detail and local noise; by comprehensively considering the temporal characteristics of target intelligence, a variable-length time-series processing model LSTM is designed to learn the intention classification; the two are then effectively combined based on the weight values representing training attention, achieving online intention recognition under incomplete information.
Drawings
FIG. 1 is a flow chart of the present invention for online intent recognition based on incomplete information;
FIG. 2 is a deep learning model framework diagram;
FIG. 3 is a graph comparing the intent recognition accuracy (a) and loss value (b) of LSTM and the deep learning model W-CPCLSTM of the present invention, based on incomplete information with ambiguous target positions;
FIG. 4 is a graph comparing the intent recognition accuracy (a) and loss value (b) of LSTM and W-CPCLSTM, based on incomplete information with unambiguous target positions;
FIG. 5 is a graph comparing the intent recognition accuracy (a) and loss value (b) per training epoch of W-CPCLSTM under different training attentions, in the case of incomplete information with ambiguous target positions;
FIG. 6 is a graph comparing the intent recognition accuracy (a) and loss value (b) of LSTM and W-CPCLSTM (AB3), based on incomplete information with ambiguous target positions;
FIG. 7 is a graph comparing the intent recognition accuracy (a) and loss value (b) of LSTM and W-CPCLSTM, based on incomplete information with unambiguous target positions;
FIG. 8 is a graph comparing the intent recognition accuracy (a) and loss value (b) of LSTM and W-CPCLSTM (AB5), based on incomplete information with ambiguous target types;
FIG. 9 is a graph comparing the intent recognition accuracy (a) and loss value (b) of LSTM and W-CPCLSTM, based on incomplete information with unambiguous target types.
Detailed Description
The technical solutions of the present invention will be described clearly and completely with reference to the accompanying drawings, and it should be understood that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 to 9 show a specific embodiment of an online warfare intention identification method based on incomplete information according to the present invention, as shown in fig. 1, including the following steps:
step 1: at the current time t, the continuous tracking signals of the i-th target unit collected by the various detection and sensing devices over the time period ΔT are integrated and encoded to obtain the original time-varying situation information TU_ΔT = {Tu_1, Tu_2, …, Tu_N}, where Tu_t denotes the original time-varying situation information at time t, N is the number of targets detected within the period ΔT, and T is the length of ΔT.

In this embodiment, the original time-varying situation information TU_ΔT = {Tu_1, Tu_2, …, Tu_N} of each target unit in the time period ΔT = {t−T+1, …, t−1, t} is obtained by the detection devices. Targets are tracked from different starting times within ΔT, so the continuous observation durations differ and the acquired data lengths are inconsistent.
Step 2: perform coding completion compression on the original time-varying situation information TU_ΔT to obtain effective deep learning model input data TU_ΔT,P, where P denotes the completion-compression processing.

In this embodiment, the completion-compression processing means: zero-pad the data of any target unit whose length is less than ΔT, record the original data length IL_r of the target unit, and then mark-compress the completed data based on IL_r, so that the deep learning model computes only on the non-padded data. Mark compression means that the original data length of each target unit is marked so that the model processes only the valid data; compared with the zero-padded length, computing only the non-padded data is equivalent to a compression. Because the acquired data lengths differ, the data are padded and integrated to a uniform length so that they can be processed in batch, enhancing the applicability of the model; but to save computing resources and time and to avoid result errors caused by the padded values, the completed data must be marked and compressed so that the deep learning model computes only the non-padded data. In this embodiment, completion and compression yield the final model input [TU_ΔT,P ∈ R^{N×T×D}, IL_r ∈ R^N], where D is the number of features; the padded, uniform-length data are input into the variable-length time-series processing model LSTM, but only the non-padded data are used in the actual computation.
And step 3: inputting effective deep learning model into data TUΔT,PInputting the information into a deep learning model to perform information representation learning and intention classification;
and 4, step 4: and obtaining and outputting a target intention recognition result.
Because the battlefield situation is a dynamically and continuously evolving process and the result of a dynamic game between the opposing sides, situation information from various reconnaissance, detection and sensing means is mostly fuzzy and highly uncertain, which makes situation information characterization very difficult. Therefore, the invention first synthesizes all intelligence to perform characterization learning on the situation information, and then performs classification learning on the combat intention of the targets at the current time, based on the mined global structure and the historical tracking information.
The deep learning model W-CPCLSTM comprises: learners for characterizing underlying shared information among informative situational data acquired through various detecting and sensing devices; the classifier is used for accurately identifying the fighting intention of the detected target at the current moment according to the bottom-layer shared information obtained by the learner at the current time interval; and a controller for equitably distributing training attention between the learner and the classifier.
The main components of the learner CPC model in this embodiment include:
a variable-length time-series processing model LSTM: the situation information TU_ΔT = {Tu_1, Tu_2, …, Tu_N} of the N detected target units extracted in the current period ΔT is completed and compressed to obtain TU_ΔT,P = {Tu_P,1, Tu_P,2, …, Tu_P,N}, where Tu_P,i is the completed-compressed model input sequence of the i-th target unit; this is input into the variable-length time-series processing model LSTM and fusion-encoded to obtain the latent characterization coding sequence EU = {Eu_t−T, …, Eu_t−1, Eu_t} of the target unit situation information, where Eu_t is the latent characterization code output at time t.
a GRU autoregressive model: used to summarize the characterizations of the effective time steps in the latent characterization coding sequence EU of the target unit situation information, where IL_r is the original intelligence length. In this embodiment, the acquired intelligence is completion-compressed based on the characteristics of battlefield situation information; because the padded data are marked during completion compression, the latent characterizations of the effective time steps in EU are EU_IL_r = {Eu_t−IL_r+1, …, Eu_t}. The GRU autoregressive model summarizes all latent characterizations containing the global structure to generate context-dependent information; in this embodiment, to save computing resources and avoid computation errors, the GRU autoregressive model processes and summarizes only the latent characterizations EU_IL_r of the effective time steps.
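Restricting the summary to the IL_r effective steps can be sketched as below; here a simple mean over the valid steps stands in for the GRU autoregressive model, which is only an assumption for illustration, and `summarize_valid_steps` is a hypothetical name.

```python
import numpy as np

def summarize_valid_steps(eu, il_r):
    # eu: (T, H) latent characterization sequence of one target unit;
    # il_r: the original (non-padded) length IL_r. Rows beyond il_r are
    # padded and are ignored, mirroring the mark-compression.
    return eu[:il_r].mean(axis=0)
```

The key point is that the padded rows never enter the summary, so the context feature reflects only the genuinely observed steps.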
a fully connected prediction layer: used to characterize, based on the summary feature obtained from the autoregressive model, the underlying shared information SI_t among the intelligence situation data at the current time t. In this embodiment, the associated task indicated by the prediction layer composed of fully connected layers is not to predict a latent representation of a future time, but to characterize the shared information SI_t of the current time based on the situation information obtained so far.
The three parts are jointly optimized through the InfoNCE loss, and the loss function L_CPC of the learner CPC model is defined as:

L_CPC = −E[ log( f(Tu_p,t, SI_t) / Σ_{Tu_j,t ∈ TU_ΔT,P} f(Tu_j,t, SI_t) ) ]

where (Tu_p,t, SI_t) can be regarded as a positive sample pair, Tu_p,t denotes the completed-compressed situation information at time t, (Tu_j,t, SI_t) with j ≠ p can be regarded as negative sample pairs, f(Tu_p,t, SI_t) is the density ratio, and E[·] denotes the expectation. To optimize this loss function, we want the numerator to be as large as possible and the denominator as small as possible; that is, the mutual information between positive sample pairs should be large and that between negative sample pairs small. Optimizing this loss in effect maximizes the mutual information between Tu_p,t and SI_t. The density ratio f(Tu_p,t, SI_t) measures similarity by the vector inner product of the true code value Eu_t and the shared information SI_t, giving the approximation f(Tu_p,t, SI_t) = exp(Eu_t · SI_t).
In this embodiment, the learner CPC comprises three parts: a variable-length time-series data processing model LSTM; an autoregressive model that summarizes all latent characterizations containing the global structure to generate context-dependent information; and a prediction layer that indicates the associated task in the latent space.
In this embodiment, the model structure of the classifier in the deep learning model is a variable-length time-series data processing model LSTM connected to a linear output layer, and the loss function adopts the basic cross-entropy loss:

L_LSTM = −(1/N) Σ_{i=1}^{N} y_i,t · log(ŷ_i,t)

where y_i,t are the target intention labels of the N target units at the current time t under a "god's-eye view", and ŷ_i,t are the final inference results obtained by performing intention recognition on all detected targets based on the latent characterization coding sequence EU learned by the learner.
In this embodiment, in order to further improve the recognition efficiency of the algorithm and reasonably distribute the training attention, different weight parameters [α, β] are considered for characterization learning and classification learning, giving the final model loss function of the controller:
L_w = α·L_CPC + β·L_LSTM

where α and β respectively denote the weight of the learner's characterization learning and the weight of the classifier's classification learning.
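The controller's weighted combination can be sketched directly; `controller_loss` is a hypothetical name, and the [α, β] settings listed are illustrative only, not the patent's AB variants.

```python
def controller_loss(l_cpc, l_lstm, alpha=0.5, beta=0.5):
    # L_w = alpha * L_CPC + beta * L_LSTM: alpha weights the learner's
    # characterization loss, beta the classifier's intent loss
    return alpha * l_cpc + beta * l_lstm

# candidate training-attention settings one might compare during training
settings = [(0.9, 0.1), (0.5, 0.5), (0.1, 0.9)]
```

Sweeping such settings is one simple way to distribute training attention between the learner and the classifier.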
By using the learner CPC to learn a representation of the latent shared information, the invention can mine more global structure from limited battlefield intelligence while discarding low-level information and local noise. A variable-length time-series processing model LSTM is designed to learn intent classification by comprehensively considering the temporal characteristics of target-unit intelligence. The two are then effectively combined on the basis of weight values representing training attention, realizing online intent recognition under incomplete information.
FIG. 2 shows the framework of the deep learning model W-CPCLSTM. Given incomplete intelligence, the learner CPC, composed of an LSTM nonlinear encoder, a GRU autoregressive model, and a fully connected prediction layer, first mines the global structure; a variable-length LSTM model then serves as the classifier trained for intent recognition; finally, the two parts are effectively combined based on the weight parameters [α, β], and stable model training is achieved through reasonable allocation of training attention.
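A shape-level sketch of the FIG. 2 data flow is given below. The layer widths are illustrative assumptions, not the patented configuration; the feature count (27) and intent count (15) are taken from the data set described later in this text.

```python
import torch
import torch.nn as nn

class WCPCLSTM(nn.Module):
    """Sketch of the W-CPCLSTM pipeline: an LSTM nonlinear encoder, a GRU
    autoregressive model, and a fully connected prediction layer form the
    learner CPC; a linear head over the latent codes forms the classifier."""
    def __init__(self, n_feat=27, d_latent=64, n_intent=15):
        super().__init__()
        self.encoder = nn.LSTM(n_feat, d_latent, batch_first=True)   # nonlinear encoder
        self.context = nn.GRU(d_latent, d_latent, batch_first=True)  # autoregressive model
        self.predict = nn.Linear(d_latent, d_latent)                 # prediction layer -> SI_t
        self.classify = nn.Linear(d_latent, n_intent)                # classifier output layer

    def forward(self, tu):                 # tu: (N, T, n_feat) completed situation data
        eu, _ = self.encoder(tu)           # latent characterization codes EU
        ctx, _ = self.context(eu)          # context summary over EU
        si_t = self.predict(ctx[:, -1])    # shared information SI_t at time t
        logits = self.classify(eu[:, -1])  # intent logits at time t
        return eu, si_t, logits

model = WCPCLSTM()
eu, si_t, logits = model(torch.randn(4, 10, 27))
```

In training, `eu` and `si_t` feed the InfoNCE loss L_CPC while `logits` feed the cross-entropy loss L_LSTM, and the two are combined with the [α, β] weights.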
The invention also provides an online combat intention recognition device based on incomplete information, which comprises the following modules:
An intelligence data acquisition module: used for performing, at the current time t, integration coding on the continuous tracking signals of the ith target unit from the various detection and sensing devices over the time period ΔT, to obtain the original time-varying situation information TU_ΔT = {Tu_1, Tu_2, …, Tu_N}, where Tu_t represents the original time-varying situation information at time t, N is the number of targets detected in the time period ΔT, and T is the time length of ΔT;
A data processing completion module: used for performing completion compression on the original time-varying situation information TU_ΔT to obtain the effective deep learning model input data TU_ΔT,P, where P represents the completion compression process;
A learning classification module: used for inputting the effective deep learning model input data TU_ΔT,P into the deep learning model for intelligence characterization learning and intent classification;
An identification result output module: used for decoding and outputting the intent classification result obtained by the learning classification module.
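The completion-compression step P performed by the data processing completion module can be sketched as follows. The zero-padding-plus-length-record convention follows the completion compression described in claim 6; the helper name and array shapes are assumptions.

```python
import numpy as np

def completion_compress(tracks, T):
    """Completion compression P: zero-pad each target's track to length T
    and record its original length IL_r, so downstream models can restrict
    computation to the non-padded steps."""
    padded, lengths = [], []
    for tr in tracks:                          # tr: (IL_r, n_feat) raw track
        il_r = tr.shape[0]
        pad = np.zeros((T - il_r, tr.shape[1]))
        padded.append(np.vstack([tr, pad]))    # completed to length T
        lengths.append(il_r)                   # length mark for compression
    return np.stack(padded), np.asarray(lengths)

tracks = [np.ones((3, 2)), np.ones((5, 2))]    # two targets, one shorter than T
tu_p, il_r = completion_compress(tracks, T=5)
```

The recorded lengths `il_r` are what lets the variable-length LSTM skip the padded zeros, so the model only computes over genuine observations.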
The effectiveness of the present invention is verified by experiments below.
The simulation data set comes from a deduction (wargaming) platform. The simulation data are divided into incomplete intelligence and perfect information. The former is acquired by reconnaissance equipment such as radar stations and unmanned aerial vehicles and contains missing and erroneous entries; it is the historical intelligence of the enemy units and enemy weapons detected by our side, simulating the enemy situation information perceived in a real confrontation environment. The latter is the deduction data of the action tracks, environment, and time-sequence events of all combat units recorded by the system, i.e., all actual combat information of our units and our weapons under the "god vision".
The data set contains 15 intents from 45 target classes and 12290 targets in total; the monitoring time of each target ranges from 5 to 6000 seconds. The intent data set under incomplete intelligence records 27 features of each target at every moment: 3 dimensions of target location information, 4 dimensions of position information, 2 dimensions of target type, 5 dimensions of action parameters, 7 dimensions of equipment-related information, and 6 dimensions of detection events. The perfect data set records 65-dimensional features of these targets, including 45 equipment states and 7 items of task-related information. It should be emphasized that each detected target has only one combat intent at any given moment, but that intent may change at any time as the engagement progresses and the situation evolves.
During the deduction of the engagement, the situation data are updated once every Δt_up; in order to simulate the online arrival state of the data set, the data set is divided into time periods based on a formula defined as follows:
where i_up refers to the ith update of the situation information.
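The period division can be sketched as follows. The patent's exact partition formula is not reproduced in this text, so the mapping below is one plausible reading in which update i_up falls into the period floor(i_up · Δt_up / ΔT); treat it as an assumption.

```python
import numpy as np

def assign_periods(n_updates, dt_up, delta_t):
    """Assign each situation update i_up (arriving every dt_up seconds)
    to a time period of length delta_t, simulating online data arrival."""
    arrival = np.arange(n_updates) * dt_up     # arrival time of each update
    return (arrival // delta_t).astype(int)    # period index per update

periods = assign_periods(10, dt_up=2.0, delta_t=6.0)
```

With these illustrative numbers, three consecutive updates land in each period, mimicking a stream that is chunked into ΔT-long windows for the model.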
For the target data detected in each time period ΔT, 70% is assigned to the training set and 30% to the test set, and testing begins after 5500 training iterations. To accelerate training and increase stability, 50 samples form one batch and training proceeds 100 batches at a time; after every 10 rounds of training, 10 batches are randomly selected from the training set (during training) or the test set (during testing) for verification, and the average is taken as the final recognition accuracy. In addition, the model considers the five weight parameter settings [α, β] defined in Table 1 to study the influence of different training-attention ratios on the intent recognition effect.
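The 70/30 split and batching protocol above can be sketched as follows (the function name, use of NumPy, and the target count of 1000 are assumptions for illustration):

```python
import numpy as np

def split_and_batch(n_targets, rng, train_frac=0.7, batch_size=50):
    """70/30 train/test split of the targets detected in one period, then
    fixed-size batches of 50 samples, matching the protocol above."""
    idx = rng.permutation(n_targets)
    cut = int(n_targets * train_frac)
    train, test = idx[:cut], idx[cut:]
    batches = [train[i:i + batch_size] for i in range(0, len(train), batch_size)]
    return train, test, batches

rng = np.random.default_rng(1)
train, test, batches = split_and_batch(1000, rng)
# Verification step: 10 randomly chosen batches, accuracy averaged over them.
val = [batches[i] for i in rng.choice(len(batches), size=10, replace=False)]
```

Averaging over randomly drawn verification batches, rather than a single batch, is what smooths the reported recognition accuracy curves.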
TABLE 1 different weight parameters [ alpha, beta ] for characterization learning and classifier learning, respectively
| Weight parameter | α | β |
|---|---|---|
| AB1 | 0.1 | 0.9 |
| AB2 | 0.9 | 0.1 |
| AB3 | 0.3 | 0.7 |
| AB4 | 0.7 | 0.3 |
| AB5 | 0.5 | 0.5 |
Intent recognition is carried out for the three most common detection situations in the deduction process: 1) the target type is detected as unknown, so its threat degree cannot be judged; 2) the target position is fuzzy and cannot be locked; 3) the target affiliation is unknown, so friend cannot be distinguished from foe.
Example 1: the target affiliation (friend or foe) is unknown
In a multi-party combat scene, it often happens that the affiliation of a newly detected target cannot be determined, i.e., whether it is friend or foe is unclear, which poses a certain challenge to the intent recognition task. This embodiment therefore explores this problem to verify the applicability of the model of the invention. First, in order to assign proper training attention, intent recognition was performed with the model weight parameters defined in Table 1; the results show that AB3 gives the best training-attention allocation among the five configurations. Second, the online intent recognition effect of the W-CPCLSTM model is verified against the conventional LSTM model on incomplete intelligence in which the target affiliation is unclear. As shown in FIGS. 3(a) and 3(b), compared with the conventional LSTM, the W-CPCLSTM model greatly improves the accuracy of intent recognition with fewer iterations. FIG. 3(a) shows that, facing incomplete intelligence about the target, the feature characterization of CPC greatly improves the efficiency of online intent recognition of enemy targets. This observation is confirmed when the recognition accuracy and training trends of LSTM and W-CPCLSTM are compared. In particular, FIG. 4(a) makes clear that the accuracy of the W-CPCLSTM model of the invention can exceed 90%, compared with less than 80% for the conventional LSTM, and FIG. 4(b) shows that the proposed model is more stable and reliable during training. This is valid evidence that applying the feature characterization learning structure yields an advantage in the intent recognition task. Finally, the robustness of the two models to disturbance is verified by supplementing the detected affiliation information, giving the result shown in FIG. 4(a): when the target affiliation is clear, the model of the invention stably improves recognition accuracy at lower cost. By adding this information about the detected targets, two conclusions can be drawn: 1) as seen in FIG. 4(a), W-CPCLSTM still maintains a significant advantage in recognition accuracy and training speed over the rival LSTM; 2) compared with FIG. 3, it can be observed that after the affiliation information is determined, both LSTM and W-CPCLSTM improve to some degree in recognition accuracy and in recognition stability. However, when that information is not detected, the recognition effect of the W-CPCLSTM model is far less affected than that of LSTM, showing that the model of the invention is relatively insensitive to disturbances in this information; that is, feature characterization learning helps the model mine more effective features or global structure from limited combat situation information.
Example 2: the target position is unknown
In an adversarial battlefield, the concealment and deception of each party, the limitations of detection equipment, time delays, and other factors make the exact position of a target difficult to track in real time, yet many tactical intents are closely related to the target's coordinate position. In the absence of this important information, how should training attention be allocated so that the model maintains stable output? Facing situation information in which the target position is undetected, can the model of the invention maintain its superiority in the combat intent recognition task? And what feedback will the intent model give if position information from the detection equipment becomes available?
This example adopts AB3, the best training-attention allocation among the five configurations. As seen in FIGS. 5(a) and 5(b), the proposed W-CPCLSTM model has a significant advantage in classification accuracy over the conventional LSTM. FIGS. 6(a) and 6(b) show that, when the target position is clear, the proposed model steadily improves the accuracy of intent recognition at lower cost than the conventional LSTM. Moreover, whether the coordinate features of the target are missing (FIG. 5) or the target position information is detected (FIG. 6), the model of the invention is superior to LSTM in training stability, recognition accuracy, and convergence speed, demonstrating that the effective combination of characterization learning and time-series classification learning contributes to the improvement of online intent recognition efficiency. Second, comparing FIG. 6 with FIG. 5, it can be observed that after the relevant intelligence on the target position is supplemented, the training speed and recognition accuracy of both LSTM and W-CPCLSTM improve to some extent, even if less obviously for the latter. Finally, combining FIG. 3 and FIG. 5, it can be seen that whether the affiliation information is missing or the position information is lost, the model of the invention identifies intent more robustly than LSTM, showing that the characterization learning structure can effectively overcome fluctuations caused by information changes in the adversarial process.
Example 3: the target type is unknown
In the combat process, a detected enemy target is often at first accompanied only by a suspected track, and its specific type can be confirmed only by continuous tracking over a period of time. How to perform efficient online intent recognition on detected targets of unknown type is important content to be explored in modern informatized warfare. In this example, the intent recognition effect of the proposed algorithm in the face of situation information with unknown target types is verified by comparison with the conventional LSTM model, and finally the contribution of target-type intelligence to the intent recognition of the two models is explored.
Facing incomplete situation information with unknown target types, this example adopts AB5, the configuration with the best recognition efficiency, as the optimal training parameter configuration and compares it with the best-performing LSTM network. The experimental results in FIGS. 7(a) and 7(b) show that W-CPCLSTM clearly exceeds the rival's recognition accuracy of less than 80% and has obvious advantages in training stability and speed. Likewise, even with the addition of detection information on the target type, the model of the invention retains its consistent advantages over LSTM in the recognition accuracy given in FIG. 8(a) and the loss values given in FIG. 8(b). Furthermore, comparison with FIG. 8 shows that when no target-type information is detected (FIG. 7), the model of the invention does not change as dramatically as LSTM, either in recognition accuracy or in training stability. All of this proves again that combining the feature learning capability of CPC with the time-series information mining capability of LSTM, on the basis of training-attention allocation, can be a powerful tool for online intent prediction under incomplete information.
Example 4: perfect situation
The above experiments show that, facing incomplete situations of various detection degrees, the model of the invention has obvious advantages over the conventional LSTM in the online intent recognition task. But given the "god vision", i.e., the true situation information in an ideal state, facing a perfect situation without any errors or losses and with more task- and action-related target information, does the proposed model retain the intent recognition advantages brought by its characterization learning and time-series information mining capabilities? Comprehensively considering recognition accuracy and training stability, the invention selects AB3 as the optimal parameter configuration. The comparison of LSTM and W-CPCLSTM shown in FIGS. 9(a) and 9(b) makes it obvious that, although the input information is sufficiently complete, the model can still learn and mine more effective information through the characterization of situation features, so that it retains a certain advantage in intent recognition accuracy, training convergence speed, and output stability. Of course, compared with the recognition results under incomplete intelligence, it can be seen that for both LSTM and W-CPCLSTM, effective input information greatly improves recognition efficiency.
Example 5: different target tracking times
Theoretically, as the target tracking time is extended, more intelligence is obtained, the "sketch" of the target becomes clearer, and its intent becomes more distinct. In addition, in order to more clearly evaluate the practicability and effectiveness of the proposed W-CPCLSTM in the combat intent recognition task, its accuracy, standard deviation, and effective speed are evaluated against the conventional time-series processing model LSTM on the training and test sets, with the results shown in Table 2.
TABLE 2. Intent recognition results of LSTM and W-CPCLSTM on the training and test sets, in terms of accuracy, standard deviation, and speed, based on perfect situation and incomplete intelligence information
1 Tr: training set;
2 Ts: test set;
3 Acc: intent recognition accuracy, i.e., the highest average accuracy obtained during the whole training process;
4 Std: standard deviation of the highest average accuracy;
5 Spe: training speed, characterized as the number of iterations required to reach 80% accuracy (90% in the perfect situation);
6 L: length of the intelligence data;
7 "—": the required accuracy was not reached.
it can be seen from table 2 that given target intelligence, whether there is a target type or there is a lack of information about the target type, it is unexpected that, for the LSTM model, as the time sequence length goes from 10, to 20, and further to 30, the recognition accuracy is not improved greatly in the training set and the testing set, and even there is a downward trend. The same conclusion is also verified on the W-CPCLSTM model of the invention, and the change is less obvious, which shows that as the time length is increased, although the effective information is more, the obtained false information and the error information are increased, so that the identification result is not delayed. In addition, it can be seen that the recognition accuracy of the model of the present invention on both training and test sets can exceed LSTM 7% -11% and the training speed to achieve effective accuracy can be 5-32 times faster than the other party, regardless of the location information, position information, or target type. From the conclusions, the model disclosed by the invention can more stably and quickly realize online intention identification with higher precision in the face of incomplete information, and is expected to become a powerful tool in the situation cognition field.
Aiming at the online intent recognition task under incomplete information, the invention studies the time-series characteristics of intelligence and the features of the situation and proposes the W-CPCLSTM model, which comprises three parts: a CPC model addressing the incompleteness and deceptiveness of intelligence data, a variable-length LSTM model considering the sequential characteristics of situation features, and a training-attention weight accounting for recognition stability. The CPC model mines global structure from limited intelligence data through feature characterization learning; the variable-length LSTM model performs intent classification training through a sequence mechanism that analyzes the feature characterizations; and the training-attention weight effectively combines the former two through reasonable allocation of training attention to achieve stable intent recognition.
In order to verify the intent recognition effect of the proposed model in the face of incomplete information during confrontation, the invention conducts performance analysis and application evaluation against LSTM on three common situations in which the target affiliation, position, and type are undetected. Meanwhile, to comprehensively test the practicability and effectiveness of the proposed algorithm, a perfect situation under the "god vision" is given and experimental analysis of online intent recognition is carried out on this ideal intelligence. Finally, the influence of incomplete information on the model is discussed by intercepting detection information of different lengths. All the experiments show that the proposed model improves recognition accuracy by 7% to 11% and recognition speed by 6 to 32 times with excellent stability, indicating that the model is expected to become an effective aid in intent recognition tasks.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.
Claims (7)
1. An online combat intention identification method based on incomplete information, characterized in that the method comprises the following steps:
Step 1: at the current time t, performing integration coding on the continuous tracking signals of the ith target unit from various detection and sensing devices over the time period ΔT to obtain original time-varying situation information TU_ΔT = {Tu_1, Tu_2, …, Tu_N}, where Tu_t represents the original time-varying situation information at time t, N is the number of targets detected in the time period ΔT, and T is the time length of ΔT;
Step 2: performing coding completion compression on the original time-varying situation information TU_ΔT to obtain effective deep learning model input data TU_ΔT,P, where P represents the completion compression process;
Step 3: inputting the effective deep learning model input data TU_ΔT,P into the deep learning model for intelligence characterization learning and intent classification;
Step 4: decoding and outputting the target intent recognition result.
2. The online combat intention identification method according to claim 1, wherein the deep learning model comprises:
a learner for characterizing the underlying shared information among the intelligence situation data acquired by the various detection and sensing devices;
a classifier for accurately identifying the combat intent of each detected target at the current moment according to the underlying shared information obtained by the learner over the current time period;
and a controller for equitably distributing training attention between the learner and the classifier.
3. The online combat intention identification method according to claim 2, wherein the main components of the learner CPC include:
a variable-length time-series processing model LSTM: the situation information TU_ΔT = {Tu_1, Tu_2, …, Tu_N} extracted for the N detected target units in the current time period ΔT is coded, completed, and compressed, the ith target unit being completed and compressed into a model input sequence under incomplete information; these model input sequences are input into the variable-length time-series processing model LSTM and fusion-coded to obtain the latent characterization coding sequence EU = {Eu_{t-T}, …, Eu_{t-1}, Eu_t} of the target unit situation information, where Eu_t is the latent characterization code output at time t;
an autoregressive model GRU: used for summarizing the effective time-step characterizations of the latent characterization coding sequence EU of the target unit situation information, where IL_r is the length of the original intelligence before completion;
a fully connected prediction layer: used for predicting the underlying shared information SI_t among the intelligence situation data at the current time t, based on the summary characterization obtained by the autoregressive model;
the three parts are jointly optimized through the InfoNCE loss, which defines the loss function L_CPC of the learner CPC model:
4. The online combat intention identification method according to claim 3, wherein the model structure of the classifier is a variable-length time-series processing model LSTM connected to a linear output layer, and the loss function is the basic cross-entropy loss:
where the first term denotes the target intention labels of the N target units at the current time t under the "god vision", and the second term denotes the predicted labels obtained by performing intent recognition on all detected targets, based on the latent characterization coding sequence EU learned by the learner, to produce the final inference result.
5. The online combat intention identification method according to claim 4, wherein the model loss function of the controller is:
L_w = αL_CPC + βL_LSTM
where α and β respectively denote the weight parameter of the learner's characterization learning and the weight parameter of the classifier's classification learning.
6. The online combat intention identification method according to any one of claims 1 to 5, wherein the completion compression processing is: zero-padding the data of a target unit whose length is less than ΔT, recording the original data length IL_r of the target unit, and then mark-compressing the completed data based on IL_r, so that the deep learning model computes only over the non-padded data.
7. An online combat intention recognition device based on incomplete information is characterized by comprising the following modules:
an intelligence data acquisition module: used for performing, at the current time t, integration coding on the continuous tracking signals of the ith target unit from the various detection and sensing devices over the time period ΔT, to obtain original time-varying situation information TU_ΔT = {Tu_1, Tu_2, …, Tu_N}, where Tu_t represents the original time-varying situation information at time t, N is the number of targets detected in the time period ΔT, and T is the time length of ΔT;
a data processing completion module: used for performing completion compression on the original time-varying situation information TU_ΔT to obtain effective deep learning model input data TU_ΔT,P, where P represents the completion compression process;
a learning classification module: used for inputting the effective deep learning model input data TU_ΔT,P into the deep learning model for intelligence characterization learning and intent classification;
an identification result output module: used for decoding and outputting the intent classification result obtained by the learning classification module.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111041309.1A CN113743509B (en) | 2021-09-07 | 2021-09-07 | Online combat intent recognition method and device based on incomplete information |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113743509A true CN113743509A (en) | 2021-12-03 |
CN113743509B CN113743509B (en) | 2024-02-06 |
Family
ID=78736269
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111041309.1A Active CN113743509B (en) | 2021-09-07 | 2021-09-07 | Online combat intent recognition method and device based on incomplete information |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113743509B (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112329348A (en) * | 2020-11-06 | 2021-02-05 | 东北大学 | Intelligent decision-making method for military countermeasure game under incomplete information condition |
CN112749761A (en) * | 2021-01-22 | 2021-05-04 | 上海机电工程研究所 | Enemy combat intention identification method and system based on attention mechanism and recurrent neural network |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114139550A (en) * | 2022-02-08 | 2022-03-04 | 中国电子科技集团公司第五十四研究所 | Situation intelligent cognition method based on activity semantic text message |
CN114818853A (en) * | 2022-03-10 | 2022-07-29 | 中国人民解放军空军工程大学 | Intention identification method based on bidirectional gating cycle unit and conditional random field |
CN114818853B (en) * | 2022-03-10 | 2024-04-12 | 中国人民解放军空军工程大学 | Intention recognition method based on bidirectional gating circulating unit and conditional random field |
CN115481702A (en) * | 2022-10-28 | 2022-12-16 | 中国人民解放军国防科技大学 | Predictive comparison characterization method for multi-element time series data processing |
US11882299B1 (en) | 2022-10-28 | 2024-01-23 | National University Of Defense Technology | Predictive contrastive representation method for multivariate time-series data processing |
CN116227952A (en) * | 2023-05-09 | 2023-06-06 | 中国人民解放军海军潜艇学院 | Method and device for selecting attack target defense strategy under key information deficiency |
CN116227952B (en) * | 2023-05-09 | 2023-07-25 | 中国人民解放军海军潜艇学院 | Method and device for selecting attack target defense strategy under key information deficiency |
Also Published As
Publication number | Publication date |
---|---|
CN113743509B (en) | 2024-02-06 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||