CN107871138B - Target intention identification method based on improved D-S evidence theory - Google Patents

Target intention identification method based on improved D-S evidence theory Download PDF

Info

Publication number
CN107871138B
CN107871138B CN201711053015.4A CN201711053015A CN107871138B CN 107871138 B CN107871138 B CN 107871138B CN 201711053015 A CN201711053015 A CN 201711053015A CN 107871138 B CN107871138 B CN 107871138B
Authority
CN
China
Prior art keywords
target
radars
radar
evidence
intention
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201711053015.4A
Other languages
Chinese (zh)
Other versions
CN107871138A (en
Inventor
张天贤
时巧
孟令同
汪子钦
崔国龙
孔令讲
杨晓波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN201711053015.4A priority Critical patent/CN107871138B/en
Publication of CN107871138A publication Critical patent/CN107871138A/en
Application granted granted Critical
Publication of CN107871138B publication Critical patent/CN107871138B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/006Theoretical aspects
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88Radar or analogous systems specially adapted for specific applications
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/257Belief theory, e.g. Dempster-Shafer

Landscapes

  • Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Theoretical Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Electromagnetism (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Radar Systems Or Details Thereof (AREA)
  • Medicines Containing Antibodies Or Antigens For Use As Internal Diagnostic Agents (AREA)

Abstract

The invention discloses a target intention recognition method based on an improved D-S evidence theory, belongs to the technical field of radar, and relates to a networking radar target intention recognition technology. The method solves the problem that the target intention of the networking radar cannot be effectively identified by the traditional D-S evidence theory when the evidence acquired by different radars needs to be comprehensively considered and the different evidences conflict with each other. The method is characterized in that data received by a networking radar is preprocessed, basic probability distribution of each evidence is obtained based on the similarity between intentions and the evidence, finally, the correlation between the evidences is considered, and the similarity coefficient between the evidences is introduced in evidence conflict processing and D-S evidence synthesis. The method can eliminate abnormal data in the data received by the networking radar, reduce the adverse effect of the abnormal data on the target intention recognition, effectively solve the problem of evidence conflict in practical application and improve the target intention recognition performance.

Description

Target intention identification method based on improved D-S evidence theory
Technical Field
The invention belongs to the technical field of radar, and relates to a networking radar target intention identification technology.
Background
The target intent identifies a process that infers the purpose of the target or predicts its future action from incomplete information obtained. Object intent identification plays a critical role in battlefield situation assessment and is also the basis for threat assessment. Whether the target intention can be correctly identified or not relates to the situation change of the whole battlefield, so that the target intention identification technology is a key research subject of domestic and foreign experts at present.
The actually acquired target incomplete information contains high randomness and uncertainty. The traditional Bayes probability theory is difficult to describe the irregular uncertainty, and the D-S evidence theory has the capability of solving uncertain factors and only needs to meet the weaker condition than the Bayes probability theory, so that the D-S evidence theory is widely applied to target intention recognition. In the literature "Research on registration Technique of Target reactive indications in Sea Battlefield, 2012Fifth International Symposium on comparative indications and Design, 2012: 130-. The result obtained by fusing the highly conflicting evidences by using the D-S evidence theory is often contrary to the conventional theory. Therefore, the scholars at home and abroad have conducted a great deal of research on the improvement of the D-S evidence theory. In the literature, "a new evidence expression model and its application in enemy combat intention identification, command control and simulation, 2006, 28 (6): 9-13 ", an improved D-S evidence theory method is proposed and applied to the intention identification of enemy battles. However, neither of the above documents considers its application in a networked radar. However, in recent years, networking radars are more and more widely applied to radar, research on networking radars is urgent, and recognition of a target intention by using the networking radars is more and more emphasized by researchers.
Disclosure of Invention
The invention aims to research and design a target intention identification method based on an improved D-S evidence theory aiming at the defects in the background art, and solve the problem that the traditional D-S evidence theory cannot effectively identify the target intention when the existing evidences conflict with each other.
The solution of the invention is that firstly, data received by each radar is preprocessed, secondly, the basic probability distribution of each evidence is obtained by utilizing the similarity between the intention and the evidence, secondly, the evidence is subjected to conflict processing based on the similarity coefficient between the evidences, and finally, each evidence is synthesized by utilizing a D-S evidence theory to obtain the final probability distribution and judge the target intention. The method effectively solves the problem of evidence conflict in practical application and improves the target intention identification performance.
For the convenience of describing the contents of the present invention, the following terms are first explained:
the term 1: event(s)
The various parameters of the object detected by different radars or by the same radar at different points in time constitute different events.
The term 2: properties
The attribute refers to each parameter of the detected object.
The term 3: standard value
Each intention has a most suitable target state as a basis, and the target state is called a standard value of the intention. The standard value is a value (which may be a fixed value or a value range) given by an expert system according to past experience in a specific background.
The term 4: identification framework
The recognition framework Θ is a complete set of all possible answers to a question. All subsets of which together form a power set of theta, denoted 2Θ
The term 5: mass function
A basic probability distribution over the framework, BPA for short, is identified. BPA on the recognition framework Θ is a 2Θ→[0,1]Is also called a mass function.
The invention provides a target intention identification method based on an improved D-S evidence theory, which comprises the following steps:
step 1: the pre-processing of the data is carried out,
the position of the target T is marked as (x, y, z), wherein x, y and z are respectively an x-axis coordinate, a y-axis coordinate and a z-axis coordinate of the target; three radars RiAnd the position of i-1, 2,3 is marked as (x)i,yi,zi) I is 1,2,3, wherein xi、yi、ziI is 1,2 and 3 are x-axis coordinates, y-axis coordinates and z-axis coordinates of the three radars respectively; the distances between the three radars are:
Figure BDA0001453322020000021
the distance from the target to the three radars is recorded as riI is 1,2,3, and the radial velocity of the target detected by each radar is respectively recorded as viI is 1,2, 3; the geometric relationship can obtain the included angle theta formed by any two radars and the targetij,i、j=1,2,3,i≠j:
Figure BDA0001453322020000022
For the condition that all radars are positioned on the same plane and the speed direction of the target is coplanar with the plane where the radars are positioned; the target speed is recorded as v, and two radars R are selected1,R2Target speed and radar R1,R2The angle of the radial velocity of the detected object is recorded as beta12Coexisting in three topologies;
for topology one: target velocity v at radar R1Left of the radial velocity of the detected target; then from the geometric relationship:
Figure BDA0001453322020000031
for topology two: the target speed v is between the radial speed included angles of the targets detected by the two radars; then from the geometric relationship:
Figure BDA0001453322020000032
for topology three: target velocity v at radar R2Right of the detected radial velocity of the target; then from the geometric relationship:
Figure BDA0001453322020000033
then when the data from any two radars is substituted into equations (3), (4), (5), the result is the positive one β1Is the correct beta1Accordingly, is positive beta2Is the correct beta2(ii) a In the networking radar, two radars are randomly selected to deduce the target speed by utilizing the radial speed of the target detected by the radars, different radars are selected to carry out mutual verification because the target speed is a fixed value, and the radar which is different from the target speed deduced by combining all the radars and any two other radars is found out, namely the radar with abnormality exists;
for the condition that the radial speed of a part of the targets detected by all the radars is coplanar with the target speed, and the other part of the targets is not in the same plane with the target speed; the target speed is denoted v and is related to the radar R1,R2Coplanar, target speed and radar R1,R2The angle of the measured radial velocity of the target is recorded as beta12Another arbitrary radar R3And the included angle between the radial speed of the detected target and the speed of the target is recorded as beta3Coexisting in three topologies; the extension line of the target speed is crossed with the radar R1,R2Line segment R formed by connecting lines1R2At H, the line connecting the target T and H is denoted as TH, radar R1The line connecting H is denoted as R1H, radar R3The line connecting H is denoted as R3H, target T and radar R1,R2The included angle formed is marked as ≈ TR1R2=γ1The included angle formed by the three radars is recorded as ≈ R2R1R3=α1(ii) a Then from the geometric relationship:
Figure BDA0001453322020000041
for topology one: target velocity v at radar R1Left of the radial velocity of the detected target; obtaining the target speed and the included angle beta from the formula (3)12(ii) a Then from the geometric relationship:
Figure BDA0001453322020000042
Figure BDA0001453322020000043
for topology two: target velocity v at two radars R1、R2The included angle of the radial speed of the detected target; obtaining the target speed and the included angle beta from the formula (4)12(ii) a Then from the geometric relationship:
Figure BDA0001453322020000044
Figure BDA0001453322020000045
for topology three: target velocity v at radar R2Right of the detected radial velocity of the target; obtaining the target speed and the included angle beta from the formula (5)12(ii) a Then from the geometric relationship:
Figure BDA0001453322020000046
Figure BDA0001453322020000047
TH and R can be obtained by solving the three models3H, then from the geometric relationship:
Figure BDA0001453322020000048
vcosβ3=v3 (14)
thereby passing through v3A target speed v can be deduced; by applying radar R1,R2Combining with different radars, wherein each combination can derive a target speed, finding out radars with different target speeds obtained by other combinations in the derived target speeds through comparison, namely, radars with problems, if the radars are radars used for identifying the target intention, correcting the data of the radars, and otherwise, removing the data of the radars;
step 2: the data is normalized by the normalization method,
the number of intentions the target has is denoted as L, and the ith intention is denoted as ylL-1, 2, …, L, and putting all intents in the same space results in the target tactical intention space Y-Y (Y)1,y2,…,yL) (ii) a If the number of the detected events in a period of time is E and the number of the attributes of the target is N, the nth attribute value of the E event is recorded as xen1,2, … E, N1, 2, … N; combining values of all attributes of the e-th event to form a target feature vector Xe=(xe1,xe2,…xen,…,xeN) (ii) a Target has intention ylThe feature vector X of the time is called the standard feature vector
Figure BDA0001453322020000051
Standard feature vector
Figure BDA0001453322020000052
The form is as follows:
Figure BDA0001453322020000053
standard value
Figure BDA0001453322020000054
Component (b) of
Figure BDA0001453322020000055
Representing an attribute n corresponding to an intention ylA standard value of time;
adopting a Min-Max standardization method to carry out the value x of the nth attribute of the e event of the original dataenMapping to a uniform range; let minnIs the minimum value of the attribute n, maxnIs the maximum value of the attribute n, and all the intervals after the attribute conversion are unified as [0,1 ]]The value obtained by conversion is recorded as
Figure BDA0001453322020000056
The formula is as follows:
Figure BDA0001453322020000057
the standard value of the nth attribute of the ith intention is compared in a similar way
Figure BDA0001453322020000058
Is converted into
Figure BDA0001453322020000059
The formula is as follows:
Figure BDA00014533220200000510
and step 3: the degree to which the target state is similar to each intent,
using the formulas (16) and (17) to convert the target current feature vector XeStandard vector corresponding to target intention space
Figure BDA00014533220200000511
The data is processed in a standardized way to obtain
Figure BDA00014533220200000512
Then, calculating the similarity between the target characteristic vector and the standard vector by using the formula (18); the calculation formula is as follows:
Figure BDA00014533220200000513
when the standard value is reached
Figure BDA00014533220200000514
When a value of 0 or 1 is taken, e.g.
Figure BDA00014533220200000515
Figure BDA00014533220200000516
Otherwise
Figure BDA00014533220200000517
When the standard value is reached
Figure BDA00014533220200000518
When the values are discrete and not 0 or 1,
Figure BDA00014533220200000519
when the standard value is reached
Figure BDA0001453322020000061
Values are continuous and interval is [ a, b]When the temperature of the water is higher than the set temperature,
Figure BDA0001453322020000062
Figure BDA0001453322020000063
wherein N is the number of attributes;
will be provided with
Figure BDA0001453322020000064
Is marked as helThat is, the similarity between the e-th event and the l-th intention is represented, so as to obtain a similarity matrix H between each event and each intention:
Figure BDA0001453322020000065
and 4, step 4: the probability density distribution function is a function of,
normalizing the similarity matrix H according to rows to obtain a basic probability distribution function m of each evidencee(·):
Figure BDA0001453322020000066
Wherein
Figure BDA0001453322020000067
I.e., the row and column of the similarity matrix H of equation (20);
and 5: the conflict is processed, and the processing is carried out,
two evidences E under the recognition framework Θ1,E2The basic probability distribution function is m1(. and m)2(. o) each jiao Yuan is AiAnd BjThen evidence E1、E2The similarity coefficient between can be expressed as:
Figure BDA0001453322020000068
setting the number of the common evidences as E, and calculating a similarity coefficient between any two evidences to obtain a similarity matrix S; adding each row of the similar matrix S to obtain evidence E of each evidence pairiThen normalizing the support degree of the evidence to obtain the credibility Cr (m) of the evidencei) (ii) a Take it as evidence EiObtaining the weighted basic probability distribution of each evidence;
the basic probability distribution of evidence is averaged:
Figure BDA0001453322020000071
step 6: D-S evidence synthesis and intention judgment,
synthesizing the basic probability distribution of the evidence obtained by the formula (23) by the following formula;
Figure BDA0001453322020000072
wherein
Figure BDA0001453322020000073
The resultant m (C) is reacted with m (A) obtained by the formula (23)i) Continuing the synthesis of the formula (24), and continuing the synthesis with m (A) obtained by the formula (23)i) Performing the synthesis of formula (24) in cycles until completion of the E-2 syntheses; and obtaining final probability distribution of each intention, wherein the intention corresponding to the maximum probability value is the target intention.
The invention has the beneficial effects that: according to the method, data received by each radar are preprocessed, basic probability distribution of each evidence is obtained through the similarity between the intentions and the evidence, conflict processing is conducted on the evidence based on the similarity coefficient between the evidences, and finally the evidences are synthesized through a D-S evidence theory to obtain final probability distribution of each intention and judge the target intention, so that the problem that the target intention cannot be effectively identified through the traditional D-S evidence theory when the evidences conflict is effectively solved. The invention has the advantages that whether each evidence conflicts or not is not judged, the algorithm complexity is reduced, and the target intention identification performance is improved. The invention can be applied to the fields of civil military and the like.
Drawings
FIG. 1 is a general block diagram of the method provided by the present invention.
FIG. 2 is a block flow diagram of a method provided by the present invention.
FIG. 3 is a block diagram of the D-S evidence synthesis employed in the present invention.
FIG. 4 is a schematic of the topology of a radar target for data preprocessing employed by the present invention.
Fig. 5 is a schematic diagram of three topologies corresponding to the case where all radars are located on the same plane and the speed direction of the target is coplanar with the plane on which the radars are located in the data preprocessing employed by the present invention.
Fig. 6 is a schematic diagram of three topologies, in which the radial velocity of the target detected by a part of all radars in the data preprocessing adopted by the invention is coplanar with the target velocity, and the other part corresponds to the case that the target velocity is not in the same plane.
FIG. 7 is a simulation result of target intent recognition by directly performing D-S evidence theoretical synthesis without data preprocessing in the embodiment of the present invention.
FIG. 8 is a simulation result of target intent recognition based on improved D-S evidence theory synthesis without data preprocessing in an embodiment of the present invention.
FIG. 9 is a simulation result of target intent recognition by direct D-S evidence theory synthesis after data preprocessing in an embodiment of the present invention.
FIG. 10 is a simulation result of target intent recognition based on improved D-S evidence theory synthesis after data preprocessing in an embodiment of the present invention.
Detailed Description
The invention mainly adopts a simulation experiment method for verification, and all the steps and conclusions are verified to be correct on Matlab 2012. The present invention will be described in further detail with reference to specific embodiments.
The method comprises the following steps: the pre-processing of the data is carried out,
the position of the target T is denoted as (x, y, z), where x, y, z are the x-axis coordinate, y-axis coordinate, and z-axis coordinate of the target, respectively. Three radars RiAnd the position of i-1, 2,3 is marked as (x)i,yi,zi) I is 1,2,3, wherein xi、yi、ziAnd i is 1,2 and 3 are x-axis coordinates, y-axis coordinates and z-axis coordinates of the three radars respectively. The distances between the three radars are:
Figure BDA0001453322020000081
the distance from the target to the three radars is recorded as riI is 1,2,3, and the radial velocity of the target detected by each radar is respectively recorded as viAnd i is 1,2 and 3. The geometric relationship can obtain the included angle theta formed by any two radars and the targetij=∠RiTRj,i,j=1,2,3,i≠j:
Figure BDA0001453322020000082
For the case where all radars are in the same plane and the direction of the target's velocity is coplanar with the plane in which the radars are located. The target speed is recorded as v, and two radars R are selected1,R2Target speed and radar R1,R2The angle of the radial velocity of the detected object is recorded as beta12There are three topologies.
For topology one: target velocity v at radar R1Left side of the radial velocity of the detected target. Then from the geometric relationship:
Figure BDA0001453322020000083
for topology two: the target velocity v is between the radial velocity angles of the targets detected by the two radars. Then from the geometric relationship:
Figure BDA0001453322020000091
for topology three: target velocity v at radar R2To the right of the radial velocity of the detected target. Then from the geometric relationship:
Figure BDA0001453322020000092
then when we use the data of any two radars to substitute into equations (27), (28), (29), the result is the positive one β1Is the correct beta1Accordingly, is positive beta2Is the correct beta2. In the networking radar, the target speed can be deduced by randomly selecting two radars according to the radial speed of the target detected by the two radars, and because the target speed is a fixed value, different radars can be selected for mutual verification, and the radar which has the target speed deduced by combining with all the radars and the radar which has the target speed deduced by combining with any two other radars is found, namely the radar with the abnormality, if the radar is the radar for identifying the target intention, the data of the radar is corrected, otherwise, the data of the radar is removed.
For all radars, the radial velocity of a part of the targets detected by the radars is coplanar with the target velocity, and the other part of the radars is not coplanar with the target velocity. The target speed is denoted v and is related to the radar R1,R2Coplanar, target speed and radar R1,R2The angle of the measured radial velocity of the target is recorded as beta12Another arbitrary radar R3And the included angle between the radial speed of the detected target and the speed of the target is recorded as beta3There are three topologies. The extension line of the target speed is crossed with the radar R1,R2Line segment R formed by connecting lines1R2At H, the line connecting the target T and H is denoted as TH, radar R1The line connecting H is denoted as R1H, radar R3The line connecting H is denoted as R3H, target T and radar R1,R2The included angle formed is marked as ≈ TR1R2=γ1The included angle formed by the three radars is recorded as ≈ R2R1R3=α1. Then from the geometric relationship:
Figure BDA0001453322020000093
for rubbingThe first flapping structure: target velocity v at radar R1Left side of the radial velocity of the detected target. The target speed and the angle beta are obtained from the formula (27)12. Then from the geometric relationship:
Figure BDA0001453322020000101
Figure BDA0001453322020000102
for topology two: target velocity v at two radars R1、R2Radial velocity angle of the detected target. The target speed and the included angle beta are obtained from the formula (28)12. Then from the geometric relationship:
Figure BDA0001453322020000103
Figure BDA0001453322020000104
for topology three: target velocity v at radar R2To the right of the radial velocity of the detected target. The target speed and the angle beta are obtained from the formula (29)12. Then from the geometric relationship:
Figure BDA0001453322020000105
Figure BDA0001453322020000106
TH and R can be obtained by solving the three models3H, then from the geometric relationship:
Figure BDA0001453322020000107
vcosβ3=v3 (38)
thereby passing through v3The target speed v can be derived. By applying radar R1,R2If the radar is used for identifying the target intention, the data of the radar is corrected, otherwise, the data of the radar is removed.
Step two: the data is normalized by the normalization method,
the number of intentions the target has is denoted as L, and the ith intention is denoted as ylL-1, 2, …, L, and putting all intents in the same space results in the target tactical intention space Y-Y (Y)1,y2,…,yL). If the number of the detected events in a period of time is E and the number of the attributes of the target is N, the nth attribute value of the E event is recorded as xenE is 1,2, … E, N is 1,2, … N. Combining values of all attributes of the e-th event to form a target feature vector Xe=(xe1,xe2,…xen,…,xeN). Target has intention ylThe feature vector X of the time is called the standard feature vector
Figure BDA0001453322020000111
Standard feature vector
Figure BDA0001453322020000112
The form is as follows:
Figure BDA0001453322020000113
standard value
Figure BDA0001453322020000114
Component (b) of
Figure BDA0001453322020000115
Representing an attribute n corresponding to an intention ylThe standard value of time.
Adopting a Min-Max standardization method to carry out the value x of the nth attribute of the e event of the original dataenMapping to a uniform range. Let minnIs the minimum value of the attribute n, maxnIs the maximum value of the attribute n, and all the intervals after the attribute conversion are unified as [0,1 ]]The value obtained by conversion is recorded as
Figure BDA0001453322020000116
The formula is as follows:
Figure BDA0001453322020000117
the standard value of the nth attribute of the ith intention is compared in a similar way
Figure BDA0001453322020000118
Is converted into
Figure BDA0001453322020000119
The formula is as follows:
Figure BDA00014533220200001110
step three: the degree to which the target state is similar to each intent,
using equations (40) and (41) to convert the target current feature vector X into the target current feature vector XeStandard vector corresponding to target intention space
Figure BDA00014533220200001111
The data is processed in a standardized way to obtain
Figure BDA00014533220200001112
Then, the similarity between the target feature vector and the standard vector is calculated by using the formula (42). The calculation formula is as follows:
Figure BDA00014533220200001113
when the standard value is reached
Figure BDA00014533220200001114
When a value of 0 or 1 is taken, e.g.
Figure BDA00014533220200001115
Figure BDA00014533220200001116
Otherwise
Figure BDA00014533220200001117
When the standard value is reached
Figure BDA00014533220200001118
When the values are discrete and not 0 or 1,
Figure BDA00014533220200001119
when the standard value is reached
Figure BDA00014533220200001120
Values are continuous and interval is [ a, b]When the temperature of the water is higher than the set temperature,
Figure BDA00014533220200001121
Figure BDA00014533220200001122
wherein N is the number of attributes.
Will be provided with
Figure BDA0001453322020000121
Is marked as helThat is, the similarity between the e-th event and the l-th intention is represented, so as to obtain a similarity matrix H between each event and each intention:
Figure BDA0001453322020000122
step five: the probability density distribution function is a function of,
normalizing the similarity matrix H according to rows to obtain a basic probability distribution function m of each evidencee(·):
Figure BDA0001453322020000123
Wherein
Figure BDA0001453322020000124
(i.e., the row and column of the similarity matrix H of equation (44)).
Step six: the conflict is processed, and the processing is carried out,
two evidences E under the recognition framework Θ1,E2The basic probability distribution function is m1(. and m)2(. o) each jiao Yuan is AiAnd BjThen evidence E1、E2The similarity coefficient between can be expressed as:
Figure BDA0001453322020000125
and (4) calculating a similarity coefficient between any two evidences to obtain a similarity matrix S, wherein the number of the common evidences is E. Adding each row of the similar matrix S to obtain evidence E of each evidence pairiThen normalizing the support degree of the evidence to obtain the credibility Cr (m) of the evidencei). Take it as evidence EiThe weighted basic probability distribution of each evidence is obtained.
The basic probability distribution of evidence is averaged:
Figure BDA0001453322020000126
step seven: D-S evidence synthesis and intention judgment,
the basic probability distribution of the evidence obtained by the formula (47) is synthesized by the following formula.
Figure BDA0001453322020000131
Wherein
Figure BDA0001453322020000132
The resultant m (C) is reacted with m (A) obtained by the formula (47)i) Continuing the synthesis of the formula (48), and continuing the synthesis with m (A) obtained by the formula (47)i) The synthesis of equation (48) is performed, looping until the synthesis is completed E-2 times. And obtaining final probability distribution of each intention, wherein the intention corresponding to the maximum probability value is the target intention.
The effect of the invention is further illustrated by the following simulation comparative tests:
simulation scene: two enemy planes of a certain type are arranged to fly to the enemy, wherein one plane is responsible for reconnaissance of the enemy and transmits the reconnaissance result to the other plane which is used as an attack task. The plane that serves as the attack task looks like attacking one's own side first and then attacks. Let T0, T1, T2, and T3 denote different times, and F1 and F2 assume numbers of two enemy airplanes. Suppose that there are 10 radars observing the target at the same time, and 10 radars are located on the same line. Assuming only radar R1The intention of the target is identified, and the rest radars are used for assisting in detecting the radar R1And whether the observed data are accurate or not. Now consider radar R1The speed of observation is problematic and all others are correct. The 10 radar positions are respectively: (100,300,0), (100,400,0), (100,200,0), (100, 0), (100,900,0), (100,700,0), (100,500,0), (100,1000,0), (100,2000,0), (100,3000, 0). Radar R1Observed at T0-T3The specific process of the two airplanes at the moment is shown in table 1. 10 radars at T0-T3The radial velocities of the targets observed at the respective times are shown in table 2.
According to the battlefield environment and the target characteristics, whether a target attack radar is started or not (1 represents starting and 0 represents closing), the distance, the height and the speed are used as target intention identification elements, and an expert sets a target intention space and the standard thereof. Assuming that the target has the intentions of reconnaissance, attack, impersonation, shielding and the like, the present invention assumes that the standards set by experts are as shown in table 3.
The target intention is directly identified without preprocessing the data, the simulation result of the target intention identification based on the D-S evidence theory is shown in FIG. 7, and the simulation result of the target intention identification based on the improved D-S evidence theory is shown in FIG. 8.
The data are preprocessed and then subjected to target intention recognition, and a simulation result of the target intention recognition based on the D-S evidence theory is shown in FIG. 9. Simulation results of target intent recognition based on the improved D-S evidence theory are shown in fig. 10.
TABLE 1 details of two enemy planes
Figure BDA0001453322020000133
Figure BDA0001453322020000141
Speed of radar in table 210 part at different time
Figure BDA0001453322020000142
TABLE 3 intention and standard values thereof
Figure BDA0001453322020000143
By comparing fig. 7 and 8, it can be seen that the wrong radar R is used1The reason why the simulation of the speed of (3) has little influence on the result of the intention identification of the aircraft F1 is probably that the intention of F1 is reconnaissance, the weight of the altitude attribute is large, and the influence of the speed on the altitude attribute is small. However, it can be seen that the result of the identification of the intention of the airplane F2/F3 is greatly influenced, and the D-S evidence theory is directly usedBoth the theory of synthesis and the synthesis based on the improved D-S evidence theory fail to correctly identify the target intention.
As can be seen from fig. 9(a), the probability of reconnaissance intention of the F1 aircraft gradually increases with the acquisition of new evidence from time T0. Comparing fig. 9(a) and fig. 10(a) can see that the probability trends of the two graphs are basically consistent, but fig. 10(a) has a faster convergence rate; as can be seen from fig. 10(b), the aircraft was initially assumed to be a phantom attack in F2, and the assumed attack intention gradually decreased and the attack intention gradually increased with the lapse of time. On the other hand, although fig. 9(b) shows that the target intention is assumed to be attack at the beginning and the intention probability is gradually increased, at time T2, the assumed attack intention is decreased and the intention of shielding and attack is increased, and at time T3, the intention probabilities of shielding and attack are almost the same, and the target intention cannot be accurately determined.
Comparing fig. 7, fig. 8, fig. 9, and fig. 10, it can be found that performing the target intention recognition after the data preprocessing is more effective in recognizing the target intention than directly performing the target intention recognition without the data preprocessing, that the conflicting evidence synthesizing method based on the similarity coefficient between the evidences is more effective than directly using the D-S synthesizing method, that the convergence speed is faster, that the problem of evidence conflict can be effectively solved, and that the target intention can be more easily recognized. When the evidences do not conflict, the synthesis effect is better, so that the method of directly performing conflict processing on the evidences without judgment and then performing D-S synthesis is effective.
According to the specific implementation mode of the invention, the target intention can be well identified.
It will be appreciated by those of ordinary skill in the art that the embodiments described herein are intended to assist the reader in understanding the principles of the invention and are to be construed as being without limitation to such specifically recited embodiments and examples. Those skilled in the art can make various other specific changes and combinations based on the teachings of the present invention without departing from the spirit of the invention, and these changes and combinations are within the scope of the invention.

Claims (1)

1. A target intention identification method based on an improved D-S evidence theory comprises the following steps:
step 1: the pre-processing of the data is carried out,
the position of the target T is marked as (x, y, z), wherein x, y and z are respectively an x-axis coordinate, a y-axis coordinate and a z-axis coordinate of the target; three radars RiAnd the position of i-1, 2,3 is marked as (x)i,yi,zi) I is 1,2,3, wherein xi、yi、ziI is 1,2 and 3 are x-axis coordinates, y-axis coordinates and z-axis coordinates of the three radars respectively; the distances between the three radars are:
Figure FDA0002937153490000011
the distance from the target to the three radars is recorded as riI is 1,2,3, and the radial velocity of the target detected by each radar is respectively recorded as viI is 1,2, 3; the geometric relationship can obtain the included angle theta formed by any two radars and the targetij,i、j=1,2,3,i≠j:
Figure FDA0002937153490000012
For the condition that all radars are positioned on the same plane and the speed direction of the target is coplanar with the plane where the radars are positioned; the target speed is recorded as v, and two radars R are selected1,R2Target speed and radar R1,R2The angle of the radial velocity of the detected object is recorded as beta12Coexisting in three topologies;
for topology one: target velocity v at radar R1Left of the radial velocity of the detected target; then from the geometric relationship:
Figure FDA0002937153490000013
for topology two: the target speed v is between the radial speed included angles of the targets detected by the two radars; then from the geometric relationship:
Figure FDA0002937153490000014
for topology three: target velocity v at radar R2Right of the detected radial velocity of the target; then from the geometric relationship:
Figure FDA0002937153490000015
then when the data from any two radars is substituted into equations (3), (4), (5), the result is the positive one β1Is the correct beta1Accordingly, is positive beta2Is the correct beta2(ii) a In the networking radar, two radars are randomly selected to deduce the target speed by utilizing the radial speed of the target detected by the radars, different radars are selected to carry out mutual verification because the target speed is a fixed value, and the radar which is different from the target speed deduced by combining all the radars and any two other radars is found out, namely the radar with abnormality exists;
for the condition that the radial speed of a part of the targets detected by all the radars is coplanar with the target speed, and the other part of the targets is not in the same plane with the target speed; the target speed is denoted v and is related to the radar R1,R2Coplanar, target speed and radar R1,R2The angle of the measured radial velocity of the target is recorded as beta12Another arbitrary radar R3And the included angle between the radial speed of the detected target and the speed of the target is recorded as beta3Coexisting in three topologies; the extension line of the target speed is crossed with the radar R1,R2Line segment R formed by connecting lines1R2At H, the connection of the target T to HLine TH, Radar R1The line connecting H is denoted as R1H, radar R3The line connecting H is denoted as R3H, target T and radar R1,R2The included angle formed is marked as ≈ TR1R2=γ1The included angle formed by the three radars is recorded as ≈ R2R1R3=α1(ii) a Then from the geometric relationship:
Figure FDA0002937153490000021
for topology one: target velocity v at radar R1Left of the radial velocity of the detected target; obtaining the target speed and the included angle beta from the formula (3)12(ii) a Then from the geometric relationship:
Figure FDA0002937153490000022
Figure FDA0002937153490000023
for topology two: target velocity v at two radars R1、R2The included angle of the radial speed of the detected target; obtaining the target speed and the included angle beta from the formula (4)12(ii) a Then from the geometric relationship:
Figure FDA0002937153490000031
Figure FDA0002937153490000032
for topology three: target velocity v at radar R2Right of the detected radial velocity of the target; obtaining the target speed from equation (5)And angle of inclusion beta12(ii) a Then from the geometric relationship:
Figure FDA0002937153490000033
Figure FDA0002937153490000034
obtaining TH and R by jointly solving the three models3H, the values are obtained from the geometric relationship:
Figure FDA0002937153490000035
vcosβ3=v3 (14)
thereby passing through v3A target speed v can be deduced; by applying radar R1,R2Combining with different radars, wherein each combination can derive a target speed, finding out radars with different target speeds obtained by other combinations in the derived target speeds through comparison, namely, radars with problems, if the radars are radars used for identifying the target intention, correcting the data of the radars, and otherwise, removing the data of the radars;
step 2: the data is normalized by the normalization method,
the number of intentions the target has is denoted as L, and the ith intention is denoted as ylL-1, 2, …, L, and putting all intents in the same space results in the target tactical intention space Y-Y (Y)1,y2,…,yL) (ii) a If the number of the detected events in a period of time is E and the number of the attributes of the target is N, the nth attribute value of the E event is recorded as xen1,2, … E, N1, 2, … N; combining values of all attributes of the e-th event to form a target feature vector Xe=(xe1,xe2,…xen,…,xeN) (ii) a Target has intention ylOf the hourThe feature vector X is called the standard feature vector
Figure FDA0002937153490000038
Standard feature vector
Figure FDA0002937153490000037
The form is as follows:
Figure FDA0002937153490000036
standard value
Figure FDA0002937153490000041
Component (b) of
Figure FDA0002937153490000042
Representing an attribute n corresponding to an intention ylA standard value of time;
adopting a Min-Max standardization method to carry out the value x of the nth attribute of the e event of the original dataenMapping to a uniform range; let minnIs the minimum value of the attribute n, maxnIs the maximum value of the attribute n, and all the intervals after the attribute conversion are unified as [0,1 ]]The value obtained by conversion is recorded as
Figure FDA0002937153490000043
The formula is as follows:
Figure FDA0002937153490000044
the standard value of the nth attribute of the ith intention is compared in a similar way
Figure FDA0002937153490000045
Is converted into
Figure FDA0002937153490000046
Formula asThe following:
Figure FDA0002937153490000047
and step 3: the degree to which the target state is similar to each intent,
using the formulas (16) and (17) to convert the target current feature vector XeStandard vector corresponding to target intention space
Figure FDA00029371534900000420
The data is processed in a standardized way to obtain
Figure FDA0002937153490000048
Then, calculating the similarity between the target characteristic vector and the standard vector by using the formula (18); the calculation formula is as follows:
Figure FDA0002937153490000049
when the standard value is reached
Figure FDA00029371534900000410
When the value is 0 or 1, if
Figure FDA00029371534900000411
Then
Figure FDA00029371534900000412
Otherwise
Figure FDA00029371534900000413
When the standard value is reached
Figure FDA00029371534900000414
When the values are discrete and not 0 and 1,
Figure FDA00029371534900000415
when the standard value is reached
Figure FDA00029371534900000416
Values are continuous and interval is [ a, b]When the temperature of the water is higher than the set temperature,
Figure FDA00029371534900000417
Figure FDA00029371534900000418
wherein N is the number of attributes;
will be provided with
Figure FDA00029371534900000419
Is marked as helThat is, the similarity between the e-th event and the l-th intention is represented, so as to obtain a similarity matrix H between each event and each intention:
Figure FDA0002937153490000051
and 4, step 4: the probability density distribution function is a function of,
normalizing the similarity matrix H according to rows to obtain a basic probability distribution function m of each evidencee(·):
Figure FDA0002937153490000052
Wherein
Figure FDA0002937153490000053
I.e., the row and column of the similarity matrix H of equation (20);
and 5: the conflict is processed, and the processing is carried out,
two evidences E under the recognition framework Θ1,E2Basic probability distribution function ofAre respectively m1(. and m)2(. o) each jiao Yuan is AiAnd BjThen evidence E1、E2The similarity coefficient between can be expressed as:
Figure FDA0002937153490000054
setting the number of the common evidences as E, and calculating a similarity coefficient between any two evidences to obtain a similarity matrix S; adding each row of the similar matrix S to obtain evidence E of each evidence pairiThen normalizing the support degree of the evidence to obtain the credibility Cr (m) of the evidencei) (ii) a Take it as evidence EiObtaining the weighted basic probability distribution of each evidence;
the basic probability distribution of evidence is averaged:
Figure FDA0002937153490000055
step 6: D-S evidence synthesis and intention judgment,
synthesizing the basic probability distribution of the evidence obtained by the formula (23) by the following formula;
Figure FDA0002937153490000061
wherein
Figure FDA0002937153490000062
The resultant m (C) is reacted with m (A) obtained by the formula (23)i) Continuing the synthesis of the formula (24), and continuing the synthesis with m (A) obtained by the formula (23)i) Performing the synthesis of formula (24) in cycles until completion of the E-2 syntheses; and obtaining final probability distribution of each intention, wherein the intention corresponding to the maximum probability value is the target intention.
CN201711053015.4A 2017-11-01 2017-11-01 Target intention identification method based on improved D-S evidence theory Active CN107871138B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711053015.4A CN107871138B (en) 2017-11-01 2017-11-01 Target intention identification method based on improved D-S evidence theory

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711053015.4A CN107871138B (en) 2017-11-01 2017-11-01 Target intention identification method based on improved D-S evidence theory

Publications (2)

Publication Number Publication Date
CN107871138A CN107871138A (en) 2018-04-03
CN107871138B true CN107871138B (en) 2021-04-30

Family

ID=61752461

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711053015.4A Active CN107871138B (en) 2017-11-01 2017-11-01 Target intention identification method based on improved D-S evidence theory

Country Status (1)

Country Link
CN (1) CN107871138B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108761263B (en) * 2018-05-24 2021-03-12 中电华创(苏州)电力技术研究有限公司 Fault diagnosis system based on evidence theory
CN110334340B (en) * 2019-05-06 2021-08-03 北京泰迪熊移动科技有限公司 Semantic analysis method and device based on rule fusion and readable storage medium
CN110940959B (en) * 2019-12-13 2022-05-24 中国电子科技集团公司第五十四研究所 Man-vehicle classification and identification method for low-resolution radar ground target
CN111563596B (en) * 2020-04-22 2022-06-03 西北工业大学 Uncertain information reasoning target identification method based on evidence network
CN113177615B (en) * 2021-06-01 2024-01-19 西北工业大学 Evidence forest-based information uncertainty condition target intention recognition method
CN113608211B (en) * 2021-08-09 2023-09-05 电子科技大学 Radar networking mode identification method based on communication traffic information assistance
CN114034338B (en) * 2021-10-29 2023-08-11 国网安徽省电力有限公司电力科学研究院 Switch cabinet multi-source parameter monitoring method based on improved D-S evidence theory
CN116304887B (en) * 2023-05-16 2024-02-27 中国电子科技集团公司第五十四研究所 Target identification method based on evidence theory

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2625543A1 (en) * 2010-10-08 2013-08-14 The University of Massachusetts System and method for generating derived products in a radar network
CN104777469A (en) * 2015-04-21 2015-07-15 电子科技大学 Radar node selection method based on measurement error covariance matrix norm
CN106228132A (en) * 2016-07-21 2016-12-14 中国电子科技集团公司第三研究所 Target identification method and Target Identification Unit

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2625543A1 (en) * 2010-10-08 2013-08-14 The University of Massachusetts System and method for generating derived products in a radar network
CN104777469A (en) * 2015-04-21 2015-07-15 电子科技大学 Radar node selection method based on measurement error covariance matrix norm
CN106228132A (en) * 2016-07-21 2016-12-14 中国电子科技集团公司第三研究所 Target identification method and Target Identification Unit

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Radar network topology optimization for joint target position and velocity estimation;Inna Ivashko etal.;《Signal Processing》;20160709;全文 *
Research on Recognition Technique of Target Tactical Intentions in Sea Battlefield;Sun Yuelin etal.;《2012 Fifth International Symposium on Computational Intelligence and Design》;20121231;全文 *
基于D-S证据理论的海战场目标战术意图识别技术研究;孙越林等;《舰船电子工程》;20121231;第32卷(第5期);全文 *
数据融合在组网雷达作战效能发挥中的应用;张继刚等;《火力与指挥控制》;20090930;第34卷(第9期);全文 *
神经网络和证据理论融合的管道泄漏诊断方法;陈斌等;《北京邮电大学学报》;20090228;第32卷(第1期);全文 *

Also Published As

Publication number Publication date
CN107871138A (en) 2018-04-03

Similar Documents

Publication Publication Date Title
CN107871138B (en) Target intention identification method based on improved D-S evidence theory
CN106873628B (en) A kind of collaboration paths planning method of multiple no-manned plane tracking multimachine moving-target
CN104077601B (en) A kind of method that based target integrated identification is carried out using different types of information
CN112598046B (en) Target tactical intent recognition method in multi-machine cooperative air combat
CN111783020B (en) Battlefield entity target grouping method and system with multidimensional features
CN113159266B (en) Air combat maneuver decision method based on sparrow searching neural network
CN104199788A (en) Multi-target air-to-ground semi-supervised machine adaption independent decision-making real-time attack method
Chen et al. Online intention recognition with incomplete information based on a weighted contrastive predictive coding model in wargame
CN112800082B (en) Air target identification method based on confidence rule base inference
CN113065094A (en) Situation assessment method and system based on accumulated foreground value and three-branch decision
CN109977763B (en) Aerial small target identification method based on improved evidence trust
CN115993075B (en) Missile control surface fault detection method based on SSLLE and self-adaptive threshold
CN116520311A (en) GLMB-based adaptive track initiation method
CN115019238B (en) Group target dynamic behavior identification method based on hidden Markov model
Wang et al. An intelligent algorithm for infrared target recognition
CN115661576A (en) Method for identifying airplane group intention under sample imbalance
CN111563596B (en) Uncertain information reasoning target identification method based on evidence network
CN104850856A (en) Multi-extension target tracking method for affinity propagation cluster observation
CN115935773A (en) Layered identification method for target tactical intentions in air combat simulation environment
Xu et al. A novel DBN-based intention inference algorithm for warship air combat
Su et al. Target Intention Recognition Model Based on SMOTE-AdaBoost under Unbalanced Samples
Liu et al. Techniques for Aerial Target Recognition Based on Belief Rule Base and Evidential Reasoning
CN105488458B (en) A kind of Ship Target character representation method based on image space structure distribution
Gao et al. Target Combat Intention Recognition Based on Improved Fisher Information Matrix
CN111435256A (en) Automatic terrain evasion method for aircraft based on grid map

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant