CN113177615B - Evidence forest-based information uncertainty condition target intention recognition method - Google Patents

Evidence forest-based information uncertainty condition target intention recognition method

Info

Publication number
CN113177615B
Authority
CN
China
Prior art keywords
formula
attribute
evidence
intention
calculating
Prior art date
Legal status: Active
Application number
CN202110605856.1A
Other languages
Chinese (zh)
Other versions
CN113177615A (en)
Inventor
蒋雯
李新宇
邓鑫洋
张瑜
Current Assignee
Northwestern Polytechnical University
Original Assignee
Northwestern Polytechnical University
Priority date
Filing date
Publication date
Application filed by Northwestern Polytechnical University
Priority to CN202110605856.1A
Publication of CN113177615A
Application granted
Publication of CN113177615B
Active legal status
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/24: Classification techniques
    • G06F 18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2411: Classification techniques relating to the classification model based on the proximity to a decision surface, e.g. support vector machines
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/004: Artificial life, i.e. computing arrangements simulating life
    • G06N 3/006: Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]


Abstract

The invention discloses an evidence forest-based method for recognizing target intention under uncertain information, which comprises the following steps: step one, data preprocessing; step two, constructing credibility decision trees; step three, constructing an evidence forest from the credibility decision trees; and step four, performing intention recognition with the constructed evidence forest. The method provides a way to calculate the information entropy of uncertain information based on evidence similarity, fully considers the weight of each credibility decision tree in the evidence forest, provides a weight calculation method based on likelihood theory, combines the results of the credibility decision trees by weighted averaging, and obtains the final result through evidence self-combination. On the premise of guaranteeing intention recognition accuracy, the method solves the problem of target intention recognition with uncertain information.

Description

Evidence forest-based information uncertainty condition target intention recognition method
Technical Field
The invention belongs to the technical field of target intention recognition, and particularly relates to an evidence forest-based method for recognizing target intention under conditions of uncertain information.
Background
Target intent refers to the underlying aim and planned behavior through which a target seeks to achieve a certain goal, where the planned behavior is the outcome that the target plans or is scheduled to achieve. Battlefield target tactical intention recognition is the process of combining the target information obtained from various information sources in a specific battlefield environment and then effectively analyzing the battlefield situation and the enemy's tactical intention. In the process of target intention recognition, the information obtained from the sources is to a great extent uncertain, so how to handle this uncertain information is a problem currently faced.
Among existing machine learning algorithms, the decision tree method is one of the most commonly used. Its characteristic strength is that it can break a complex decision problem down into several simple ones. Although decision trees are popular and effective, they are not well suited to problems involving uncertain information: the uncertainty can distort the classification results and even lead to erroneous decisions. Belief function theory provides a convenient framework for handling uncertainty within decision tree techniques, and the credibility decision tree combines the advantages of decision tree techniques and belief function theory to solve classification problems with epistemic uncertainty.
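To make the belief-function formalism used throughout the following sections concrete, a minimal Python sketch is given below (it is not part of the patent; the three-intention frame and the mass values are illustrative assumptions). A basic probability assignment (BPA) distributes mass over subsets of the recognition frame, and the belief function is Bel(A) = Σ m(B) over all B ⊆ A:

```python
def belief(mass, query):
    """Bel(A) = sum of m(B) over all focal elements B that are subsets of A."""
    query = frozenset(query)
    return sum(m for focal, m in mass.items() if frozenset(focal) <= query)

# A basic probability assignment over subsets of the intention frame; masses sum to 1.
mass = {
    ("attack",): 0.5,
    ("reconnaissance",): 0.2,
    ("attack", "reconnaissance"): 0.2,            # mass left uncommitted between two intentions
    ("attack", "reconnaissance", "cruise"): 0.1,  # total ignorance over the whole frame
}

print(belief(mass, ("attack",)))                   # 0.5
print(belief(mass, ("attack", "reconnaissance")))  # 0.9
```

Mass assigned to a multi-element subset such as (attack, reconnaissance) is exactly how the credibility decision tree represents evidence that cannot discriminate between several intentions.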
A random forest is a combination of decision trees built with a bagging algorithm and the random subspace technique. Conventional random forest algorithms use voting or averaging over the results of the individual decision trees to obtain the final class, and their accuracy is limited when they face classification problems with uncertainty. To deal with uncertainty better, an evidence forest algorithm is proposed that can handle uncertain information and is applied to target intention recognition under uncertainty.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides an evidence forest-based method for recognizing target intention under uncertain information. By establishing an evidence forest model and feeding the data to be tested, which carry uncertain information, into the model, the intention recognition result of the target is obtained; the problem of target intention recognition with uncertain information is thus well solved, and the accuracy of intention recognition is fully improved.
In order to solve the above technical problems, the invention adopts the following technical scheme. The evidence forest-based information uncertainty condition target intention recognition method is characterized by comprising the following steps:
Step one, data preprocessing:
Step 101: input n target training samples, each comprising an attribute feature vector X = [X_1, X_2, ..., X_v], where v is the number of attributes and X_i (i = 1, 2, ..., v) is the measured value of the i-th attribute feature; the recognition frame corresponding to the intention Y of each sample is Θ_Y = {Y_1, Y_2, ..., Y_w}, where w is the number of intentions in the recognition frame;
Step 102: establish Gaussian fuzzy numbers f[C(μ, δ)] for each attribute feature, where μ and δ are the mean and variance of the Gaussian fuzzy number C, and the attribute recognition frame is Θ_F = {F_1, F_2, ..., F_k}, where k is the number of fuzzy classes in each attribute recognition frame; compute the Gaussian fuzzy membership values f(X_i) of each attribute measurement X_i, derive from them the single-subset masses m'(F_j) and the double-subset masses m'(F_j, F_{j+1}) (i = 1, 2, ..., v; j = 1, 2, ..., k), and normalize them to obtain the basic probability assignment m_i of the attribute over the single and double subsets of the fuzzy classes; in this way the attribute measurements (X_1, X_2, ..., X_v) of the n samples are converted into the form of basic probability assignments (m_1, m_2, ..., m_v)_n;
Step 103: convert each attribute basic probability assignment m_i into a probability distribution P_i = [P(F_1), P(F_2), ..., P(F_k)]; converting all attribute features of the n samples yields (P_1, P_2, ..., P_v)_n;
Step 104: for the samples whose intention is Y_p, perform a product combination of the attribute probabilities P_i(F_j), so that the attribute recognition frame elements are combined into joint elements (one fuzzy class per attribute) and the combined intention is expressed as the product of the corresponding attribute probabilities; after all samples have been combined, the values of samples that share the same joint attribute recognition frame element are added together, and the sums are normalized to obtain P'(Y_p) for each joint element;
Step 105: construct multi-subset beliefs for the intention labels, where z indexes the multi-subsets generated over (Y_1, Y_2, ..., Y_w); the generated multi-subsets together with the single subsets [P'(Y_1), P'(Y_2), ..., P'(Y_w)] form [P'(Y_1), P'(Y_2), ..., P'(Y_1, Y_2, ..., Y_w)], which is normalized to obtain the basic probability assignment of the intention, m_Y = [m(Y_1), m(Y_2), ..., m(Y_1, Y_2, ..., Y_w)];
Step 106: the joint attribute recognition frame elements and the intention probability assignments m_Y = [m(Y_1), m(Y_2), ..., m(Y_1, Y_2, ..., Y_w)] of n1 samples (n1 < n) are used as the data for training the model;
Step two, constructing the credibility decision tree:
Step 201: calculate the pignistic probability distribution of the intention of each training sample, BetP(Y_p) = Σ m(Y)/|Y| summed over all subsets Y ⊆ Θ_Y that contain Y_p, where |Y| is the number of elements contained in Y;
Step 202: calculate the mean credibility of the intention over all training samples, and then calculate the information entropy of the mean intention credibility;
Step 203: calculate the information entropy of the attribute features from the number of samples in which attribute X_i takes each fuzzy class F_j;
Step 204: for the samples in which attribute X_i takes the fuzzy class F_j, calculate the evidence distances (d_1, d_2, ..., d_w) between each sample's intention basic probability assignment [m(Y_1), m(Y_2), ..., m(Y_1, Y_2, ..., Y_w)] and the categorical assignments [m(Y_1) = 1], [m(Y_2) = 1], ..., [m(Y_w) = 1]; the evidence distance is d(m_1, m_2) = sqrt(0.5 · (m_1 - m_2)^T · D · (m_1 - m_2)), where m_1 and m_2 are treated as vectors indexed by their focal elements, D is a 2^N × 2^N matrix whose rows correspond to m_1 and whose columns correspond to m_2, and each element of D equals |A ∩ B| / |A ∪ B|, A and B being focal elements of m_1 and m_2 respectively; calculate the similarities [(1 - d_1), (1 - d_2), ..., (1 - d_w)], and then the mean and variance of the similarities over all samples in which attribute X_i takes the fuzzy class F_j, [(μ_1, δ_1), (μ_2, δ_2), ..., (μ_w, δ_w)];
Step 205: calculate the variance-weighted means of the similarities and normalize them to obtain (U_1, U_2, ..., U_w); from these, calculate the information entropy of the intention under attribute X_i;
Step 206: calculate the information gain ratio of the i-th attribute; calculate the information gain ratios of all v attributes and select the attribute with the largest information gain ratio as the splitting attribute of the credibility decision tree;
Step 207: repeat steps 201-206 each time a splitting attribute is selected, until only one training sample, or several samples of the same class, remain; the leaf node then stores the basic probability assignment of the intention of those samples;
Step three, constructing the evidence forest based on the credibility decision trees:
Step 301: use sampling with replacement (bootstrap sampling) so that each tree of the evidence forest is trained on a different set of training samples, which avoids overfitting;
Step 302: for the training samples obtained in step 301, randomly select m1 of the v attributes without replacement; the samples restricted to the selected attributes are used as the training data of one credibility decision tree;
Step 303: set the number N of credibility decision trees in the evidence forest and build the credibility decision trees that form the evidence forest according to step two;
Step 304: calculate the weight of each tree in the evidence forest: each credibility decision tree yields a recognition result ms(l) = [m_s'(Y_1), m_s'(Y_2), ..., m_s'(Y_1, Y_2, ..., Y_w)]_l, l = 1, 2, ..., N, for a training sample; calculate the evidence distances (D_1, D_2, ..., D_N) between the recognition results ms(l) and the sample's true intention [m(Y_1), m(Y_2), ..., m(Y_1, Y_2, ..., Y_w)], the similarities [(1 - D_1), (1 - D_2), ..., (1 - D_N)], and the average similarity s(l) of each tree over all samples;
Step 305: normalize the average similarities; from the normalized similarities, calculate the likelihood measure p(A_1) and the necessity measure q(A_1), where A_1 ∈ [1, 2, ..., (1,2), (1,3), ..., (1,2,...,N)] denotes a subset of the credibility decision trees;
Step 306: solve for the weight W_l of each credibility decision tree: from the likelihood and necessity measures obtained in step 305, form for each tree subset A_2 = [(o_1), (o_2), ...] the weight constraints q(A_2) ≤ W(o_1) + W(o_2) + ... ≤ p(A_2), together with W(o_1) + W(o_2) + ... + W(o_N) = 1 over all N trees; fuse the recognition results of all trees into Ms by weighted averaging, calculate the pignistic probability distribution BetP[Ms(j)] of the intention in Ms with the formula of step 201, and calculate the information entropy Info(Ms); solve, under the constraints, for the optimal weights (W_1, W_2, ..., W_N) that minimize Info(Ms), and use them as the weights of the trees;
Step four, performing intention recognition with the constructed evidence forest:
Step 401: process the original sample to be tested as in step 102 to obtain the processed sample (m_1, m_2, ..., m_v), in which each attribute m_i is a basic probability assignment;
Step 402: expand the attribute recognition frame to include the multi-subsets, and expand the sample to be tested into its joint focal elements χ (one focal element per attribute), whose basic probability values m(χ) are the products of the corresponding attribute masses; traverse each joint focal element χ down the generated credibility decision tree to the corresponding leaf node and read its intention probability assignment m_Y = [m(Y_1), m(Y_2), ..., m(Y_1, Y_2, ..., Y_w)]; if a leaf node stores several intention assignments, let m_χ^Y be their average, where |m_Y| denotes the number of intention probability assignments under the leaf node;
Step 403: calculate the belief function values of the sample to be tested m_T with respect to intention Y_p according to Bel[m_T](Y_p) = Σ Bel[m_χ^Y](Y_p) · m(χ), χ ∈ Θ_F; that is, for each joint focal element χ, the belief Bel[m_χ^Y](Y_p) of the corresponding leaf node's assignment m_χ^Y toward intention Y_p is multiplied by the basic probability value m(χ) of the joint focal element, and the products are summed over the joint focal elements to obtain the belief of m_T in intention Y_p; if one joint focal element corresponds to several leaf nodes of the credibility decision tree, the belief of those leaf nodes toward Y_p is Bel[m_χ^Y](Y_p) = Bel{m[χ_1]^Y}(Y_p) ∨ Bel{m[χ_2]^Y}(Y_p), where ∨ denotes the corresponding combination formula; the belief function is calculated as Bel(A) = Σ m(B) over all B ⊆ A;
Step 404: obtain, according to step 403, the belief functions of the sample to be tested m_T for the intentions, Bel[m_T](Y_p), Y_p ∈ [Y_1, Y_2, ..., (Y_1, Y_2), ..., (Y_1, Y_2, ..., Y_w)];
convert the belief functions into basic probability assignments t(Y)_l = [m(Y_1), m(Y_2), ..., m(Y_1, Y_2, ..., Y_w)], l = 1, 2, ..., N, and fuse the recognition results of the trees with the weights obtained in step 306;
Step 405: self-combine the fused result N - 1 times according to the evidence combination rule ⊕, where K denotes the conflict coefficient of the combination, to obtain the final intention recognition result.
Compared with the prior art, the invention has the following advantages:
1. The invention can solve the problem of target intention recognition with uncertain information.
2. The invention provides a method for calculating the information entropy of uncertain information based on evidence similarity, which solves the entropy-calculation problem for uncertain information.
3. The invention provides a credibility decision tree weight calculation method based on likelihood theory, so that the weight of each tree is calculated more reasonably.
In summary, by establishing the evidence forest model and feeding the data to be tested, which carry uncertain information, into the model to obtain the intention recognition result of the target, the invention well solves the problem of target intention recognition with uncertain information and fully improves the accuracy of intention recognition.
The technical scheme of the invention is further described in detail through the drawings and the embodiments.
Drawings
FIG. 1 is a block flow diagram of the present invention.
FIG. 2 is a graph of the Gaussian fuzzy numbers of the velocity attribute.
FIG. 3 is a schematic diagram of a credibility decision tree in the evidence forest.
Table 1 shows the probability distribution data of one piece of training data after all attributes have been converted.
Table 2 shows the data of the fuzzy classes of the different attributes.
Table 3 shows the intentions of all samples with the same attribute fuzzy class combination after addition and combination.
Detailed Description
The process of the present invention is described in further detail below with reference to examples.
As shown in FIG. 1, the present invention includes the following steps.
Step one, data preprocessing:
Step 101: input n target training samples, each comprising an attribute feature vector X = [X_1, X_2, ..., X_v], where v is the number of attributes and X_i (i = 1, 2, ..., v) is the measured value of the i-th attribute feature; the recognition frame corresponding to the intention Y of each sample is Θ_Y = {Y_1, Y_2, ..., Y_w}, where w is the number of intentions in the recognition frame.
In actual use, the attribute features of the target include {velocity X_1, distance X_2, height X_3, heading angle X_4}, and the intention categories include {attack Y_1, reconnaissance Y_2, cruising Y_3, withdrawal Y_4, other Y_5}.
Step 102: establish Gaussian fuzzy numbers f[C(μ, δ)] for each attribute feature, where μ and δ are the mean and variance of the Gaussian fuzzy number C, and the attribute recognition frame is Θ_F = {F_1, F_2, ..., F_k}, where k is the number of fuzzy classes in each attribute recognition frame; compute the Gaussian fuzzy membership values f(X_i) of each attribute measurement X_i, derive from them the single-subset masses m'(F_j) and the double-subset masses m'(F_j, F_{j+1}), and normalize them to obtain the basic probability assignment m_i of the attribute; in this way the attribute measurements (X_1, X_2, ..., X_v) of the n samples are converted into the form of basic probability assignments (m_1, m_2, ..., m_v)_n.
In actual use, three Gaussian fuzzy numbers are established for each attribute. Taking velocity as an example, the Gaussian membership functions are established as shown in FIG. 2; each Gaussian fuzzy number represents a fuzzy semantic: the curve F_1 denotes a low speed, F_2 a medium speed, and F_3 a high speed. With the speed of the target set to 280 km/h, the basic probability assignment of the speed is calculated accordingly; the other attributes are handled in the same way.
step 103: according to the formulaWill->Conversion into probability distribution->Converting all attribute characteristics of n samples to obtain
Taking the basic probability distribution after the velocity conversion in step 102 as an example, the probability distribution { P (F) 1 )=0,P(F 2 )=0.069,P(F 3 )=0.931};
Step 104: take the intention as Y p Different properties of the samples of (a)Comprises P (F) j ) Product combination is performedThe identification frame elements are combined intoThe combined intent can be expressed as +.>After combining all samples, the attribute identification framework element +.>Combining identical samples +.>Adding and combining to obtain +.>Will->Normalized to->
Table 1 shows probability distribution data of a piece of training data after all attributes are converted, and the combined data is shown in table 2 according to the formula;
step 105: according to the formulaConstructing multi-subset beliefs for intent labels, z represents (Y 1 ,Y 2 ,…,Y w1 ) The multiple subsets generated above are compared with the single subset [ P' (Y) 1 ),P'(Y 2 ),…,P'(Y w )]Composition [ P' (Y) 1 ),P'(Y 2 ),…,P'(Y 1 ,Y 2 ,…,Y w )]Normalizing to obtain the basic probability distribution m of the intention Y =[m(Y 1 ),m(Y 2 ),…,m(Y 1 ,Y 2 ,…,Y w )];
Adding and combining all samples, and obtaining data according to the formula shown in a table 3;
step 106: attribute feature recognition framework element combinationAnd probability distribution m of intent Y =[m(Y 1 ),m(Y 2 ),…,m(Y 1 ,Y 2 ,…,Y w )]Data for n1 training models, n1<n;
Step two, constructing a credibility decision tree:
step 201: according to the formulaCalculating a probability distribution of intent of each training sample, where |Y| represents the number of elements contained in Y;
step 202: according to the formulaCalculating the mean confidence level of the intention of all training samples and then according to the formula +.>Calculating the information entropy of the intention average credibility;
step 203: according to the formulaCalculating information entropy of attribute features, wherein ∈>Representing attribute X i At F j The number of samples;
step 204: calculating an attribute X i Take value to fuzzy class F j The underlying intended base probability distribution of each sample [ m (Y 1 ),m(Y 2 ),…,m(Y 1 ,Y 2 ,…,Y w )]Couple [ m (Y) 1 )=1],[m(Y 2 )=1],…,[m(Y w )=1]Evidence distance (d) 1 ,d 2 ,…,d w ) The evidence distance calculation formula is that And->Representing two basic probability distributions m 1 And m 2 Vectors of constitution>Is 2 N ×2 N Matrix of->Corresponding to m 1 The column label corresponds to m 2 ,/>Each element of +.>A and B are each m 1 And m 2 The focal elements in (1-d) are used to calculate the similarity 1 ),(1-d 2 ),…,(1-d w )]Calculate attribute X i Take value to fuzzy class F j Mean and variance of similarity of all samples [ (mu) 11 ),(μ 22 ),…,(μ ww )];
Step 205: according to the formulaCalculating the mean value after variance weighting, and normalizing to obtain (U 1 ,U 2 ,…,U w ) According to the formula->Calculating an attribute X i Information entropy of intention;
step 206: according to the formulaCalculating the information gain ratio of the ith attribute, calculating the information gain ratio of all v attributes, and selecting the attribute with the largest information gain ratio as the splitting attribute of the credibility decision tree;
step 207: repeating steps 201-206 each time a split attribute is selected until only one training sample or a plurality of samples of the same class remain, as a basic probability distribution of the intention of the leaf nodes to store the samples;
in practice, it is used for promoting tissue regenerationThe resulting confidence decision tree is shown in FIG. 2, where m { I } i I=1, 2, …, n represents the basic probability distribution of the intention of the samples deposited in the leaf nodes;
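A sketch of the evidence distance used in step 204 (and again in step 304) is given below; the distance follows the definition above, d = sqrt(0.5 · (m_1 - m_2)^T · D · (m_1 - m_2)) with matrix elements |A ∩ B| / |A ∪ B|, while the three-intention frame and the sample masses are illustrative assumptions:

```python
import math
from itertools import combinations

def powerset(frame):
    """All non-empty subsets of the frame, used to index the BPA vectors."""
    subsets = []
    for r in range(1, len(frame) + 1):
        subsets.extend(frozenset(c) for c in combinations(frame, r))
    return subsets

def evidence_distance(m1, m2, frame):
    """Evidence distance between two BPAs given as {frozenset: mass} dicts,
    d = sqrt(0.5 * (m1 - m2)^T D (m1 - m2)) with D(A, B) = |A n B| / |A u B|."""
    subs = powerset(frame)
    diff = [m1.get(s, 0.0) - m2.get(s, 0.0) for s in subs]
    quad = 0.0
    for i, A in enumerate(subs):
        for j, B in enumerate(subs):
            quad += diff[i] * diff[j] * len(A & B) / len(A | B)
    return math.sqrt(0.5 * max(quad, 0.0))   # clamp tiny negative rounding error

frame = ("Y1", "Y2", "Y3")                        # illustrative three-intention frame
sample_bpa = {frozenset(["Y1"]): 0.6,             # a sample's intention BPA (illustrative)
              frozenset(["Y1", "Y2"]): 0.3,
              frozenset(["Y1", "Y2", "Y3"]): 0.1}
categorical_Y1 = {frozenset(["Y1"]): 1.0}         # the categorical assignment [m(Y1) = 1]

d1 = evidence_distance(sample_bpa, categorical_Y1, frame)
print(d1, 1.0 - d1)   # evidence distance d1 and the similarity 1 - d1 used in steps 204 and 304
```

The variance-weighted similarity means of step 205 and the information gain ratios of step 206 are then computed on top of these similarities; since their exact formulas are only summarized above, they are not reproduced in this sketch.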
Step three, constructing the evidence forest based on the credibility decision trees:
Step 301: use sampling with replacement (bootstrap sampling) so that each tree of the evidence forest is trained on a different set of training samples, which avoids overfitting.
Step 302: for the training samples obtained in step 301, randomly select m1 of the v attributes without replacement; the samples restricted to the selected attributes are used as the training data of one credibility decision tree.
Step 303: set the number N of credibility decision trees in the evidence forest and build the credibility decision trees that form the evidence forest according to step two.
Step 304: calculate the weight of each tree in the evidence forest: each credibility decision tree yields a recognition result ms(l) = [m_s'(Y_1), m_s'(Y_2), ..., m_s'(Y_1, Y_2, ..., Y_w)]_l, l = 1, 2, ..., N, for a training sample; calculate the evidence distances (D_1, D_2, ..., D_N) between the recognition results ms(l) and the sample's true intention [m(Y_1), m(Y_2), ..., m(Y_1, Y_2, ..., Y_w)], the similarities [(1 - D_1), (1 - D_2), ..., (1 - D_N)], and the average similarity s(l) of each tree over all samples.
Step 305: normalize the average similarities; from the normalized similarities, calculate the likelihood measure p(A_1) and the necessity measure q(A_1), where A_1 ∈ [1, 2, ..., (1,2), (1,3), ..., (1,2,...,N)] denotes a subset of the credibility decision trees.
Step 306: solve for the weight W_l of each credibility decision tree: from the likelihood and necessity measures obtained in step 305, form for each tree subset A_2 = [(o_1), (o_2), ...] the weight constraints q(A_2) ≤ W(o_1) + W(o_2) + ... ≤ p(A_2), together with W(o_1) + W(o_2) + ... + W(o_N) = 1 over all N trees; fuse the recognition results of all trees into Ms by weighted averaging, calculate the pignistic probability distribution BetP[Ms(j)] of the intention in Ms with the formula of step 201, and calculate the information entropy Info(Ms); solve, under the constraints, for the optimal weights (W_1, W_2, ..., W_N) that minimize Info(Ms), and use them as the weights of the trees.
In actual use, assuming N = 3 and normalized similarities {s(1) = 1, s(2) = 0.3, s(3) = 0.6}, then for A = [(1, 3)] the measures p(A) = 1 and q(A) = 0.7 are obtained, giving the constraint 0.7 ≤ W(1) + W(3) ≤ 1; all constraints are computed in the same way, and the optimal solution is found according to the optimization objective.
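The following sketch reproduces the numbers of this N = 3 example. It interprets the likelihood measure as the maximum normalized similarity over the trees in a subset and the necessity measure as one minus the maximum similarity outside the subset; this interpretation is an assumption checked only against the example (it yields p(A) = 1 and q(A) = 0.7 for A = (1, 3)) and is not the patent's own formula.

```python
from itertools import combinations

# Normalized average similarities of the three trees, taken from the example above.
s = {1: 1.0, 2: 0.3, 3: 0.6}
trees = sorted(s)

def likelihood(subset):
    """Assumed likelihood measure: largest similarity among the trees in the subset."""
    return max(s[t] for t in subset)

def necessity(subset):
    """Assumed necessity measure: 1 minus the largest similarity outside the subset."""
    outside = [s[t] for t in trees if t not in subset]
    return 1.0 - max(outside) if outside else 1.0

# Step 306 constraints: q(A) <= sum of W(t) over t in A <= p(A), plus W(1) + ... + W(N) = 1.
for r in range(1, len(trees) + 1):
    for subset in combinations(trees, r):
        lhs = " + ".join(f"W({t})" for t in subset)
        print(f"{necessity(subset):.1f} <= {lhs} <= {likelihood(subset):.1f}")
```

The optimal weights are then those that minimize the information entropy Info(Ms) of the weighted-average fusion under these constraints, which can be handed to any constrained optimizer; that final optimization step is not sketched here.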
Step four, performing intention recognition with the constructed evidence forest:
Step 401: process the original sample to be tested as in step 102 to obtain the processed sample (m_1, m_2, ..., m_v), in which each attribute m_i is a basic probability assignment.
Step 402: expand the attribute recognition frame to include the multi-subsets, and expand the sample to be tested into its joint focal elements χ (one focal element per attribute), whose basic probability values m(χ) are the products of the corresponding attribute masses; traverse each joint focal element χ down the generated credibility decision tree to the corresponding leaf node and read its intention probability assignment m_Y = [m(Y_1), m(Y_2), ..., m(Y_1, Y_2, ..., Y_w)]; if a leaf node stores several intention assignments, let m_χ^Y be their average, where |m_Y| denotes the number of intention probability assignments under the leaf node.
Step 403: calculate the belief function values of the sample to be tested m_T with respect to intention Y_p according to Bel[m_T](Y_p) = Σ Bel[m_χ^Y](Y_p) · m(χ), χ ∈ Θ_F; that is, for each joint focal element χ, the belief Bel[m_χ^Y](Y_p) of the corresponding leaf node's assignment m_χ^Y toward intention Y_p is multiplied by the basic probability value m(χ) of the joint focal element, and the products are summed over the joint focal elements to obtain the belief of m_T in intention Y_p; if one joint focal element corresponds to several leaf nodes of the credibility decision tree, the belief of those leaf nodes toward Y_p is Bel[m_χ^Y](Y_p) = Bel{m[χ_1]^Y}(Y_p) ∨ Bel{m[χ_2]^Y}(Y_p), where ∨ denotes the corresponding combination formula; the belief function is calculated as Bel(A) = Σ m(B) over all B ⊆ A.
Step 404: obtain, according to step 403, the belief functions of the sample to be tested m_T for the intentions, Bel[m_T](Y_p), Y_p ∈ [Y_1, Y_2, ..., (Y_1, Y_2), ..., (Y_1, Y_2, ..., Y_w)]; convert the belief functions into basic probability assignments t(Y)_l = [m(Y_1), m(Y_2), ..., m(Y_1, Y_2, ..., Y_w)], l = 1, 2, ..., N, and fuse the recognition results of the trees with the weights obtained in step 306.
Step 405: self-combine the fused result N - 1 times according to the evidence combination rule ⊕, where K denotes the conflict coefficient of the combination, to obtain the final recognition result.
For example, a sample to be tested is expanded into two joint focal elements χ_1 and χ_2 with m(χ_1) = 0.2 and m(χ_2) = 0.8; under the credibility decision tree of FIG. 3, the leaf-node samples corresponding to the two joint focal elements are I_7 and I_4, I_5, with I_7 = {m(Y_1) = 0, m(Y_2) = 0.4, m(Y_3) = 0.6, m(Y_4) = 0, m(Y_5) = 0}, I_4 = {m(Y_1) = 0, m(Y_2) = 0, m(Y_3) = 0.3, m(Y_4) = 0.7, m(Y_5) = 0} and I_5 = {m(Y_1) = 0, m(Y_2) = 0, m(Y_3) = 0, m(Y_4) = 0.7, m(Y_5) = 0.3}; because the leaf node corresponding to χ_2 contains two basic probability assignments, they are averaged, giving m_{χ_2}^Y = {m(Y_1) = 0, m(Y_2) = 0, m(Y_3) = 0.15, m(Y_4) = 0.7, m(Y_5) = 0.15}; then Bel[m_T](Y_1) = 0.2 × 0 + 0.8 × 0 = 0, and Bel[m_T](Y_2), Bel[m_T](Y_3), Bel[m_T](Y_4) and Bel[m_T](Y_5) are calculated in the same way; the result is then converted into the basic probability assignment of the intention, and finally the recognition result of the evidence forest is calculated according to the formulas above.
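The worked example can be reproduced numerically as follows. The leaf assignments I_7, I_4, I_5 and the joint-focal masses 0.2 and 0.8 are taken from the example above; treating the belief of a leaf toward a single intention Y_p as its mass m(Y_p) follows from Bel(A) = Σ m(B) over B ⊆ A for singleton A; and the self-combination uses Dempster's rule restricted to singleton focal elements, which is one common reading of the evidence combination operator ⊕ that the text names but does not spell out.

```python
INTENTIONS = ("Y1", "Y2", "Y3", "Y4", "Y5")

I7 = {"Y1": 0.0, "Y2": 0.4, "Y3": 0.6, "Y4": 0.0, "Y5": 0.0}
I4 = {"Y1": 0.0, "Y2": 0.0, "Y3": 0.3, "Y4": 0.7, "Y5": 0.0}
I5 = {"Y1": 0.0, "Y2": 0.0, "Y3": 0.0, "Y4": 0.7, "Y5": 0.3}

# Step 402: chi_1 reaches leaf I7; the leaf reached by chi_2 stores two assignments,
# which are averaged.
leaf_chi1 = I7
leaf_chi2 = {y: (I4[y] + I5[y]) / 2 for y in INTENTIONS}
m_chi = {"chi1": 0.2, "chi2": 0.8}

# Step 403: Bel[m_T](Y_p) = sum over joint focal elements of m(chi) * belief of its leaf toward Y_p.
bel_T = {y: m_chi["chi1"] * leaf_chi1[y] + m_chi["chi2"] * leaf_chi2[y] for y in INTENTIONS}
print(bel_T)   # Bel[m_T](Y1) = 0.2 * 0 + 0.8 * 0 = 0, as in the example

def dempster(m1, m2):
    """Dempster's rule over singleton focal elements (assumed form of the combination operator)."""
    conflict = sum(m1[a] * m2[b] for a in INTENTIONS for b in INTENTIONS if a != b)
    return {a: m1[a] * m2[a] / (1.0 - conflict) for a in INTENTIONS}

# Step 405 (sketch): self-combine the fused result; with N trees this is done N - 1 times.
print(dempster(bel_T, bel_T))
```

Since this example involves the output of a single tree, the weighted fusion across trees of step 404 is omitted from the sketch.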
The foregoing is merely an embodiment of the present invention, and the present invention is not limited thereto, and any simple modification, variation and equivalent structural changes made to the foregoing embodiment according to the technical matter of the present invention still fall within the scope of the technical solution of the present invention.

Claims (1)

1. The evidence forest-based information uncertainty condition target intention recognition method is characterized by comprising the following steps:
Step one, data preprocessing:
Step 101: inputting n target training samples, each comprising an attribute feature vector X = [X_1, X_2, ..., X_v], wherein v is the number of attributes and X_i (i = 1, 2, ..., v) is the measured value of the i-th attribute feature; the recognition frame corresponding to the intention Y of each sample is Θ_Y = {Y_1, Y_2, ..., Y_w}, wherein w is the number of intentions in the recognition frame;
Step 102: establishing Gaussian fuzzy numbers f[C(μ, δ)] for each attribute feature, wherein μ and δ are the mean and variance of the Gaussian fuzzy number C, and the attribute recognition frame is Θ_F = {F_1, F_2, ..., F_k}, wherein k is the number of fuzzy classes in each attribute recognition frame; calculating the Gaussian fuzzy membership values f(X_i) of each attribute measurement X_i, deriving from them the single-subset masses m'(F_j) and the double-subset masses m'(F_j, F_{j+1}), i = 1, 2, ..., v, j = 1, 2, ..., k, and normalizing them to obtain the basic probability assignment m_i of the attribute, so that the attribute measurements (X_1, X_2, ..., X_v) of the n samples are converted into the form of basic probability assignments (m_1, m_2, ..., m_v)_n;
Step 103: converting each attribute basic probability assignment m_i into a probability distribution P_i = [P(F_1), P(F_2), ..., P(F_k)]; converting all attribute features of the n samples to obtain (P_1, P_2, ..., P_v)_n;
Step 104: for the samples whose intention is Y_p, performing a product combination of the attribute probabilities P_i(F_j), so that the attribute recognition frame elements are combined into joint elements and the combined intention is expressed as the product of the corresponding attribute probabilities; after all samples have been combined, adding together the values of samples that share the same joint attribute recognition frame element, and normalizing the sums to obtain P'(Y_p) for each joint element;
Step 105: constructing multi-subset beliefs for the intention labels, wherein z indexes the multi-subsets generated over (Y_1, Y_2, ..., Y_w); the generated multi-subsets together with the single subsets [P'(Y_1), P'(Y_2), ..., P'(Y_w)] form [P'(Y_1), P'(Y_2), ..., P'(Y_1, Y_2, ..., Y_w)], which is normalized to obtain the basic probability assignment of the intention, m_Y = [m(Y_1), m(Y_2), ..., m(Y_1, Y_2, ..., Y_w)];
Step 106: using the joint attribute recognition frame elements and the intention probability assignments m_Y = [m(Y_1), m(Y_2), ..., m(Y_1, Y_2, ..., Y_w)] of n1 samples, n1 < n, as the data for training the model;
Step two, constructing the credibility decision tree:
Step 201: calculating the pignistic probability distribution of the intention of each training sample, BetP(Y_p) = Σ m(Y)/|Y| summed over all subsets Y ⊆ Θ_Y that contain Y_p, wherein |Y| is the number of elements contained in Y;
Step 202: calculating the mean credibility of the intention over all training samples, and then calculating the information entropy of the mean intention credibility;
Step 203: calculating the information entropy of the attribute features from the number of samples in which attribute X_i takes each fuzzy class F_j;
Step 204: for the samples in which attribute X_i takes the fuzzy class F_j, calculating the evidence distances (d_1, d_2, ..., d_w) between each sample's intention basic probability assignment [m(Y_1), m(Y_2), ..., m(Y_1, Y_2, ..., Y_w)] and the categorical assignments [m(Y_1) = 1], [m(Y_2) = 1], ..., [m(Y_w) = 1], the evidence distance being d(m_1, m_2) = sqrt(0.5 · (m_1 - m_2)^T · D · (m_1 - m_2)), wherein m_1 and m_2 are treated as vectors indexed by their focal elements, D is a 2^N × 2^N matrix whose rows correspond to m_1 and whose columns correspond to m_2, and each element of D equals |A ∩ B| / |A ∪ B|, A and B being focal elements of m_1 and m_2 respectively; calculating the similarities [(1 - d_1), (1 - d_2), ..., (1 - d_w)], and then the mean and variance of the similarities over all samples in which attribute X_i takes the fuzzy class F_j, [(μ_1, δ_1), (μ_2, δ_2), ..., (μ_w, δ_w)];
Step 205: calculating the variance-weighted means of the similarities and normalizing them to obtain (U_1, U_2, ..., U_w), and from these calculating the information entropy of the intention under attribute X_i;
Step 206: calculating the information gain ratio of the i-th attribute, calculating the information gain ratios of all v attributes, and selecting the attribute with the largest information gain ratio as the splitting attribute of the credibility decision tree;
Step 207: repeating steps 201-206 each time a splitting attribute is selected, until only one training sample, or several samples of the same class, remain, the leaf node then storing the basic probability assignment of the intention of those samples;
Step three, constructing the evidence forest based on the credibility decision trees:
Step 301: adopting sampling with replacement so that each tree of the evidence forest is trained on a different set of training samples, thereby avoiding overfitting;
Step 302: for the training samples obtained in step 301, randomly selecting m1 of the v attributes without replacement, the samples restricted to the selected attributes being used as the training data of one credibility decision tree;
Step 303: setting the number N of credibility decision trees in the evidence forest and building the credibility decision trees that form the evidence forest according to step two;
Step 304: calculating the weight of each tree in the evidence forest, wherein each credibility decision tree yields a recognition result ms(l) = [m_s'(Y_1), m_s'(Y_2), ..., m_s'(Y_1, Y_2, ..., Y_w)]_l, l = 1, 2, ..., N, for a training sample; calculating the evidence distances (D_1, D_2, ..., D_N) between the recognition results ms(l) and the sample's true intention [m(Y_1), m(Y_2), ..., m(Y_1, Y_2, ..., Y_w)], the similarities [(1 - D_1), (1 - D_2), ..., (1 - D_N)], and the average similarity s(l) of each tree over all samples;
Step 305: normalizing the average similarities and, from the normalized similarities, calculating the likelihood measure p(A_1) and the necessity measure q(A_1), wherein A_1 ∈ [1, 2, ..., (1,2), (1,3), ..., (1,2,...,N)] denotes a subset of the credibility decision trees;
Step 306: solving for the weight W_l of each credibility decision tree: from the likelihood and necessity measures obtained in step 305, forming for each tree subset A_2 = [(o_1), (o_2), ...] the weight constraints q(A_2) ≤ W(o_1) + W(o_2) + ... ≤ p(A_2), together with W(o_1) + W(o_2) + ... + W(o_N) = 1 over all N trees; fusing the recognition results of all trees into Ms by weighted averaging, calculating the pignistic probability distribution BetP[Ms(j)] of the intention in Ms with the formula of step 201 and the information entropy Info(Ms), and solving, under the constraints, for the optimal weights (W_1, W_2, ..., W_N) that minimize Info(Ms) as the weights of the trees;
Step four, performing intention recognition by using the constructed evidence forest:
Step 401: processing the original sample to be tested as in step 102 to obtain the processed sample (m_1, m_2, ..., m_v), wherein each attribute m_i is a basic probability assignment;
Step 402: expanding the attribute recognition frame to include the multi-subsets and expanding the sample to be tested into its joint focal elements χ, whose basic probability values m(χ) are the products of the corresponding attribute masses; traversing each joint focal element χ down the generated credibility decision tree to the corresponding leaf node and reading its intention probability assignment m_Y = [m(Y_1), m(Y_2), ..., m(Y_1, Y_2, ..., Y_w)]; if a leaf node stores several intention assignments, taking m_χ^Y as their average, wherein |m_Y| denotes the number of intention probability assignments under the leaf node;
Step 403: calculating the belief function values of the sample to be tested m_T with respect to intention Y_p according to Bel[m_T](Y_p) = Σ Bel[m_χ^Y](Y_p) · m(χ), χ ∈ Θ_F, i.e. for each joint focal element χ, multiplying the belief Bel[m_χ^Y](Y_p) of the corresponding leaf node's assignment m_χ^Y toward intention Y_p by the basic probability value m(χ) of the joint focal element and summing the products over the joint focal elements to obtain the belief of m_T in intention Y_p; if one joint focal element corresponds to several leaf nodes of the credibility decision tree, the belief of those leaf nodes toward Y_p is Bel[m_χ^Y](Y_p) = Bel{m[χ_1]^Y}(Y_p) ∨ Bel{m[χ_2]^Y}(Y_p), wherein ∨ denotes the corresponding combination formula, and the belief function is calculated as Bel(A) = Σ m(B) over all B ⊆ A;
Step 404: obtaining, according to step 403, the belief functions of the sample to be tested m_T for the intentions, Bel[m_T](Y_p), Y_p ∈ [Y_1, Y_2, ..., (Y_1, Y_2), ..., (Y_1, Y_2, ..., Y_w)], converting the belief functions into basic probability assignments t(Y)_l = [m(Y_1), m(Y_2), ..., m(Y_1, Y_2, ..., Y_w)], l = 1, 2, ..., N, and fusing the recognition results of the trees with the weights obtained in step 306;
Step 405: self-combining the fused result N - 1 times according to the evidence combination rule ⊕, wherein K denotes the conflict coefficient of the combination, to obtain the final intention recognition result.
CN202110605856.1A 2021-06-01 2021-06-01 Evidence forest-based information uncertainty condition target intention recognition method Active CN113177615B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110605856.1A CN113177615B (en) 2021-06-01 2021-06-01 Evidence forest-based information uncertainty condition target intention recognition method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110605856.1A CN113177615B (en) 2021-06-01 2021-06-01 Evidence forest-based information uncertainty condition target intention recognition method

Publications (2)

Publication Number Publication Date
CN113177615A CN113177615A (en) 2021-07-27
CN113177615B 2024-01-19

Family

ID=76927198

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110605856.1A Active CN113177615B (en) 2021-06-01 2021-06-01 Evidence forest-based information uncertainty condition target intention recognition method

Country Status (1)

Country Link
CN (1) CN113177615B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107871138A (en) * 2017-11-01 2018-04-03 电子科技大学 A kind of target intention recognition methods based on improvement D S evidence theories
WO2018107492A1 (en) * 2016-12-16 2018-06-21 深圳大学 Intuitionistic fuzzy random forest-based method and device for target tracking

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012138855A1 (en) * 2011-04-05 2012-10-11 The Trustees Of Columbia University In The City Of New York B-matching using sufficient selection belief propagation

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018107492A1 (en) * 2016-12-16 2018-06-21 深圳大学 Intuitionistic fuzzy random forest-based method and device for target tracking
CN107871138A (en) * 2017-11-01 2018-04-03 电子科技大学 A kind of target intention recognition methods based on improvement D S evidence theories

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"A new method to air target threat evaluation based on Dempster-Shafer evidence theory";Haibin Liu et al.;《2018 Chinese Control And Decision Conference (CCDC)》;全文 *
"基于置信规则库和证据推理的空中目标意图识别方法";赵福均等;《电光与控制》;第24卷(第8期);全文 *

Also Published As

Publication number Publication date
CN113177615A (en) 2021-07-27


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant