CN111563596A - Uncertain information reasoning target identification method based on evidence network - Google Patents
- Publication number: CN111563596A (application CN202010320253.2A)
- Authority: CN (China)
- Prior art keywords: space, plausibility, function, probability distribution, information
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N5/00—Computing arrangements using knowledge-based models
- G06N5/04—Inference or reasoning models
- G06N5/048—Fuzzy inferencing
Abstract
The invention discloses an uncertain information reasoning target identification method based on an evidence network, relating to the field of target identification. The method establishes a new evidence network reasoning model by taking the known information as a determined space and the information to be determined as an uncertain space, generates a joint plausibility function from inference information given in plausibility form, generates a joint basic probability distribution function according to the minimum specificity principle, and generates a probability distribution from the basic probability distribution function obtained by projecting the joint basic probability distribution function, thereby realizing target identification. The evidence network reasoning model provided by the invention can reason even when the known information and the inference information are incomplete; the plausibility-form information representation provided by the invention realizes effective processing of fuzzy information; and the target identification method provided by the invention can identify the type of a flying object.
Description
Technical Field
The invention belongs to the field of target identification, and particularly relates to an uncertain information reasoning target identification method based on an evidence network.
Background
Target identification is an important component of modern air-defense operations, and the reliability and precision of identification results are key indicators of how advanced an air-defense system is. Target identification mainly performs fusion reasoning on the target characteristic information obtained by various sensors to arrive at an accurate description of the target.
In recent years, the main research direction in the target identification field, both in China and abroad, has been how to extract target feature quantities and use them to distinguish different targets; multi-sensor fusion identification is generally adopted to obtain a more complete description of targets and improve identification accuracy. Common identification methods include D-S evidence theory, the Bayesian criterion, neural networks, expert systems, etc. However, these methods usually require a uniform identification framework, complete prior probabilities, and the like, which are demanding conditions. Owing to factors such as the severe battlefield environment and sensor diversity, the data obtained by sensors may be discrete or continuous and the information fuzzy and uncertain, and the different levels of information obtained by different types of sensors are difficult to represent with a uniform identification framework, so aerial target identification is an uncertain reasoning process. In this case, conventional probabilistic reasoning algorithms for discrete variables or linear-Gaussian continuous variables are limited. Evidence networks have gained widespread attention because they can combine multiple pieces of evidence for uncertainty expression and reasoning, and they are currently among the most effective theoretical models in the field of uncertain knowledge expression and reasoning. Because D-S evidence theory performs well in representing uncertain knowledge, its theory and applications have developed rapidly in recent years, and it plays an important role in multi-sensor information fusion, medical diagnosis, military command and target identification.
Evidence theory has many advantages; applying it in target identification allows the uncertain information present in equipment sensor signals to be handled more effectively.
Disclosure of Invention
In order to realize target identification, the invention provides a target identification method based on an uncertain information reasoning technique over an evidence reasoning network. With this method, uncertain information in sensor signals can be better processed and the identified target can be judged accurately.
The invention aims to solve the technical problem that equipment sensor signals carry a certain degree of uncertainty in target identification; the adopted technical scheme comprises the following steps:
The method comprises the following steps:
Step one: input the determined space E_e of the evidence reasoning network. The determined space is the set of parent-node identification frames of the evidence reasoning network, which comprises k identification frames Θ_i, i = 1, 2, ..., k. In evidence theory, the basic probability distribution function is defined as follows: if for any subset A of Θ = {F_1, F_2, ..., F_n} we have m(A) ∈ [0, 1], m(∅) = 0 and Σ_{A⊆Θ} m(A) = 1, then m is a basic probability distribution function on 2^Θ, where 2^Θ is the power set of the identification frame. The set of child-node identification frames of the evidence reasoning network is taken as the uncertain space E_s; a complete evidence reasoning network is thus constructed.
Step two: convert the basic probability distribution function on the determined space E_e into a plausibility function. The conversion is as follows:
Using the formula Pl(A) = Σ_{B: B∩A≠∅} m(B), the basic probability masses of all subsets intersecting A are summed to obtain the plausibility function, i.e. the maximum probability that the subset can occur.
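As an illustrative sketch (the set representation and function name are assumptions, not part of the invention), the step-two conversion from a basic probability distribution function to a plausibility function can be written as:

```python
def plausibility(m, A):
    """Pl(A) = sum of m(B) over all focal elements B with B intersecting A."""
    return sum(mass for B, mass in m.items() if B & A)

# A small basic probability distribution on the frame {F1, F2}:
m = {frozenset({"F1"}): 0.5, frozenset({"F1", "F2"}): 0.5}
# Pl({F1}) = 0.5 + 0.5 = 1.0; Pl({F2}) = 0.5 (only the full frame meets it)
```

Here m maps each focal element (a frozenset) to its mass, and Pl(A) is the maximum probability that A can receive.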
Step three: input the inference information from the determined space E_e to the uncertain space E_s. The inference information is represented in the form of a conditional plausibility function Pl(B|A), yielding a conditional plausibility table. When the inference information is expressed as a conditional probability distribution function, it must first be converted into a conditional plausibility function via Pl(B|A) = Σ_{C: C∩B≠∅} m(C|A). When the inference information is incomplete, the conditional plausibility function may also be incomplete.
Step four: determine the recognition space E_r according to the determined space E_e, the uncertain space E_s and the conditional plausibility function between them. The recognition space is determined as follows:
When an element of the determined space E_e is incompatible with an element of the uncertain space E_s, no conditional plausibility function Pl(B|A) can be given for that pair and inference is impossible, so a recognition space E_r needs to be determined. Each element of the recognition space comes from the determined space E_e, is compatible with the uncertain space E_s, and can be identified in the recognition space E_r; E_r comprises all conditional elements A of the plausibility functions Pl(B|A).
Step five: calculate the joint plausibility function from the constructed evidence reasoning network and the conditional plausibility function. The joint plausibility function is the plausibility function Pl_sr(A × B) on E_r × E_s, computed with the formula Pl_sr(A × B) = Pl_e(A) · Pl(B|A): the known determined-space plausibility function is multiplied element-wise with the conditional plausibility table and divided by the recognition-space plausibility value, so as to eliminate the influence of incompatibility between elements of E_e and elements of E_s.
Step six: using the minimum specificity principle, convert the joint plausibility function Pl_sr(A × B) into a joint basic probability distribution function m_sr(A × B). The specificity of a basic probability distribution function is calculated as S(m) = Σ_A m(A)/|A|. The conversion must minimize the specificity; the algorithm used to convert the joint plausibility function into the joint basic probability distribution function is as follows:
(1) Initialize m(·) such that m(E) = 1;
(2) Consider the constraint sets B_j of the known plausibility functions in descending order of the number of elements they contain; if two sets contain the same number of elements, their order is not considered;
(3) For the current B_j, calculate the excess Δ_j = Σ_{A_i: A_i∩B_j≠∅} m(A_i) − Pl(B_j);
(4) If Δ_j = 0, proceed to the next B_j;
(5) Otherwise, consider in turn each focal element A_i with A_i ∩ B_j ≠ ∅;
(6) If Δ_j > m(A_i), move m(A_i) to A_i − B_j and recalculate Δ_j; if still Δ_j > m(A_i), repeat the same operation on the next m(A_i);
if Δ_j = m(A_i), move m(A_i) to A_i − B_j and proceed to the next B_j;
if Δ_j < m(A_i), move Δ_j to A_i − B_j, leaving (m(A_i) − Δ_j) on A_i;
(7) When all known plausibility constraints B_j have been considered, the resulting m(·) is the required joint basic probability distribution function.
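To make the minimum specificity principle concrete, the sketch below (with illustrative names, not taken from the invention) computes the specificity S(m) = Σ_A m(A)/|A| of two candidate distributions and shows that mass placed on larger sets yields the lower, i.e. preferred, specificity:

```python
def specificity(m):
    """S(m) = sum over focal elements A of m(A) / |A|."""
    return sum(mass / len(A) for A, mass in m.items())

# Same unit mass, allocated vaguely vs. sharply on the frame {a, b}:
m_vague = {frozenset({"a", "b"}): 1.0}                      # S = 0.5
m_sharp = {frozenset({"a"}): 0.5, frozenset({"b"}): 0.5}    # S = 1.0
# The minimum specificity principle keeps mass on the largest sets that the
# plausibility constraints allow, so m_vague is preferred over m_sharp.
```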
step seven: according to the formulaWill calculate the result Ee×EsProjecting the joint probability distribution function on the space to obtain the space E in the uncertain spacesThe probability distribution function of (1).
Step eight: according to the formulaWill calculate the resulting uncertainty space EsThe above probability distribution function translates into a probability distribution. Judging the recognition result according to the obtained probability result if P (A)i) If the maximum probability of (A) is greater than 0.5, then P (A) is selectedi) And the category corresponding to the medium maximum probability is taken as a target identification result (otherwise, the target category cannot be judged).
Compared with the prior art, the invention has the following beneficial effects:
1. The steps of the invention are simple, the design is reasonable, and it is convenient to implement, use and operate.
2. The invention extends the evidence network's tolerance of uncertainty in the parent-node information.
3. The method can effectively infer and identify the target type when the known information is limited.
4. The plausibility-form information representation provided by the invention can effectively handle fuzzy information.
In conclusion, the technical scheme of the invention is reasonably designed: it expresses the known target information and the inference information with plausibility functions, performs target inference and identification with the conditional plausibility formula, extends tolerance of information uncertainty, handles fuzzy information effectively, and can identify the target type effectively when the known information is limited.
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Drawings
FIG. 1 is a general flow chart of an implementation of the present invention.
FIG. 2 is the evidence reasoning network constructed in the embodiment.
FIG. 3 is the conditional plausibility table obtained in the embodiment.
Detailed Description
The invention is further illustrated with reference to the following figures and examples. An example is given here in which sensors identify the model of an aircraft in an airspace. The sensors can detect two attributes of the aircraft: the flying speed S and the flying height H. The aircraft model T is inferred from these two attributes, which illustrates the implementation steps of the proposed target identification.
It should be noted that, in the present application, the embodiments and the features of the embodiments may be combined with each other without conflict. The present invention will be described in detail below with reference to the accompanying drawings and in conjunction with the embodiments.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit example embodiments according to the present application. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise; it should also be understood that the terms "comprises" and/or "comprising", when used in this specification, specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof.
It should be noted that the terms "first," "second," and the like in the description and claims of this application and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are, for example, capable of operation in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Spatially relative terms, such as "over", "above", "on", "upper" and the like, may be used herein for ease of description to describe one device or feature's spatial relationship to another device or feature as illustrated in the figures. It will be understood that the spatially relative terms are intended to encompass different orientations of the device in use or operation in addition to the orientation depicted in the figures. For example, if a device in the figures is turned over, devices described as "above" or "on" other devices or configurations would then be oriented "below" or "under" the other devices or configurations. Thus, the exemplary term "above" can encompass both an orientation of "above" and an orientation of "below". The device may be otherwise oriented (rotated 90 degrees or at other orientations) and the spatially relative descriptors used herein interpreted accordingly.
As shown in the attached figure 1, the uncertain information reasoning target identification method based on the evidence network comprises the following steps:
The method comprises the following steps:
Step one: input the determined space E_e of the evidence reasoning network. The determined space is the set of parent-node identification frames of the evidence reasoning network, comprising 2 identification frames Θ_s = {FS, LS} and Θ_h = {HA, LA}, where FS and LS represent high-speed and low-speed flight, respectively, and HA and LA represent high-altitude and low-altitude flight, respectively. In evidence theory, the basic probability distribution function is defined as follows: if for any subset A of Θ = {F_1, F_2, ..., F_n} we have m(A) ∈ [0, 1], m(∅) = 0 and Σ_{A⊆Θ} m(A) = 1, then m is a basic probability distribution function on 2^Θ, where 2^Θ is the power set of the identification frame. The set of child-node identification frames of the evidence reasoning network is taken as the uncertain space E_s, comprising 1 identification frame Θ_t = {AS, GA}, where AS and GA represent an air-superiority aircraft and a ground-attack aircraft, respectively. The complete evidence reasoning network thus constructed is shown in FIG. 2.
The two sets of basic probability distribution functions over the input determination space are as follows:
m(FS) = 0.6, m(LS) = 0.3, m(FS, LS) = 0.1
m(HA) = 0.8, m(LA) = 0.1, m(HA, LA) = 0.1
Step two: convert the basic probability distribution function on the determined space E_e into a plausibility function. The conversion is as follows:
Using the formula Pl(A) = Σ_{B: B∩A≠∅} m(B), the basic probability masses of all subsets intersecting A are summed to obtain the plausibility function, i.e. the maximum probability that the subset can occur. The following results are obtained:
Pl(FS)=m(FS)+m(FS,LS)=0.7 Pl(LS)=m(LS)+m(FS,LS)=0.4
Pl(FS,LS)=m(FS)+m(LS)+m(FS,LS)=1 Pl(HA)=m(HA)+m(HA,LA)=0.9
Pl(HA,LA)=m(HA)+m(LA)+m(HA,LA)=1 Pl(LA)=m(LA)+m(HA,LA)=0.2
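The plausibility values above can be reproduced by summing the masses of intersecting subsets (a sketch; the set encoding is an assumption):

```python
def plausibility(m, A):
    # Pl(A): total mass of focal elements that intersect A
    return sum(mass for B, mass in m.items() if B & A)

m_speed = {frozenset({"FS"}): 0.6, frozenset({"LS"}): 0.3,
           frozenset({"FS", "LS"}): 0.1}
m_height = {frozenset({"HA"}): 0.8, frozenset({"LA"}): 0.1,
            frozenset({"HA", "LA"}): 0.1}
# Pl(FS) = 0.7, Pl(LS) = 0.4, Pl(FS, LS) = 1, Pl(HA) = 0.9, Pl(LA) = 0.2
```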
Step three: input the inference information from the determined space E_e to the uncertain space E_s. The inference information is represented in the form of a conditional plausibility function Pl(B|A), yielding a conditional plausibility table. When the inference information is expressed as a conditional probability distribution function, it must first be converted into a conditional plausibility function via Pl(B|A) = Σ_{C: C∩B≠∅} m(C|A); when the inference information is incomplete, the conditional plausibility function may also be incomplete. In this example the input inference information is already a conditional plausibility function, so no further conversion is needed; the conditional plausibility table obtained is shown in FIG. 3.
Step four: according to a determined space EeUncertain space EsAnd a conditional plausibility function between them determines the recognition space ErThe method for determining the identification space comprises the following steps:
when space E is determinedeElement (E) and uncertainty space EsWhen the elements in (1) are incompatible, they cannot be givenSuch a conditional plausibility function makes inference impossible, and therefore a recognition space E needs to be determinedr. Each element in the recognition space is from the determination space EeAnd do not determine space EsCan be in the recognition space ErIs identified. Er{ FS }, { LS }, { FS, LS }, { HA }, { LA }, { HA, LA } } represent all plausibility functionsAll of the conditional elements in (1).
Step five: calculating a combined plausibility function through the established evidence reasoning network and the conditional plausibility function, wherein the combined plausibility function is Er×EsAbove plausibility function Plsr(A × B) the joint plausibility function calculation method is that a formula is usedTo calculate Er×EsA joint plausibility function of whereinMultiplying the known definite space plausibility function with the conditional plausibility table correspondingly, and dividing by the identification space plausibility value to eliminate EeElement (E) and uncertainty space EsIs incompatible with the influence of the elements in (1).
First, Pl_e on the recognition space E_r is calculated; the generation of the joint plausibility function is illustrated with Pl_sr(AS × (FS, HA)):
Pl_e(FS, HA) = Pl(FS) × Pl(HA) = 0.7 × 0.9 = 0.63
The remaining values are obtained in the same way:
Pl_sr((AS, GA) × (FS, HA)) = 0.63    Pl_sr((AS, GA) × (LS, HA)) = 0.36
Pl_sr((AS, GA) × (LS, LA)) = 0.08    Pl_sr((AS, GA) × (FS, LA)) = 0.14
Pl_sr(GA × (FS, HA)) = 0.063    Pl_sr(AS × (LS, HA)) = 0.288
Pl_sr(GA × (LS, HA)) = 0.108    Pl_sr(AS × (LS, LA)) = 0.08
Pl_sr(GA × (LS, LA)) = 0.064
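The full-frame rows of the table above can be reproduced from the step-two plausibilities; here Pl((AS, GA)|·) = 1 is assumed (the whole child frame is always fully plausible), so Pl_sr reduces to the product of the two parent plausibilities:

```python
pl = {"FS": 0.7, "LS": 0.4, "HA": 0.9, "LA": 0.2}  # step-two results

def pl_e(speed, height):
    # Plausibility of a (speed, height) pair on the recognition space
    return pl[speed] * pl[height]

# Pl_sr((AS,GA) x (s,h)) = Pl((AS,GA)|(s,h)) * Pl_e(s,h) with Pl((AS,GA)|.) = 1:
joint_full = {(s, h): pl_e(s, h) for s in ("FS", "LS") for h in ("HA", "LA")}
# e.g. joint_full[("FS", "HA")] is approximately 0.63, matching the table
```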
Step six: using the minimum specificity principle, convert the joint plausibility function Pl_sr(A × B) into a joint basic probability distribution function m_sr(A × B). The specificity of a basic probability distribution function is calculated as S(m) = Σ_A m(A)/|A|, and the specificity should be minimized during the conversion. Taking Pl_sr((AS, GA) × (FS, HA)) = 0.63 as an example, the conversion of the joint plausibility function into a joint basic probability distribution function proceeds as follows:
(1) Initialize the target BPA: m(E) = 1;
(2) Since plausibility constraints containing more elements are considered first (the order of constraints with the same number of elements is not considered), Pl_sr((AS, GA) × (FS, HA)) = 0.63 is introduced first;
(3) Calculate Δ for (AS, GA) × (FS, HA): Δ = m(E) − Pl_sr((AS, GA) × (FS, HA)) = 0.36;
(5) Since Δ < m(E), Δ is assigned to E − ((AS, GA) × (FS, HA)) and the remainder (m(E) − Δ) stays on E:
m_sr(E) = 0.64, m_sr((AS, GA) × (FS, LA) ∪ (AS, GA) × (LS, LA) ∪ (AS, GA) × (LS, HA)) = 0.36
(6) Calculate for the next B_j according to the minimum specificity algorithm above until all B_j have been allocated. After allocation is complete, the joint basic probability distribution function generated is as follows:
m_sr((AS, GA) × (LS, HA)) = 0.108    m_sr((AS, GA) × (FS, HA)) = 0.063
m_sr(AS × (FS, HA) ∪ (AS, GA) × (LS, LA)) = 0.064    m_sr(AS × (FS, HA)) = 0.497
m_sr((AS, GA) × (FS, LA) ∪ AS × (LS, HA)) = 0.058    m_sr((AS, GA) × (FS, LA)) = 0.072
m_sr((AS, GA) × (FS, LA) ∪ AS × (LS, LA)) = 0.01    m_sr(AS × (LS, HA)) = 0.122
m_sr(AS × (FS, HA) ∪ AS × (LS, LA)) = 0.006
Step seven: according to the formula m_s(B) = Σ_A m_sr(A × B), project the joint basic probability distribution function calculated on E_e × E_s onto the uncertain space E_s to obtain the basic probability distribution function on E_s. The calculation is as follows:
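A sketch of the step-seven projection (the child-side components of the union-type focal elements are read as the unions of their child parts, an interpretation consistent with the step-eight values):

```python
# (child-side projection, mass) for each focal element of the joint BPA above
m_joint = [
    (frozenset({"AS", "GA"}), 0.108), (frozenset({"AS", "GA"}), 0.063),
    (frozenset({"AS", "GA"}), 0.064), (frozenset({"AS"}), 0.497),
    (frozenset({"AS", "GA"}), 0.058), (frozenset({"AS", "GA"}), 0.072),
    (frozenset({"AS", "GA"}), 0.01),  (frozenset({"AS"}), 0.122),
    (frozenset({"AS"}), 0.006),
]

def project(pairs):
    # m_s(B) = sum of joint masses whose focal elements project onto B
    out = {}
    for B, mass in pairs:
        out[B] = out.get(B, 0.0) + mass
    return out

m_s = project(m_joint)
# m_s({AS}) = 0.625, m_s({AS, GA}) = 0.375
```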
Step eight: according to a basic-probability-to-probability transformation formula, convert the calculated basic probability distribution function on the uncertain space E_s into a probability form. The recognition result is judged from the obtained probabilities: if the maximum probability in P(A_i) is greater than 0.5, the category corresponding to the maximum probability in P(A_i) is taken as the target identification result; otherwise, the target category cannot be judged.
First, the plausibility and belief functions are calculated:
Bel(AS) = m(AS) = 0.625, Pl(AS) = m(AS) + m(AS, GA) = 1
Bel(GA) = 0, Pl(GA) = m(AS, GA) = 0.375
Then the probabilities of AS and GA are calculated.
Since the maximum probability in the probability distribution P is P(AS) and P(AS) is greater than 0.5, the identification result is that the flying object is an air-superiority aircraft.
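The step-eight quantities can be checked numerically (a sketch; Bel and Pl follow their standard definitions, and the greater-than-0.5 judgment is applied to the resulting interval):

```python
m = {frozenset({"AS"}): 0.625, frozenset({"AS", "GA"}): 0.375}

def bel(m, A):
    # Bel(A): total mass of focal elements contained in A
    return sum(v for B, v in m.items() if B <= A)

def pl(m, A):
    # Pl(A): total mass of focal elements intersecting A
    return sum(v for B, v in m.items() if B & A)

# Bel(AS) = 0.625, Pl(AS) = 1; Bel(GA) = 0, Pl(GA) = 0.375.
# Any probability P(AS) in [Bel(AS), Pl(AS)] exceeds 0.5, so AS is selected.
```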
Claims (1)
1. An uncertain information reasoning target identification method based on an evidence network, characterized by comprising the following steps:
step one: input the determined space E_e of the evidence reasoning network, the determined space being the set of parent-node identification frames of the evidence reasoning network, comprising k identification frames Θ_i, i = 1, 2, ..., k; in evidence theory the basic probability distribution function is defined as follows: if for any subset A of Θ = {F_1, F_2, ..., F_n} we have m(A) ∈ [0, 1], m(∅) = 0 and Σ_{A⊆Θ} m(A) = 1, then m is a basic probability distribution function on 2^Θ, where 2^Θ is the power set of the identification frame; take the set of child-node identification frames of the evidence reasoning network as the uncertain space E_s, thus constructing a complete evidence reasoning network;
step two: convert the basic probability distribution function on the determined space E_e into a plausibility function, the conversion being: using the formula Pl(A) = Σ_{B: B∩A≠∅} m(B), sum the basic probability masses of all subsets intersecting A to obtain the plausibility function, i.e. the maximum probability that the subset can occur;
step three: input the inference information from the determined space E_e to the uncertain space E_s, the inference information being represented in the form of a conditional plausibility function Pl(B|A) to obtain a conditional plausibility table; when the inference information is expressed as a conditional probability distribution function, convert it into a conditional plausibility function via Pl(B|A) = Σ_{C: C∩B≠∅} m(C|A); when the inference information is incomplete, the conditional plausibility function may also be incomplete;
step four: determine the recognition space E_r according to the determined space E_e, the uncertain space E_s and the conditional plausibility function between them, the determination being: when an element of the determined space E_e is incompatible with an element of the uncertain space E_s, no conditional plausibility function Pl(B|A) can be given and inference is impossible, so a recognition space E_r needs to be determined; each element of the recognition space comes from the determined space E_e, is compatible with the uncertain space E_s, and can be identified in the recognition space E_r; E_r comprises all conditional elements of the plausibility functions Pl(B|A);
step five: calculate the joint plausibility function from the constructed evidence reasoning network and the conditional plausibility function, the joint plausibility function being the plausibility function Pl_sr(A × B) on E_r × E_s, computed using the formula Pl_sr(A × B) = Pl_e(A) · Pl(B|A): the known determined-space plausibility function is multiplied element-wise with the conditional plausibility table and divided by the recognition-space plausibility value, so as to eliminate the influence of incompatibility between elements of E_e and elements of E_s;
step six: using the minimum specificity principle, convert the joint plausibility function Pl_sr(A × B) into a joint basic probability distribution function m_sr(A × B), the specificity of a basic probability distribution function being calculated as S(m) = Σ_A m(A)/|A|; the conversion must minimize the specificity, and the algorithm used to convert the joint plausibility function into the joint basic probability distribution function is as follows:
(1) initialize m(·) such that m(E) = 1;
(2) consider the constraint sets B_j of the known plausibility functions in descending order of the number of elements they contain; if two sets contain the same number of elements, their order is not considered;
(3) for the current B_j, calculate the excess Δ_j = Σ_{A_i: A_i∩B_j≠∅} m(A_i) − Pl(B_j);
(4) if Δ_j = 0, proceed to the next B_j;
(5) otherwise, consider in turn each focal element A_i with A_i ∩ B_j ≠ ∅;
(6) if Δ_j > m(A_i), move m(A_i) to A_i − B_j and recalculate Δ_j; if still Δ_j > m(A_i), repeat the same operation on the next m(A_i);
if Δ_j = m(A_i), move m(A_i) to A_i − B_j and proceed to the next B_j;
if Δ_j < m(A_i), move Δ_j to A_i − B_j, leaving (m(A_i) − Δ_j) on A_i;
(7) when all known plausibility constraints B_j have been considered, the resulting m(·) is the required joint basic probability distribution function;
step seven: according to the formula m_s(B) = Σ_A m_sr(A × B), project the joint basic probability distribution function calculated on E_e × E_s onto the uncertain space E_s to obtain the basic probability distribution function on E_s;
step eight: according to a basic-probability-to-probability transformation formula, convert the calculated basic probability distribution function on the uncertain space E_s into a probability distribution; judge the recognition result from the obtained probabilities: if the maximum probability in P(A_i) is greater than 0.5, take the category corresponding to the maximum probability in P(A_i) as the target identification result; otherwise, the target category cannot be judged.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010320253.2A CN111563596B (en) | 2020-04-22 | 2020-04-22 | Uncertain information reasoning target identification method based on evidence network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111563596A true CN111563596A (en) | 2020-08-21 |
CN111563596B CN111563596B (en) | 2022-06-03 |
Family
ID=72074284
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112733915A (en) * | 2020-12-31 | 2021-04-30 | 大连大学 | Situation estimation method based on improved D-S evidence theory |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102393912A (en) * | 2011-11-01 | 2012-03-28 | 中国电子科技集团公司第二十八研究所 | Comprehensive target identification method based on uncertain reasoning |
US20130110568A1 (en) * | 2011-11-02 | 2013-05-02 | International Business Machines Corporation | Assigning work orders with conflicting evidences in services |
CN103577707A (en) * | 2013-11-15 | 2014-02-12 | 上海交通大学 | Robot failure diagnosis method achieved by multi-mode fusion inference |
CN104331582A (en) * | 2014-11-24 | 2015-02-04 | 云南师范大学 | D-S evidence theory-based method for building urban land expansion simulation model |
CN106056163A (en) * | 2016-06-08 | 2016-10-26 | 重庆邮电大学 | Multi-sensor information fusion object identification method |
CN107871138A (en) * | 2017-11-01 | 2018-04-03 | 电子科技大学 | A kind of target intention recognition methods based on improvement D S evidence theories |
CN107967449A (en) * | 2017-11-13 | 2018-04-27 | 西北工业大学 | A kind of multispectral image unknown object recognition methods based on broad sense evidence theory |
CN108985219A (en) * | 2018-07-11 | 2018-12-11 | 河海大学常州校区 | A kind of road collision alerts system and method based on multisource information fusion technology |
Non-Patent Citations (4)

Title |
---|
HAIBIN LIU et al.: "A new method to air target threat evaluation based on Dempster-Shafer evidence theory", 2018 Chinese Control and Decision Conference (CCDC) *
PINGPING WANG et al.: "Research on the fuzzy set theory of evidence fusion algorithm with time-varying in multi-sensor detection network", 2014 IEEE International Conference on Signal Processing, Communications and Computing (ICSPCC) *
XU Fanfan: "Research on pattern classification problems based on D-S evidence theory", China Master's Theses Full-text Database, Information Science and Technology *
JIANG Wen et al.: "Research on methods of transforming basic probability assignment into probability in D-S evidence theory", Journal of Northwestern Polytechnical University *
Legal Events

Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |