CN114841082A - Evaluation method of target classification model for identifying model of unmanned aerial vehicle - Google Patents


Info

Publication number
CN114841082A
Authority
CN
China
Prior art keywords
model
classification
intuitive
evaluation
gray
Prior art date
Legal status
Pending
Application number
CN202210765323.4A
Other languages
Chinese (zh)
Inventor
李佳航
闫梦龙
王晓明
周新鹏
回新强
孙景波
Current Assignee
Sapai Intelligent Technology Co ltd
Original Assignee
Sapai Intelligent Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Sapai Intelligent Technology Co ltd filed Critical Sapai Intelligent Technology Co ltd
Priority to CN202210765323.4A
Publication of CN114841082A
Legal status: Pending

Classifications

    • G06F30/27 — Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] (under G06F30, Computer-aided design [CAD])
    • G06F17/16 — Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • G06F18/214 — Generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F18/24 — Classification techniques
    • G06N5/048 — Fuzzy inferencing
    • Y02T10/40 — Engine management systems (under Y02T, Climate change mitigation technologies related to transportation)


Abstract

The invention relates to a method for evaluating target classification models that identify the model of an unmanned aerial vehicle. The method combines point gray scale with intuitionistic fuzziness, constructs a degree of separation for gray intuitionistic fuzzy numbers, obtains a gray intuitionistic fuzzy evaluation model of general applicability, and builds a target classification network model evaluation method on that basis. The steps are: construct a hierarchical analysis structure for the factors influencing the evaluation of a target classification model; determine the target classification models to be evaluated; compute the weight vector of the gray intuitionistic fuzzy evaluation model; compute the evaluation matrix of the gray intuitionistic fuzzy evaluation model; and compute the relative closeness of each candidate scheme.

Description

Evaluation method of a target classification model for identifying the model of an unmanned aerial vehicle
Technical Field
The invention relates to a method for evaluating target classification models that identify the model of an unmanned aerial vehicle: it assesses several such classification methods on the basis of gray intuitionistic fuzziness, and belongs to the technical field of computer vision.
Background
In recent years, computer vision has advanced rapidly. Target classification in particular has made great progress, and target classification networks based on deep neural networks have begun to form application systems in fields such as video surveillance, autonomous driving, and intelligent healthcare. A target classification network is trained on a limited set of sample data, and the effectiveness of the model is then judged by testing it on a test data set. In machine learning, the final model is typically selected after evaluating several candidate models, a process called model selection.
In general, simply comparing test results does not effectively reflect a model's generalization ability in practical applications. On the one hand, problems with many dimensions and little data can lead to overfitting. On the other hand, the data set in a real project is usually divided randomly into a training set and a test set, and the model is trained and evaluated accordingly; the test set and the training set then follow the same data distribution, because both are sampled from data with similar scene content and imaging conditions, whereas in practical applications the test images may come from a distribution different from that of the training data.
During model evaluation, two kinds of uncertainty arise. On the one hand, the acquired image information is dynamic and diverse and therefore contains components whose concepts are unclear; this property is called fuzziness. On the other hand, because the test data are limited, the amount of information they provide is insufficient and the basis for judgment is incomplete; this produces grayness.
Gray models based on gray theory and intuitionistic fuzzy sets are well suited to describing and analyzing problems of uncertainty and ambiguity. Because an intuitionistic fuzzy set records both membership and non-membership information, it can carry more information than an ordinary fuzzy set and describes uncertainty more realistically; gray system theory, in turn, focuses on uncertainty problems with small samples and poor information.
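The membership/non-membership structure of an intuitionistic (intuitive) fuzzy number can be made concrete in code. A minimal sketch follows; the class and field names are illustrative, not taken from the patent:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class IFN:
    """Intuitionistic fuzzy number: evidence for (mu) and against (nu)."""
    mu: float  # membership degree
    nu: float  # non-membership degree

    def __post_init__(self):
        # mu and nu need not sum to 1; the remainder is hesitation.
        if not (0.0 <= self.mu and 0.0 <= self.nu and self.mu + self.nu <= 1.0):
            raise ValueError("require mu, nu >= 0 and mu + nu <= 1")

    @property
    def pi(self) -> float:
        """Hesitation degree: the part the evidence leaves undecided."""
        return 1.0 - self.mu - self.nu

# An assessment that supports a scheme with degree 0.7, opposes it with
# degree 0.2, and leaves 0.1 undecided:
a = IFN(mu=0.7, nu=0.2)
```

The extra non-membership component is what lets this representation carry more information than an ordinary fuzzy membership value alone.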
Disclosure of Invention
In view of the above, the present invention aims to provide an evaluation method for target classification models that identify the model of an unmanned aerial vehicle: it evaluates several such classification methods on the basis of gray intuitionistic fuzziness and thereby selects the optimal model. The method combines point gray scale with intuitionistic fuzziness, obtains a gray intuitionistic fuzzy evaluation model of general applicability, and builds a target classification network model evaluation method on top of it.
To achieve the above purpose, the invention adopts the following technical scheme:
The evaluation method for target classification models that identify the model of an unmanned aerial vehicle is based on gray intuitionistic fuzziness and specifically comprises the following steps:
Step one: construct a hierarchical analysis structure for the factors that influence the evaluation of a target classification model.
The hierarchical analysis structure comprises an index layer and a factor layer; the set of factors influencing the final evaluation of the model is U = {u1, …, um}. The index layer comprises classification accuracy and model inference speed; the factor set comprises the classification accuracy of each classification model for the different classification categories, u1, …, u(m−1), and the inference time of each model, um.
Step two: determine the target classification models to be evaluated; the classification models form the scheme set A = {a1, …, an}.
Step three: compute the weight vector of the gray intuitionistic fuzzy evaluation model,

ω = (ω1(g°1), ω2(g°2), …, ωm(g°m))    (1)

where ωi denotes the weight of the i-th evaluation factor and g°i denotes the gray scale of ωi; the weight part ωi is determined by the analytic hierarchy process.
Step four: compute the evaluation matrix of the gray intuitionistic fuzzy evaluation model,

R = (αij(g°ij)), i = 1, …, m, j = 1, …, n    (2)

The gray intuitionistic fuzzy evaluation matrix is defined on the space U × A, where U represents the factor set and A represents the scheme set; αij denotes an intuitionistic fuzzy number and g°ij denotes the point gray scale, so the evaluation matrix consists of the gray intuitionistic fuzzy numbers αij(g°ij).
The intuitionistic fuzzy numbers are determined as follows: k evaluation subjects each give a score to every scheme under every factor, yielding k scoring matrices. Each scoring matrix is transformed into dimensionless form, and the dimensionless scoring interval is divided into three subsets, representing the attitudes of support, neutrality, and opposition. The probabilities that the k scores fall into the three subsets give the scheme's membership degree μ, hesitation degree π, and non-membership degree ν with respect to that factor.
Step five: from the evaluation matrix R, screen out the positive optimal solution and the negative optimal solution, and compute the incidence matrix of each candidate scheme with the positive optimal solution and with the negative optimal solution. Specifically:
The positive optimal solution is α+ = (α1+, …, αm+), where αi+ is the best gray intuitionistic fuzzy number of the i-th factor over all candidate schemes; the negative optimal solution is α− = (α1−, …, αm−), where αi− is the worst such value.
The incidence matrix of the j-th classification model with the positive optimal solution collects the separation degrees between column j of R and α+; the incidence matrix of the j-th classification model with the negative optimal solution collects, in the same way, the separation degrees with respect to α−.
The relative relevance of the j-th candidate scheme to the positive optimal solution is obtained from these separation degrees, weighted by ω, according to formula (3); the relative relevance of the j-th candidate scheme to the negative optimal solution is obtained according to formula (4). In formulas (3) and (4), the weight enters through its gray modulus, given by formulas (5) and (6); s(αij, αi+) denotes the separation degree of the gray intuitionistic fuzzy numbers αij and αi+, and s(αij, αi−) denotes the separation degree of αij and αi−.
The separation degree of gray intuitionistic fuzzy numbers is computed as follows: for two gray intuitionistic fuzzy numbers α1(g°1) and α2(g°2), the separation degree is defined from the Hamming distance of the intuitionistic fuzzy numbers α1 and α2 together with the point gray scales g°1 and g°2.
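The Hamming distance between two intuitionistic fuzzy numbers has a widely used standard form, d(α1, α2) = ½(|μ1−μ2| + |ν1−ν2| + |π1−π2|). The patent's exact combination of this distance with the point gray scales survives only as an image, so the separation degree sketched below, which inflates the distance by the gray-scale difference, is an assumption:

```python
def hamming(a, b):
    """Standard Hamming distance between two intuitionistic fuzzy numbers,
    each given as (mu, nu), with hesitation pi = 1 - mu - nu."""
    (mu1, nu1), (mu2, nu2) = a, b
    pi1, pi2 = 1 - mu1 - nu1, 1 - mu2 - nu2
    return 0.5 * (abs(mu1 - mu2) + abs(nu1 - nu2) + abs(pi1 - pi2))

def separation(a, g_a, b, g_b):
    """Hypothetical separation degree of gray intuitionistic fuzzy numbers
    a(g_a) and b(g_b): the Hamming distance of the intuitionistic parts,
    inflated by the difference of the point gray scales.  The patent's own
    combination is available only as an image."""
    return hamming(a, b) * (1 + abs(g_a - g_b))

# Two crisp, opposite judgements are maximally distant:
d = hamming((1.0, 0.0), (0.0, 1.0))  # 1.0
```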
Step six: compute the relative closeness Cj of the j-th candidate scheme from its relative relevance to the positive optimal solution and to the negative optimal solution, where λ+ and λ− are preference coefficients with λ+ + λ− = 1. The larger Cj is, the better the j-th classification model.
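Step six follows the familiar TOPSIS pattern of trading off closeness to the positive and negative ideals. Because the closeness formula itself is given only as an image, the ratio form below, weighted by the preference coefficients, is an assumed stand-in, and the model names and relevance values are purely illustrative:

```python
def relative_closeness(r_pos, r_neg, lam_pos=0.5, lam_neg=0.5):
    """Assumed TOPSIS-style relative closeness of one candidate scheme.

    r_pos / r_neg: relative relevance to the positive / negative optimal
    solution; lam_pos, lam_neg: preference coefficients, summing to 1.
    The patent's exact formula is given only as an image."""
    assert abs(lam_pos + lam_neg - 1.0) < 1e-9
    return lam_pos * r_pos / (lam_pos * r_pos + lam_neg * r_neg)

# Illustrative relevance values for three candidate models:
relevance = {"mobilenet": (0.6, 0.8), "darknet19": (0.7, 0.6),
             "darknet53": (0.9, 0.4)}
scores = {name: relative_closeness(rp, rn)
          for name, (rp, rn) in relevance.items()}
best = max(scores, key=scores.get)  # the larger the closeness, the better
```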
Furthermore, the analytic hierarchy process is a simple method for solving decision problems that involve complex relations and fuzzy concepts; it can select the optimal scheme according to a group of evaluation indexes.
Further, in comparing gray intuitionistic fuzzy numbers, the intuitionistic fuzzy part is ranked by a score function.
Further, the intuitionistic fuzzy numbers and point gray scales are calculated as follows:
1) Intuitionistic fuzzy numbers for the classification accuracy of the different classification categories:
The target classification model maps its output to a probability value P for each possible class with a Softmax function. After the dimensionless transformation, the result for the class labeled 1 is still P, and the result for a class labeled 0 is 1 − P, so every transformed result falls into the interval [0, 1]. The interval [0, 1] is divided into three subsets, representing respectively a wrong classification result, a poor tendency of the classification result, and a correct classification result. Counting the classification results of each classification model aj on the test data set, the membership degree of model aj to factor ui is obtained as Nright/N, where N is the total number of test samples and Nright is the number of samples classified correctly, i.e. whose output falls into the "correct" subset. The non-membership degree and the hesitation degree of the intuitionistic fuzzy number are computed in the same way.
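The counting procedure above can be sketched as follows; the two interval boundaries that split [0, 1] into the three subsets are illustrative assumptions, since no numeric boundaries survive in the text:

```python
def accuracy_ifn(scores, t_low=0.5, t_high=0.8):
    """Intuitionistic fuzzy number (mu, nu, pi) for one model on one
    category, from the dimensionless softmax outputs for the true class.

    The interval [0, 1] is split into three subsets:
      [0, t_low)      -> wrong result   (counts toward non-membership)
      [t_low, t_high) -> poor tendency  (counts toward hesitation)
      [t_high, 1]     -> correct result (counts toward membership)
    The boundaries t_low and t_high are illustrative assumptions."""
    n = len(scores)
    right = sum(1 for p in scores if p >= t_high)
    wrong = sum(1 for p in scores if p < t_low)
    mu = right / n  # fraction of clearly correct samples
    nu = wrong / n  # fraction of wrong samples
    return mu, nu, 1.0 - mu - nu

mu, nu, pi = accuracy_ifn([0.95, 0.9, 0.7, 0.3, 0.85])
```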
2) Intuitionistic fuzzy numbers for the model inference speed:
Let T be the upper bound of the inference time of each model and let tj be the inference time of model aj. The membership degree μj of model aj to the inference-speed index and the corresponding non-membership degree νj are then computed from tj and T, for j = 1, …, n.
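The membership formulas for the speed index survive only as images; a natural linear reconstruction consistent with the surrounding text (faster inference gives higher membership) would be:

```python
def speed_ifn(t_j, t_max=50.0):
    """Assumed membership/non-membership of model j to the inference-speed
    index; t_j is the inference time and t_max the upper bound on inference
    time (50 ms in the embodiment).  This linear form is a reconstruction,
    not the patent's image formula."""
    if not 0.0 <= t_j <= t_max:
        raise ValueError("inference time must lie in [0, t_max]")
    mu = (t_max - t_j) / t_max  # faster -> larger membership
    nu = t_j / t_max            # slower -> larger non-membership
    return mu, nu

mu_j, nu_j = speed_ifn(10.0)  # a model that infers in 10 ms
```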
3) Point gray scale for the classification accuracy of the different classification categories:
The test data set is analyzed and divided into a known-scene data set and an unknown-scene data set. The quantities affecting the completeness of the sample information are: the number of cameras used to capture the test data, the number of shooting scenes per camera, and the number of samples per shooting scene, with expected values E1, E2 and E3 respectively, each expected value being not less than the corresponding maximum statistic of the test data.
In the test data set, for each class ui, let n1ij be the number of samples from scenes with the same distribution as the training data of model aj, and let n2ij be the number of samples from differently distributed scenes that did not participate in training. The point gray scale of model aj for factor ui is then computed from n1ij, n2ij and the expected values E1, E2, E3, for i = 1, …, m − 1 and j = 1, …, n.
4) The point gray scale of the model inference-speed index is 0.
The invention has the following advantages:
1. A more universal gray intuitionistic fuzzy method is established.
2. Both the fuzziness and the grayness of the target classification problem are taken into account, and a new evaluation method is established on that basis, making the selection of the optimal target classification network model more reliable.
3. The membership and non-membership of each classification model are described with intuitionistic fuzzy subsets; by combining the advantages of gray models with those of intuitionistic fuzzy subsets, target classification models can be evaluated even on original data sets with insufficient information and gray properties, and the evaluation result is more objective and reliable.
Detailed Description
This embodiment explains the method concretely for the problem of identifying the model of a civil unmanned aerial vehicle. Several target classification models for identifying the model of an unmanned aerial vehicle are evaluated, and the optimal model is selected among them.
The index layer is divided into classification accuracy and model inference speed. For the factor layer corresponding to classification accuracy, airspace targets are divided into five classes: four classes of civil unmanned aerial vehicle targets plus other moving airspace targets. The upper time limit for neural network inference is set to 50 ms. The specific steps are as follows:
the first step is as follows: constructing a hierarchical analysis structure influencing the evaluation of the target classification model;
the hierarchical analysis structure comprises an index layer and a factor layer, and the factor set influencing the final evaluation of the model is
Figure 455347DEST_PATH_IMAGE081
The index layer comprises classification accuracy and model reasoning speed; the factor set comprises the classification accuracy of each classification model to different classification categories
Figure 739698DEST_PATH_IMAGE082
And each model inference time
Figure 152225DEST_PATH_IMAGE083
As shown in table 1:
TABLE 1
Figure 914644DEST_PATH_IMAGE084
Step two, confirmTarget classification models to be evaluated, each classification model constituting a solution set
Figure 33910DEST_PATH_IMAGE085
The target classification model specifically comprises: mobilenet, darknet19 and darknet53 models.
Step three: compute the weight vector of the gray intuitionistic fuzzy evaluation model according to formula (1), where ωi denotes the weight of the i-th evaluation factor and g°i denotes the gray scale of ωi; the weight part ωi is determined by means of the analytic hierarchy process.
The contents of the index layer and of the factor layer are compared pairwise according to the rules given in Table 2, yielding an index-layer comparison matrix and a factor-layer comparison matrix. Based on the comparison rules and on experience, the factor-layer comparison matrix is set to the identity matrix. From the comparison matrices of the index layer and the factor layer, the weight vector corresponding to each comparison matrix is computed, and from these the final weight vector of the factors in the factor layer is obtained.
Each comparison matrix must pass a consistency check, performed by computing the consistency ratio CR = CI/RI, where CI is the consistency index and RI is the average random consistency index. When CR < 0.1, the matrix is considered satisfactorily consistent; otherwise the matrix does not meet the requirement and must be determined again.
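The consistency check is a standard AHP computation and can be sketched directly; the RI values below are Saaty's usual random-index table, not taken from the patent:

```python
import numpy as np

# Saaty's average random consistency index RI, indexed by matrix order.
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}

def consistency_ratio(A):
    """CR = CI / RI with CI = (lambda_max - n) / (n - 1).
    Matrices of order 1 or 2 are always consistent."""
    n = A.shape[0]
    if n <= 2:
        return 0.0
    lam_max = np.linalg.eigvals(A).real.max()
    ci = (lam_max - n) / (n - 1)
    return ci / RI[n]

# The 2x2 index-layer matrix of the embodiment is trivially consistent;
# a perfectly proportional 3x3 matrix also passes CR < 0.1.
cr_index = consistency_ratio(np.array([[1.0, 5.0], [0.2, 1.0]]))
cr_3x3 = consistency_ratio(np.array([[1, 2, 4], [0.5, 1, 2], [0.25, 0.5, 1]]))
```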
TABLE 2
Scale — Meaning
1 — The two indexes are equally important
3 — The former index is slightly more important than the latter
5 — The former index is obviously more important than the latter
7 — The former index is strongly more important than the latter
9 — The former index is extremely more important than the latter
2, 4, 6, 8 — Intermediate values of the adjacent judgments above
Reciprocal — If the importance of index i relative to index j is a, then the importance of index j relative to index i is 1/a
Specifically, for the two indexes of classification accuracy and model inference speed, the comparison matrix is obtained as:

                    accuracy   speed
accuracy               1         5
speed                 1/5        1

After performing the matrix operations and checking consistency, the final weights of the six factors of the factor layer and the corresponding weight gray scale values are obtained.
Step four: compute the evaluation matrix of the gray intuitionistic fuzzy evaluation model according to formula (2).
The gray intuitionistic fuzzy evaluation matrix is defined on the space U × A, where U represents the factor set and A represents the scheme set; αij denotes an intuitionistic fuzzy number and g°ij denotes the point gray scale, so the evaluation matrix consists of the gray intuitionistic fuzzy numbers αij(g°ij).
The intuitionistic fuzzy numbers are determined as follows: k evaluation subjects each give a score to every scheme under every factor, yielding k scoring matrices. Each scoring matrix is transformed into dimensionless form, and the dimensionless scoring interval is divided into three subsets, representing the attitudes of support, neutrality, and opposition. The probabilities that the k scores fall into the three subsets give the scheme's membership degree μ, hesitation degree π, and non-membership degree ν with respect to that factor.
The intuitionistic fuzzy numbers and point gray scales are calculated as follows:
1) Intuitionistic fuzzy numbers for the classification accuracy of the different classification categories:
The target classification model maps its output to a probability value P for each possible class with a Softmax function. After the dimensionless transformation, the result for the class labeled 1 is still P, and the result for a class labeled 0 is 1 − P, so every transformed result falls into the interval [0, 1]. The interval [0, 1] is divided into three subsets, representing respectively a wrong classification result, a poor tendency of the classification result, and a correct classification result. Counting the classification results of each classification model aj on the test data set, the membership degree of model aj to factor ui is obtained as Nright/N, where N is the total number of test samples and Nright is the number of samples classified correctly, i.e. whose output falls into the "correct" subset; the non-membership degree and the hesitation degree of the intuitionistic fuzzy number are obtained in the same way.
2) Intuitionistic fuzzy numbers for the model inference speed:
Let T be the upper bound of the inference time of each model and let tj be the inference time of model aj. The membership degree μj of model aj to the inference-speed index and the corresponding non-membership degree νj are then computed from tj and T, for j = 1, …, 3.
3) Point gray scale for the classification accuracy of the different classification categories:
The test data set is analyzed and divided into a known-scene data set and an unknown-scene data set. The quantities affecting the completeness of the sample information are: the number of cameras used to capture the test data, the number of shooting scenes per camera, and the number of samples per shooting scene, with expected values E1, E2 and E3 respectively, each expected value being not less than the corresponding maximum statistic of the test data, as shown in Table 3.
TABLE 3
In this embodiment, the expected values E1, E2 and E3 are set to specific values.
In the test data set, for each class, let n1ij be the number of samples from scenes with the same distribution as the training data of model aj, and let n2ij be the number of samples from differently distributed scenes that did not participate in training; the point gray scale of model aj for the corresponding factor is computed from these counts.
4) The point gray scale of the model inference speed is 0.
Step five: from the evaluation matrix R, screen out the positive and negative optimal solutions, and compute the incidence matrix of the j-th classification model with the positive optimal solution and the incidence matrix of the j-th classification model with the negative optimal solution. Specifically:
The positive optimal solution is α+ = (α1+, …, α6+), where αi+ is the best gray intuitionistic fuzzy number of the i-th factor over all candidate schemes; the negative optimal solution is α− = (α1−, …, α6−), where αi− is the worst such value. Here the intuitionistic fuzzy part of each entry is ranked by a score function, and the point gray scale part is compared directly by its size.
First, the
Figure 647065DEST_PATH_IMAGE128
The incidence matrix of each classification model and the positive optimal solution is as follows:
Figure 127725DEST_PATH_IMAGE135
first, the
Figure 685745DEST_PATH_IMAGE128
The incidence matrix of each classification model and the negative optimal solution is as follows:
Figure 277264DEST_PATH_IMAGE136
first, the
Figure 174812DEST_PATH_IMAGE128
The relative relevance of each classification model and the positive optimal solution is as follows:
Figure 826374DEST_PATH_IMAGE137
(3)
first, the
Figure 606111DEST_PATH_IMAGE128
The relative relevance of each classification model and the negative optimal solution is as follows:
Figure 939003DEST_PATH_IMAGE138
(4)
wherein:
Figure 753375DEST_PATH_IMAGE139
referred to as the gray-scale modulus of the weight,
Figure 575838DEST_PATH_IMAGE140
(5)
Figure 842871DEST_PATH_IMAGE141
(6)
Figure 41771DEST_PATH_IMAGE038
to represent
Figure 710650DEST_PATH_IMAGE039
And
Figure 641697DEST_PATH_IMAGE040
the degree of separation of the grey intuitive blur numbers of,
Figure 396026DEST_PATH_IMAGE041
to represent
Figure 336301DEST_PATH_IMAGE042
And
Figure 859686DEST_PATH_IMAGE043
grey intuitive fuzzy degrees of separation of (a);
the degree of separation of gray intuitive fuzzy numbers is calculated as follows: for two gray intuitive fuzzy numbers [formula], [formula] is called their degree of separation, wherein [formula] is the Hamming distance between intuitive fuzzy numbers;
step six, calculating the relative closeness [formula] of the [formula] candidate schemes:
[formula]
wherein [formula] and [formula] are preference coefficients, [formula]; the larger [formula] is, the better the candidate scheme.
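The exact combination of the two relative relevances in step six survives only as an omitted formula image. A minimal sketch, assuming a ratio-type closeness weighted by the two preference coefficients (the names `alpha`, `beta` and the ratio form itself are illustrative assumptions, not taken from the source):

```python
def relative_closeness(r_pos, r_neg, alpha=0.5, beta=0.5):
    """Closeness of each candidate model, combining its relevance to the
    positive optimal solution (r_pos) and to the negative optimal solution
    (r_neg). The ratio form is an assumption; the patent's exact expression
    is an omitted image."""
    return [alpha * p / (alpha * p + beta * n)
            for p, n in zip(r_pos, r_neg)]

# Hypothetical relevances for three candidate models; the largest closeness
# picks the preferred model, matching the "larger is better" rule above.
D = relative_closeness([0.8, 0.6, 0.7], [0.3, 0.5, 0.4])
best = max(range(len(D)), key=D.__getitem__)  # index 0 wins here
```

With equal preference coefficients this reduces to the familiar TOPSIS-style closeness r⁺ / (r⁺ + r⁻).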
In this embodiment, specifically, taking [formula], the calculation yields:
[formula]
[formula]
that is, the darknet53 model is the optimal model.
The foregoing is merely exemplary and illustrative of the present invention; those skilled in the art may make modifications, additions or similar substitutions to the specific embodiments described without departing from the scope of the present invention.

Claims (4)

1. An evaluation method of a target classification model for identifying the model of an unmanned aerial vehicle, characterized in that the method is based on gray intuitive fuzzy numbers and specifically comprises the following steps:
step one, constructing a hierarchical analysis structure influencing target classification model evaluation:
the hierarchical analysis structure comprises an index layer and a factor layer; the factor set influencing the final evaluation of the model is [formula]; the index layer comprises the classification accuracy and the model inference speed, and the factor set comprises the classification accuracy of each classification model for the different classification categories [formula] and the inference time of each model [formula];
step two, determining the target classification models to be evaluated; the classification models form the scheme set [formula];
step three, calculating the weight vector of the gray intuitive fuzzy evaluation model [formula]:
[formula] (1)
wherein [formula] denotes the weight of the [formula]-th evaluation factor, [formula] denotes [formula], and the weight part of [formula] is determined using the analytic hierarchy process;
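Step three relies on the analytic hierarchy process for part of the weight vector, but the pairwise comparison matrix is not given in this excerpt. A sketch of the standard principal-eigenvector AHP computation, using a hypothetical 2x2 judgment matrix in which classification accuracy is judged three times as important as inference speed (an illustrative assumption):

```python
import numpy as np

def ahp_weights(pairwise: np.ndarray) -> np.ndarray:
    """AHP weights: take the principal eigenvector of the pairwise-comparison
    matrix and normalize it to sum to 1."""
    vals, vecs = np.linalg.eig(pairwise)
    principal = np.real(vecs[:, np.argmax(np.real(vals))])
    return principal / principal.sum()  # normalization also fixes the sign

# Hypothetical judgment: accuracy vs. inference speed, 3:1 preference.
A = np.array([[1.0, 3.0],
              [1.0 / 3.0, 1.0]])
w = ahp_weights(A)  # accuracy receives the larger weight
```

For a consistent 2x2 matrix with ratio 3:1, the weights come out as 0.75 and 0.25.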
step four, calculating the evaluation matrix of the gray intuitive fuzzy evaluation model [formula]:
[formula] (2)
the gray intuitive fuzzy evaluation matrix is defined on the space [formula], where [formula] represents the factor set and [formula] represents the scheme set; [formula] represents an intuitive fuzzy number, [formula] represents the gray scale of the point, and the evaluation matrix is formed by the gray intuitive fuzzy numbers [formula];
the method for determining the intuitionistic fuzzy number comprises the following steps: given a
Figure 789112DEST_PATH_IMAGE020
Each evaluation subject gives the scoring value of each scheme under each factor to obtain
Figure 481125DEST_PATH_IMAGE020
A scoring matrix, each score being followed byCarrying out dimensionless transformation on the submatrix, dividing the scoring interval after the dimensionless transformation into three subsets which respectively represent three attitudes of subjective support, neutral and objection,
Figure 483716DEST_PATH_IMAGE020
the probability that the group score falls within the three subsets is the membership of the scheme to that factor
Figure 194052DEST_PATH_IMAGE021
Degree of hesitation
Figure 358317DEST_PATH_IMAGE022
And degree of non-membership
Figure 599942DEST_PATH_IMAGE023
Step five, analyzing and evaluating matrix
Figure 78328DEST_PATH_IMAGE024
Screening out positive optimal solutions and negative optimal solutions, and calculating to obtain incidence matrixes of the candidate schemes and the positive optimal solutions and incidence matrixes of the candidate schemes and the negative optimal solutions, wherein the method specifically comprises the following steps:
the healthy and optimal solution is
Figure 456220DEST_PATH_IMAGE025
Wherein
Figure 243916DEST_PATH_IMAGE026
The negative optimal solution is
Figure 707259DEST_PATH_IMAGE027
Wherein
Figure 51652DEST_PATH_IMAGE028
the incidence matrix of the candidate schemes with the positive optimal solution is:
[formula]
the incidence matrix of the candidate schemes with the negative optimal solution is:
[formula]
the relative relevance of the [formula]-th candidate scheme to the positive optimal solution is:
[formula] (3)
the relative relevance of the [formula]-th candidate scheme to the negative optimal solution is:
[formula] (4)
wherein [formula] is called the gray-scale weight modulus:
[formula] (5)
[formula] (6)
[formula] denotes the degree of separation of the gray intuitive fuzzy numbers [formula] and [formula], and [formula] denotes the degree of separation of the gray intuitive fuzzy numbers [formula] and [formula];
the degree of separation of gray intuitive fuzzy numbers is calculated as follows: for two gray intuitive fuzzy numbers [formula], [formula] is called their degree of separation, wherein [formula] is the Hamming distance between intuitive fuzzy numbers;
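The claim cites the Hamming distance of intuitive fuzzy numbers without reproducing it (the formula is an omitted image). The standard definition from the intuitionistic-fuzzy literature, assumed here, averages the absolute differences of the membership, non-membership and hesitation components:

```python
def ifn_hamming(a, b):
    """Hamming distance between two intuitive fuzzy numbers, each given as a
    (membership, non-membership, hesitation) triple. Standard literature
    definition, assumed since the patent's formula is an omitted image."""
    return 0.5 * sum(abs(x - y) for x, y in zip(a, b))

d = ifn_hamming((0.7, 0.2, 0.1), (0.5, 0.3, 0.2))  # 0.5*(0.2+0.1+0.1) = 0.2
```

The distance is 0 for identical numbers and grows with the component-wise disagreement, which is what the degree-of-separation computation in steps five and six needs.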
step six, calculating the relative closeness [formula] of the [formula] candidate schemes:
[formula]
wherein [formula] and [formula] are preference coefficients, [formula]; the larger [formula] is, the better the candidate scheme.
2. The method according to claim 1, characterized in that in step three the weight part of [formula] is determined using the analytic hierarchy process.
3. The method according to claim 1, characterized in that in step four the gray intuitive fuzzy number [formula] is determined by a score function.
4. The method according to claim 1, characterized in that in step four the intuitive fuzzy numbers and the point gray scales are calculated as follows:
1) the intuitive fuzzy numbers for the classification accuracy of the different classification categories are calculated as follows:
the target classification model uses a Softmax function to map its output to a probability value P for each possible class; after the dimensionless transformation, for a class labeled 1 the result is still P, and for a class labeled 0 the result is 1-P; the transformed result falls within the interval [formula]; the interval [formula] is divided into three subsets [formula], [formula] and [formula], where [formula] represents that the classification result is wrong, [formula] represents that the classification result tends to be poor, and [formula] represents that the classification result is correct; by counting the classification results of each classification model on the test data set, the membership degree of [formula] with respect to the factor [formula] can be obtained as [formula], where [formula] represents the total number of test samples and [formula] represents the number of correctly classified samples, i.e. those whose output falls in the interval [formula]; the non-membership degree and the hesitation degree of the intuitive fuzzy number are calculated by the same method;
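The counting described in 1) can be sketched as follows. The two cut points 0.5 and 0.9, splitting [0, 1] into the wrong / poor-tendency / correct subsets, are illustrative assumptions: the actual subset boundaries are omitted formula images.

```python
def accuracy_ifn(scores, support_lo=0.9, oppose_hi=0.5):
    """Membership, hesitation and non-membership of a model on the accuracy
    factor, from its dimensionless Softmax scores on the test set. The cut
    points 0.9 and 0.5 are assumed, not given in the source."""
    total = len(scores)
    mu = sum(s >= support_lo for s in scores) / total  # classified correctly
    nu = sum(s < oppose_hi for s in scores) / total    # classified wrongly
    pi = 1.0 - mu - nu                                 # poor tendency
    return mu, pi, nu

# Hypothetical transformed scores for five test samples.
mu, pi, nu = accuracy_ifn([0.95, 0.92, 0.7, 0.3, 0.88])
```

The three components sum to 1 by construction, as a gray intuitive fuzzy number requires.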
2) the intuitive fuzzy numbers for the model inference speed are calculated as follows:
let the upper bound of the inference time of each model be [formula]; if the inference of model [formula] takes [formula], then the membership degree of model [formula] to the inference speed index is [formula] and the non-membership degree is [formula]:
[formula]
[formula]
wherein j = 1, …, n;
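The membership and non-membership formulas in 2) are omitted images. A plausible linear reading, assuming membership falls from 1 to 0 as the inference time approaches the stated upper bound (this is an assumption, not the patent's confirmed formula):

```python
def speed_ifn(t, t_max):
    """Membership and non-membership of a model to the inference-speed index,
    given its inference time t and the upper bound t_max. Linear form is an
    assumption; the patent's formulas are omitted images."""
    mu = 1.0 - t / t_max  # faster model -> higher membership
    nu = t / t_max        # slower model -> higher non-membership
    return mu, nu

mu, nu = speed_ifn(12.0, 40.0)  # hypothetical: 12 ms inference, 40 ms bound
```

Under this reading the hesitation degree is zero, consistent with the speed index being treated as fully observable (its point gray scale is 0 in 4) below).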
3) the point gray scales for the classification accuracy of the different classification categories are calculated as follows:
the test data set is counted and divided into a known-scene data set and an unknown-scene data set; the types influencing the completeness of the sample information are divided into: the number of cameras that captured the test data set, the number of shooting scenes under each camera, and the number of samples under each shooting scene, with expected values [formula], [formula] and [formula] respectively; the expected values [formula], [formula] and [formula] are not less than the corresponding maximum statistical values of the test data;
on the test data set, for each of the different categories [formula], the number of samples of model [formula] whose scene distribution participated in training is [formula], and the number of samples whose scene distribution did not participate in training is [formula]; then the point gray scale of model [formula] for the factor [formula] is as follows:
[formula]
wherein i = 1, …, m-1; j = 1, …, n;
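The point-gray-scale expression in 3) is an omitted image. A heavily simplified sketch of the underlying idea, assuming the gray scale is just the share of test samples drawn from scene distributions that did not participate in training (the real expression also involves the expected camera/scene/sample counts, which are not recoverable here):

```python
def point_gray(n_same, n_diff):
    """Illustrative point gray scale for one class: the fraction of test
    samples whose shooting-scene distribution was unseen in training. The
    parameter names and this ratio are assumptions; the patent's exact
    formula is an omitted image."""
    total = n_same + n_diff
    return 0.0 if total == 0 else n_diff / total

g = point_gray(n_same=180, n_diff=20)  # hypothetical sample counts
```

The intent matches the surrounding text: the more unfamiliar-scene data the evaluation rests on, the grayer (less certain) the accuracy evaluation becomes.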
4) the point gray scale of the model inference speed index is 0, i.e. [formula].
CN202210765323.4A 2022-07-01 2022-07-01 Evaluation method of target classification model for identifying model of unmanned aerial vehicle Pending CN114841082A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210765323.4A CN114841082A (en) 2022-07-01 2022-07-01 Evaluation method of target classification model for identifying model of unmanned aerial vehicle


Publications (1)

Publication Number Publication Date
CN114841082A true CN114841082A (en) 2022-08-02

Family

ID=82574813


Country Status (1)

Country Link
CN (1) CN114841082A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116578913A (en) * 2023-03-31 2023-08-11 中国人民解放军陆军工程大学 Reliable unmanned aerial vehicle detection and recognition method oriented to complex electromagnetic environment
CN116578913B (en) * 2023-03-31 2024-05-24 中国人民解放军陆军工程大学 Reliable unmanned aerial vehicle detection and recognition method oriented to complex electromagnetic environment


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20220802