CN113989553B - Evidence modeling method and system based on mixed sample density estimation for image pattern classification


Info

Publication number
CN113989553B
CN113989553B
Authority
CN
China
Prior art keywords
mixed
probability density
image
focal
samples
Prior art date
Legal status
Active
Application number
CN202111243672.1A
Other languages
Chinese (zh)
Other versions
CN113989553A
Inventor
韩德强
董博
李威
严恒良
Current Assignee
Second Research Institute of CASIC
Xi'an Jiaotong University
Original Assignee
Second Research Institute of CASIC
Xi'an Jiaotong University
Priority date
Filing date
Publication date
Application filed by Second Research Institute of CASIC and Xi'an Jiaotong University
Priority to CN202111243672.1A
Publication of CN113989553A
Application granted
Publication of CN113989553B
Legal status: Active


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/24 - Classification techniques
    • G06F 18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2415 - Classification techniques based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus false rejection rate
    • G06F 18/25 - Fusion techniques
    • G06F 18/253 - Fusion techniques of extracted features
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks
    • G06N 7/00 - Computing arrangements based on specific mathematical models
    • G06N 7/01 - Probabilistic graphical models, e.g. probabilistic networks
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 - Geometric image transformations in the plane of the image
    • G06T 3/40 - Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4038 - Image mosaicing, e.g. composing plane images from plane sub-images


Abstract

The invention discloses an evidence modeling method and system for image pattern classification based on mixed sample density estimation. The method comprises the following steps: acquiring one-dimensional vector image features of stitched images; constructing the mixed samples belonging to each mixed focal element; estimating, from the constructed mixed samples, the probability density function of each mixed focal element on every attribute as well as the probability density functions of the singleton focal elements; and, from the obtained probability density functions, computing the probability density value of the sample to be modeled under each focal element's probability density function and normalizing these values over all focal elements to generate a basic belief assignment function. The method can generate BBA functions for mixed focal elements directly, model mixed focal elements more intuitively, and more effectively improve performance when applied to image recognition and classification based on evidence combination.

Description

Evidence modeling method and system based on mixed sample density estimation for image pattern classification
Technical Field
The invention belongs to the technical field of pattern classification, relates to evidence modeling for mixed focal elements, and in particular relates to an evidence modeling method and system for image pattern classification based on mixed sample density estimation.
Background
Pattern classification is the process of using a computer to mathematically process information characterizing a thing or phenomenon so as to classify and interpret it. As an important research area of information science, pattern classification is widely used in computer vision, natural language processing, data mining, biomedicine, and other fields.
The conventional pipeline for image pattern classification comprises digitizing and preprocessing the image information, extracting features, designing a classifier, and evaluating it; common approaches include statistical, structural, fuzzy, and intelligent recognition methods. As research on image pattern classification has deepened, more and more capable classification algorithms have appeared, such as decision trees, BP neural networks, and support vector machines, which have considerably raised the level of image pattern classification. However, owing to the limitations of a single sensor, a single feature fed to a classifier is often not sufficient for the classification task, and the result carries a degree of uncertainty; that is, the class membership of a thing or phenomenon cannot be described precisely.
To address this, pattern classification based on evidence combination draws on the idea of information fusion: evidence is modeled separately for the different features provided by multi-source information, the resulting pieces of evidence are fused by combination rules, and the uncertainty of single-feature classification is eliminated or suppressed through the complementary integration of the information carried by the features, turning weak evidence into strong evidence and improving recognition accuracy. The theoretical basis of this approach is Dempster-Shafer (D-S) evidence theory.
Dempster-Shafer (D-S) evidence theory is an important mathematical framework for uncertainty modeling and reasoning, applied in information fusion, pattern classification, multi-attribute decision making, and other fields. Within the D-S framework, generating the basic belief assignment (BBA) function corresponds to uncertainty modeling. A BBA function is essentially a set-valued random variable, so its generation is essentially a distribution-modeling (i.e., evidence modeling) problem for a set-valued random variable, and BBA generation therefore remains a difficult open problem. The BBA functions produced for different sensors or different features can be fused with Dempster's combination rule to obtain a final result, so whether the BBA functions are generated reasonably, i.e., how the evidence is modeled, directly affects the accuracy of the subsequent fusion and classification decisions.
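As background for the fusion step mentioned above, Dempster's combination rule can be sketched as follows. This is a minimal illustration (the function name, the dict-of-frozensets representation, and the example classes "car" and "plane" are choices made here, not taken from the patent): mass on non-empty intersections is accumulated, and the conflicting mass on the empty set is removed by normalization.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two BBAs (dicts mapping frozenset focal elements to masses)
    with Dempster's rule: conjunctive combination plus conflict normalization."""
    combined = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb  # mass that would fall on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    # Normalize by 1 - K, where K is the total conflicting mass
    return {fe: w / (1.0 - conflict) for fe, w in combined.items()}

# Two evidence sources over the (hypothetical) classes {car, plane}
m1 = {frozenset({"car"}): 0.6, frozenset({"car", "plane"}): 0.4}
m2 = {frozenset({"plane"}): 0.3, frozenset({"car", "plane"}): 0.7}
m = dempster_combine(m1, m2)
```

Here the conflicting mass is K = 0.6 × 0.3 = 0.18, and the remaining products are renormalized by 1 − K, so the combined masses again sum to one.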
Among existing evidence modeling methods, the triangular-fuzzy-number approach first constructs a triangular fuzzy number model for the class corresponding to each singleton focal element, then computes similarity measures between fuzzy numbers and normalizes them to generate the evidence function; the similarity of a mixed focal element is constructed only indirectly, from the intersection of the singleton models. The bell-shaped-function approach uses bell-shaped functions as models for similarity evaluation, obtains the similarity of mixed focal elements from the intersection of the singleton models, and then models the evidence of the mixed focal elements. The interval-number approach measures the similarity between interval numbers with interval-number distances, from which the basic probability assignment of each proposition is obtained. A key advantage of evidence functions is their ability to describe uncertainty through mixed propositions, i.e., mixed focal elements, so modeling the uncertainty of mixed focal elements is of great importance.
Existing evidence modeling methods model singleton focal elements directly and obtain the evidence function of composite focal elements only indirectly, by heuristics applied to the singleton assignments, which lacks objectivity and intuitiveness in describing classification uncertainty. Although the interval-number method derives interval distances and evidence functions for mixed focal elements, its description of mixed focal elements in fact rests on a uniform-distribution assumption and cannot describe classification uncertainty objectively and accurately. Reasonable generation of composite focal elements and their evidence functions is crucial to fully exploiting the advantage of D-S evidence theory, namely its capacity to represent ambiguity; yet existing methods cannot model mixed focal elements directly, which hampers an intuitive representation of their ambiguity. In summary, a new evidence modeling method and system for image pattern classification based on mixed sample density estimation is needed.
Disclosure of Invention
The invention aims to provide an evidence modeling method and system for image pattern classification based on mixed sample density estimation, so as to solve one or more of the above technical problems. Addressing the shortcoming that existing evidence modeling methods do not model mixed focal elements directly, the invention provides a novel evidence modeling method that can generate BBA functions for mixed focal elements directly, model mixed focal elements more intuitively, and more effectively improve performance when applied to image recognition and classification based on evidence combination.
In order to achieve the above purpose, the invention adopts the following technical scheme:
the invention discloses an evidence modeling method for image pattern classification based on mixed sample density estimation, comprising the following steps:
acquiring multi-source images, classifying them into image categories, stitching the images of each category together, and extracting one-dimensional vector image features from the stitched images;
defining, on the basis of the one-dimensional vector image features, each image category as a singleton focal element and each set of several image categories as a mixed focal element, and constructing the mixed samples belonging to each mixed focal element through an intersection or union operation over all samples of the image categories that the mixed focal element contains;
estimating, from the constructed mixed samples, the probability density function of each mixed focal element on every attribute with a Gaussian mixture model or a generative adversarial network, and estimating the probability density functions of the singleton focal elements with a single Gaussian model or a generative adversarial network;
computing, from the obtained probability density functions, the probability density value of the sample to be modeled under each focal element's probability density function, and normalizing these values over all focal elements to generate a basic belief assignment function, thereby completing the evidence modeling.
In a further refinement of the method, acquiring the multi-source images, classifying them into image categories, stitching the images of each category, and extracting the one-dimensional vector image features specifically comprises:
acquiring multi-source images with three image sensors, namely an optical, an infrared, and a radar image sensor; classifying the multi-source images into preset image categories; stitching the images within each category to obtain stitched images; and extracting deep abstract features from the stitched images with a convolutional neural network model and converting the image features into one-dimensional vector image features.
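The stitching-and-flattening step can be sketched as below. This is a simplified stand-in (function name and array shapes are illustrative): the patent passes the stitched image through a CNN and flattens the deep features, whereas here the stitched pixels themselves are flattened to show the data flow.

```python
import numpy as np

def stitch_and_flatten(optical, infrared, radar):
    """Stitch same-class images from the three sensors side by side and
    flatten to a 1-D vector. In the patented pipeline the stitched image
    would first go through a pretrained CNN; this sketch flattens the
    stitched image directly to illustrate the shape transformation."""
    stitched = np.concatenate([optical, infrared, radar], axis=1)
    return stitched.reshape(-1)

# Three 4x4 single-channel images -> one 1-D feature vector of length 48
v = stitch_and_flatten(np.zeros((4, 4)), np.ones((4, 4)), np.full((4, 4), 2.0))
```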
In a further refinement of the method, the preset image categories comprise a preset number of all-vehicle categories, a preset number of all-aircraft categories, and a preset number of mixed vehicle-aircraft categories.
In a further refinement of the method, when constructing the mixed samples belonging to each mixed focal element through the intersection or union operation over all samples of the image categories it contains:
the intersection operation classifies all samples of the categories contained in the mixed focal element with a basic Bayesian-decision classifier and takes the misclassified samples as the mixed samples;
the union operation takes all samples of the categories contained in the mixed focal element as the mixed samples.
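The two constructions above can be sketched as follows, assuming a simple Gaussian naive-Bayes classifier as the "Bayesian decision" basic classifier (the function names and the diagonal-Gaussian simplification are choices made here, not specified by the patent).

```python
import numpy as np

def gaussian_bayes_predict(X, means, stds, priors):
    """Bayesian decision with per-class diagonal Gaussians: return the
    index (into the provided class list) of the most probable class."""
    ll = np.stack([
        -0.5 * np.sum(((X - m) / s) ** 2 + np.log(2 * np.pi * s ** 2), axis=1)
        + np.log(p)
        for m, s, p in zip(means, stds, priors)
    ], axis=1)
    return np.argmax(ll, axis=1)

def build_mixed_samples(X, y, classes, mode="intersection"):
    """Construct the mixed-sample set for the mixed focal element formed
    by `classes`, e.g. (0, 1) for {theta_1, theta_2}."""
    mask = np.isin(y, classes)
    Xs, ys = X[mask], y[mask]
    if mode == "union":
        # union operation: every sample of the member classes is a mixed sample
        return Xs, ys
    # intersection operation: keep only samples the Bayes classifier confuses
    means = [Xs[ys == c].mean(axis=0) for c in classes]
    stds = [Xs[ys == c].std(axis=0) + 1e-9 for c in classes]
    priors = [np.mean(ys == c) for c in classes]
    pred = np.array(classes)[gaussian_bayes_predict(Xs, means, stds, priors)]
    wrong = pred != ys
    return Xs[wrong], ys[wrong]

# Two heavily overlapping 1-D classes: the intersection set is the confusable subset
rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(0.0, 1.0, (200, 1)), rng.normal(0.5, 1.0, (200, 1))])
y = np.array([0] * 200 + [1] * 200)
Xu, _ = build_mixed_samples(X, y, (0, 1), mode="union")
Xi, _ = build_mixed_samples(X, y, (0, 1), mode="intersection")
```

As the patent notes, the intersection set is a strict subset of the union set, so with few training samples the union operation is the safer choice.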
In a further refinement of the method, when estimating the probability density function of each mixed focal element on every attribute with a Gaussian mixture model or a generative adversarial network, and the probability density functions of the singleton focal elements with a single Gaussian model or a generative adversarial network:

For a singleton focal element $\{\theta_i\}$, the single Gaussian model is

$$f_{ij}(x)=\frac{1}{\sqrt{2\pi}\,\sigma_{ij}}\exp\!\left(-\frac{(x-\mu_{ij})^2}{2\sigma_{ij}^2}\right),$$

where $f_{ij}$ denotes the probability density function of $\{\theta_i\}$ on attribute $j$, and $\mu_{ij}$ and $\sigma_{ij}$ are the mean and standard deviation of the class-$i$ samples on attribute $j$.

For a mixed focal element $H$, the Gaussian mixture model is

$$f_{Hj}(x)=\sum_{t}\pi_t\,f_{tj}(x),$$

where $f_{Hj}$ denotes the probability density function of $H$ on attribute $j$, $f_{tj}$ is the Gaussian model of class $t$, and $\pi_t$ is its weight. The parameters $\mu_{tj}$, $\sigma_{tj}$, and $\pi_t$ are estimated as

$$\pi_t=\frac{n_t}{N},\qquad \mu_{tj}=\frac{1}{n_t}\sum_{k=1}^{n_t}x^{(k)}_{tj},\qquad \sigma_{tj}^2=\frac{1}{n_t}\sum_{k=1}^{n_t}\left(x^{(k)}_{tj}-\mu_{tj}\right)^2,$$

where $n_t$ is the number of class-$t$ samples among the mixed samples of $H$, $N$ is the total number of mixed samples of $H$, and $x^{(k)}_{tj}$ is the value on attribute $j$ of the $k$-th mixed sample belonging to class $t$.
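The mixture fit above has a closed form because the component memberships are given by the class labels (no EM iteration is needed). A minimal sketch, with illustrative function names:

```python
import numpy as np

def fit_mixture_per_attribute(x, labels):
    """Fit the class-indexed Gaussian mixture for one attribute j:
    pi_t = n_t / N, with each component a Gaussian fit to the class-t
    values of the mixed samples."""
    classes = np.unique(labels)
    N = len(x)
    params = []
    for t in classes:
        xt = x[labels == t]
        params.append((len(xt) / N,        # pi_t
                       xt.mean(),          # mu_tj
                       xt.std() + 1e-9))   # sigma_tj
    return params

def mixture_pdf(x, params):
    """Evaluate f_Hj(x) = sum_t pi_t * N(x; mu_tj, sigma_tj)."""
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    for pi_t, mu, sigma in params:
        out += pi_t * np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    return out

# Mixed samples drawn from two classes; the weights follow the class counts
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2.0, 0.5, 300), rng.normal(2.0, 0.5, 100)])
labels = np.array([0] * 300 + [1] * 100)
params = fit_mixture_per_attribute(x, labels)
```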
In a further refinement of the method, when estimating the probability density functions with a generative adversarial network:

The generative adversarial network model consists of a generative model $G$ and a discriminative model $D$, with objective function

$$\min_G\max_D V(D,G)=\mathbb{E}_{x\sim p_{\mathrm{data}}(x)}[\log D(x)]+\mathbb{E}_{z\sim p_z(z)}[\log(1-D(G(z)))],$$

where $x$ is real data sampled from the distribution $p_{\mathrm{data}}(x)$, $G(z)$ is fake data generated from the prior distribution $p_z(z)$, and $\mathbb{E}(\cdot)$ denotes expectation.
In a further refinement of the method, when computing the probability density value of the sample to be modeled under each focal element's probability density function and normalizing over all focal elements to generate the basic belief assignment function:

The normalization that generates the basic belief assignment function is

$$m_j(C)=\frac{f_{Cj}(x_j)}{\sum_{C'}f_{C'j}(x_j)},$$

where $m_j(C)$ is the BBA of focal element $C$ on attribute $j$, $x_j$ is the value of the test sample on attribute $j$, and $f_{Cj}(x_j)$ is the probability density value of $x_j$ under the PDF corresponding to focal element $C$.
The invention also discloses an evidence modeling system for image pattern classification based on mixed sample density estimation, comprising:
an image and feature acquisition module for acquiring multi-source images, classifying them into image categories, stitching the images of each category, and extracting one-dimensional vector image features from the stitched images;
a mixed sample acquisition module for defining, on the basis of the one-dimensional vector image features, each image category as a singleton focal element and each set of several image categories as a mixed focal element, and for constructing the mixed samples belonging to each mixed focal element through an intersection or union operation over all samples of the image categories it contains;
a probability density function acquisition module for estimating, from the constructed mixed samples, the probability density function of each mixed focal element on every attribute with a Gaussian mixture model or a generative adversarial network, and for estimating the probability density functions of the singleton focal elements with a single Gaussian model or a generative adversarial network;
a basic belief assignment function acquisition module for computing, from the obtained probability density functions, the probability density value of the sample to be modeled under each focal element's PDF, and for normalizing these values over all focal elements to generate a basic belief assignment function, completing the evidence modeling.
Compared with the prior art, the invention has the following beneficial effects:
Addressing the shortcoming that existing evidence modeling methods do not model mixed focal elements directly, the invention provides a novel evidence modeling method for image pattern classification based on mixed sample density estimation that generates BBA functions for mixed focal elements directly, models mixed focal elements more intuitively, and more effectively improves performance when applied to image pattern classification based on evidence combination. Specifically, the method constructs the mixed samples corresponding to each mixed focal element through an intersection or union operation and then estimates the probability density function of each focal element with a Gaussian mixture model or a generative adversarial network. The method can guide the design of pattern classification systems based on evidence combination and improve their classification accuracy.
The system of the invention likewise generates the evidence functions of mixed focal elements directly, models them more intuitively, and more effectively improves performance when applied to image pattern classification based on evidence combination.
Drawings
To illustrate the embodiments of the invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It will be apparent to those of ordinary skill in the art that the drawings in the following description show only some embodiments of the invention, and that other drawings may be derived from them without inventive effort.
FIG. 1 is a schematic illustration of a vehicle image sample obtained by an optical image sensor in an embodiment of the present invention;
FIG. 2 is a schematic illustration of a vehicle image sample obtained by an infrared image sensor in an embodiment of the present invention;
FIG. 3 is a schematic illustration of a vehicle image sample obtained by a radar image sensor in an embodiment of the present invention;
FIG. 4 is a schematic flow chart of probability density estimation and BBA function generation based on a Gaussian mixture model in an embodiment of the invention;
FIG. 5 is a schematic diagram of an architecture of a generated countermeasure network in an embodiment of the invention;
FIG. 6 is a schematic flow chart of probability density estimation and BBA function generation based on a generative adversarial network in an embodiment of the invention.
Detailed Description
To help those skilled in the art better understand the present invention, the technical solutions in the embodiments of the invention are described clearly and completely below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, embodiments of the invention. All other embodiments obtained by those skilled in the art from these embodiments without inventive effort shall fall within the scope of the invention.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present invention and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The invention is described in further detail below with reference to the attached drawing figures:
the embodiment of the invention discloses an evidence modeling method for image pattern classification based on mixed sample density estimation, comprising the following steps:
Step 1: acquire multi-source images with optical, infrared, and radar image sensors; the image categories comprise all vehicles (5 classes), all aircraft (5 classes), and the mixed recognition of both (10 classes). Stitch the three images of the same category together, extract deep abstract features from the stitched image with a convolutional neural network (CNN) model, and convert the image features into one-dimensional vectors.
Step 2: based on the image features obtained in Step 1, define each category as a singleton focal element and each set of several category elements as a mixed focal element (a multi-category set proposition) representing samples whose class membership is uncertain; construct the mixed samples belonging to each mixed focal element through an intersection or union operation over all samples of the categories its elements correspond to. The intersection operation classifies all samples of the categories contained in the mixed focal element with a basic classifier (Bayesian decision) and assigns the misclassified samples to the corresponding mixed sample set; the union operation assigns all samples of the categories contained in the mixed focal element to the mixed sample set.
Step 3: based on the mixed samples constructed in Step 2, estimate the probability density functions (PDFs) of each mixed focal element on every attribute with a Gaussian mixture model (GMM) or a generative adversarial network (GAN); estimate the PDFs of the singleton focal elements with a single Gaussian model or a GAN.
Step 4: based on the PDFs computed in Step 3, compute the probability density value of the test sample under each focal element's PDF, and normalize the values over all focal elements to generate a basic belief assignment (BBA) function, completing the evidence modeling.
Addressing the shortcoming that existing evidence modeling methods do not model mixed focal elements directly, the embodiment of the invention provides a novel evidence modeling method for image pattern classification based on mixed sample density estimation that can generate BBA functions for mixed focal elements directly, model mixed focal elements more intuitively, and more effectively improve performance when applied to image pattern classification based on evidence combination.
Based on the above embodiment, step 2 specifically comprises:
(1) Intersection operation
Taking the mixed focal element $\{\theta_1,\theta_2\}$ as an example: a Bayesian classification decision is made on all samples in the training set belonging to class 1 and class 2, and the misclassified samples are assigned to the mixed-class sample set corresponding to $\{\theta_1,\theta_2\}$.
(2) Union operation
All samples in the training set belonging to class 1 and class 2 are assigned to the mixed-class sample set corresponding to $\{\theta_1,\theta_2\}$; that is, the union operation assigns all class samples contained in the mixed focal element to the mixed class.
Either the intersection or the union operation directly constructs the mixed-class samples corresponding to a mixed focal element (multi-category set proposition), on the basis of which the mixed focal element can be modeled directly. When the number of samples is small, the intersection operation may yield too few mixed samples, making the PDF estimate for the mixed focal element inaccurate and degrading BBA generation and fusion-classification performance; in that case the union operation should be used to construct the mixed samples. When samples are plentiful, either operation may be used.
In the embodiment of the invention, directly modeling the mixed focal elements corresponding to classification ambiguity helps preserve classification uncertainty, avoids information loss, and lays the foundation for resolving that uncertainty in subsequent fusion.
In step 2 of the embodiment, the mixed-class samples corresponding to each mixed focal element (multi-category set proposition) are constructed directly through the intersection or union operation, on the basis of which the mixed focal element can be modeled directly.
In step 3 of the embodiment: (1) compared with a single Gaussian model, the Gaussian mixture model is finer and better fits the probability density distribution of the mixed samples corresponding to a mixed focal element (multi-category set proposition); (2) the generative adversarial network method optimizes the generator G and the discriminator D alternately and finally produces, through the generator G, samples that follow the real data distribution, from which the probability density of the mixed samples is estimated.
In step 4 of the embodiment, to convert probability density values into a basic belief assignment (BBA), the probability density values of the test sample for the focal elements are normalized.
Based on the above embodiment, step 3 specifically comprises:

For a singleton focal element $\{\theta_i\}$, the single Gaussian model is

$$f_{ij}(x)=\frac{1}{\sqrt{2\pi}\,\sigma_{ij}}\exp\!\left(-\frac{(x-\mu_{ij})^2}{2\sigma_{ij}^2}\right),$$

where $f_{ij}$ denotes the probability density function of $\{\theta_i\}$ on attribute $j$, and $\mu_{ij}$ and $\sigma_{ij}$ are the mean and standard deviation of the class-$i$ samples on attribute $j$.

For a mixed focal element $H$, the Gaussian mixture model is

$$f_{Hj}(x)=\sum_{t}\pi_t\,f_{tj}(x),$$

where $f_{Hj}$ denotes the probability density function of $H$ on attribute $j$, $f_{tj}$ is the Gaussian model of class $t$, and $\pi_t$ is its weight, with parameters estimated as

$$\pi_t=\frac{n_t}{N},\qquad \mu_{tj}=\frac{1}{n_t}\sum_{k=1}^{n_t}x^{(k)}_{tj},\qquad \sigma_{tj}^2=\frac{1}{n_t}\sum_{k=1}^{n_t}\left(x^{(k)}_{tj}-\mu_{tj}\right)^2,$$

where $n_t$ is the number of class-$t$ samples among the mixed samples of $H$, $N$ is the total number of mixed samples of $H$, and $x^{(k)}_{tj}$ is the value on attribute $j$ of the $k$-th mixed sample belonging to class $t$. Compared with a single Gaussian model, the Gaussian mixture model is more expressive and better fits the probability density distribution of the mixed samples corresponding to a mixed focal element (multi-category set proposition).
The GAN model consists of a generative model G and a discriminative model D. The generative model aims to learn the real data distribution and produce fake samples close to the real data; the discriminative model aims to distinguish real data from fake data. The optimization problem of the GAN can be described as a minimax problem, and the calculation expression of the objective function is

$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]$$

where $x$ denotes real data sampled from the distribution $p_{\mathrm{data}}(x)$, $z$ denotes noise sampled from the prior distribution $p_z(z)$ so that $G(z)$ is the fake data, and $\mathbb{E}(\cdot)$ denotes expectation.
In the method of the embodiment of the invention, an alternating optimization scheme is adopted so that the performance of the generator G and the discriminator D improves continuously through mutual competition and iterative optimization; finally the data generated by the generator G approaches the real data distribution, at which point a large number of samples are drawn from the generator G and the PDF of the real data is estimated by the histogram probability density estimation method.
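The final histogram step can be sketched as follows (a hedged illustration: training the GAN is out of scope here, so a batch of pre-drawn samples stands in for the trained generator's output, and the function name is an assumption):

```python
import numpy as np

def histogram_pdf(generated_samples, bins=50):
    """Estimate the real-data PDF from a large batch of generator samples
    via a density-normalized histogram; returns a callable density."""
    counts, edges = np.histogram(generated_samples, bins=bins, density=True)
    def pdf(x):
        # locate the bin containing x; zero density outside the support
        i = np.searchsorted(edges, x, side="right") - 1
        return float(counts[i]) if 0 <= i < len(counts) else 0.0
    return pdf
```

The returned callable plays the same role as the per-focal-element PDFs fitted by the Gaussian models: it is later evaluated at the test sample's attribute value.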
Based on the above embodiment, step 4 of the embodiment of the present invention specifically includes the following steps:
in order to convert the probability density values into a basic belief assignment (BBA), the probability density values of the test sample over all focal elements are normalized to generate the BBA function, whose specific calculation expression is

$$m_j(C) = \frac{p_j^{C}(x_j)}{\sum_{C'} p_j^{C'}(x_j)}$$

wherein $m_j(C)$ denotes the BBA function of focal element $C$ on attribute $j$, $x_j$ denotes the value of the test sample on attribute $j$, $p_j^{C}(x_j)$ denotes the probability density value of $x_j$ in the PDF corresponding to focal element $C$, and the sum runs over all focal elements $C'$.
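A hedged sketch of this normalization step (dictionary keys stand in for focal elements; the names and values are illustrative assumptions, not the patent's data):

```python
def densities_to_bba(density_values):
    """Normalize per-focal-element density values p_j^C(x_j) into a BBA m_j
    that sums to 1 over all focal elements."""
    total = sum(density_values.values())
    if total == 0.0:
        # no focal element supports the sample; return an all-zero assignment
        return {c: 0.0 for c in density_values}
    return {c: v / total for c, v in density_values.items()}
```

For instance, densities 0.3, 0.1 and 0.1 for the focal elements {θ1}, {θ2} and {θ1, θ2} normalize to masses 0.6, 0.2 and 0.2.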
Aiming at the shortcoming that existing evidence modeling methods do not model mixed focal elements directly, the embodiment of the invention provides a novel evidence modeling method based on mixed sample density estimation for image pattern classification. The method can directly generate the BBA function of a mixed focal element, models mixed focal elements more intuitively, and more effectively improves performance when applied to image pattern classification based on evidence combination. The invention first constructs the mixed samples corresponding to the mixed focal elements through the intersection or union operation, then estimates the probability density function of each focal element through a Gaussian mixture model or a generative adversarial network; the resulting image-pattern-classification-oriented evidence modeling system based on mixed sample density estimation can effectively improve image pattern classification performance. The method can guide the design of image pattern classification systems based on evidence combination and improve the accuracy of image pattern classification based on evidence combination.
The embodiment of the invention simulates image recognition and classification in realistic complex scenes. According to the image acquisition sensor, the images are divided into optical, infrared and radar images; the image categories are divided into all vehicles (5 categories), all aircraft (5 categories), and a partial mixture of the two (6 categories). 50 images of each category are obtained by rotating, scaling and transforming the originally acquired images. In the experiment, the three images of the same category are stitched together, deep abstract features are extracted with a convolutional neural network (CNN) model, and the image features are converted into one-dimensional vectors for the fusion classification experiment. For the GMM-based density estimation method, the fusion classification accuracy reaches 0.9160 on the vehicle dataset, 0.9020 on the aircraft dataset, and 0.8764 on the mixed dataset; for the GAN-based density estimation method, the fusion classification accuracy likewise reaches 0.9160 on the vehicle dataset, 0.9020 on the aircraft dataset, and 0.8764 on the mixed dataset.
The embodiment of the invention also provides an image-pattern-classification-oriented evidence modeling system based on mixed sample density estimation, which comprises:
The image acquisition and feature extraction module is used for acquiring the multi-source images and extracting feature vectors;
the mixed sample construction module is used for constructing mixed samples based on the intersection or union operation, characterizing the uncertainty represented by the mixed focal elements;
the probability density function calculation module is used for calculating and obtaining the probability density function of each focal element based on the mixed sample of each mixed focal element constructed by the mixed sample construction module;
the evidence function generation module is used for calculating the probability density values of the test sample based on the probability density functions of the focal elements obtained by the probability density function calculation module, and normalizing them to generate a basic belief assignment (BBA) function;
in the image acquisition and feature extraction module, multi-source images are obtained through three image acquisition sensors: optical, infrared and radar; the image categories are divided into all vehicles (5 categories), all aircraft (5 categories), and the mixed identification of the two (10 categories); the three images of the same category are stitched together, deep abstract features are extracted with a convolutional neural network (CNN) model, and the image features are converted into one-dimensional vectors; each category corresponds to a single-point focal element, while a set with multiple category elements is called a mixed focal element (a multi-category set proposition) and represents samples whose category attribution is uncertain;
in the mixed sample construction module, each category corresponds to a single-point focal element; a set with multiple category elements is called a mixed focal element (a multi-category set proposition) and represents samples whose category attribution is uncertain; the mixed samples belonging to a mixed focal element are constructed through the intersection or union operation among all samples of the categories corresponding to its elements: the intersection operation uses a basic classifier (Bayesian decision) to classify all samples of the categories contained in the mixed focal element and assigns the misclassified samples to the corresponding mixed sample, while the union operation classifies all samples of the categories contained in the mixed focal element as mixed samples.
In the probability density function calculation module, for a single-point focal element $\{\theta_i\}$, the calculation expression of the single Gaussian model is

$$p_j^{\theta_i}(x) = \frac{1}{\sqrt{2\pi}\,\sigma_{ij}}\exp\left(-\frac{(x-\mu_{ij})^2}{2\sigma_{ij}^2}\right)$$

wherein $p_j^{\theta_i}(x)$ denotes the probability density function of the single-point focal element $\{\theta_i\}$ on attribute $j$, and $\mu_{ij}$ and $\sigma_{ij}$ are respectively the mean and standard deviation of the class-$i$ samples on attribute $j$.
For a mixed focal element $H$, the calculation expression of the Gaussian mixture model is

$$p_j^{H}(x) = \sum_{t=1}^{|H|} \pi_t\,\mathcal{N}\big(x;\,\mu_{tj},\,\sigma_{tj}^2\big)$$

wherein $p_j^{H}(x)$ denotes the probability density function of the mixed focal element $H$ on attribute $j$, $\mathcal{N}(x;\mu_{tj},\sigma_{tj}^2)$ denotes the Gaussian model of class $t$, and $\pi_t$ denotes its weight; the parameters $\mu_{tj}$, $\sigma_{tj}$ and $\pi_t$ are calculated as

$$\pi_t = \frac{n_t}{N}, \qquad \mu_{tj} = \frac{1}{n_t}\sum_{k=1}^{n_t} x_{kj}^{(t)}, \qquad \sigma_{tj}^2 = \frac{1}{n_t}\sum_{k=1}^{n_t}\big(x_{kj}^{(t)} - \mu_{tj}\big)^2$$

wherein $n_t$ denotes the number of class-$t$ samples among the mixed samples corresponding to the mixed focal element $H$, $N$ denotes the total number of mixed samples corresponding to $H$, and $x_{kj}^{(t)}$ denotes the value of the $k$-th sample belonging to class $t$ on attribute $j$. Compared with a single Gaussian model, the Gaussian mixture model is more expressive and can better fit the probability density distribution of the mixed samples corresponding to a mixed focal element (a multi-category set proposition).
The GAN model consists of a generative model G and a discriminative model D. The generative model aims to learn the real data distribution and produce fake samples close to the real data; the discriminative model aims to distinguish real data from fake data. The optimization problem of the GAN can be described as a minimax problem, and the calculation expression of the objective function is

$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]$$

where $x$ denotes real data sampled from the distribution $p_{\mathrm{data}}(x)$, $z$ denotes noise sampled from the prior distribution $p_z(z)$ so that $G(z)$ is the fake data, and $\mathbb{E}(\cdot)$ denotes expectation; an alternating optimization scheme is adopted so that the performance of the generator G and the discriminator D improves continuously through mutual competition and iterative optimization, and finally the data generated by the generator G approaches the real data distribution, at which point a large number of samples are drawn from the generator G and the PDF of the real data is estimated by the histogram probability density estimation method.
In the evidence function generation module, in order to convert the probability density values into a basic belief assignment (BBA), the probability density values of the test sample over all focal elements are normalized to generate the BBA function, whose specific calculation expression is

$$m_j(C) = \frac{p_j^{C}(x_j)}{\sum_{C'} p_j^{C'}(x_j)}$$

wherein $m_j(C)$ denotes the BBA function of focal element $C$ on attribute $j$, $x_j$ denotes the value of the test sample on attribute $j$, $p_j^{C}(x_j)$ denotes the probability density value of $x_j$ in the PDF corresponding to focal element $C$, and the sum runs over all focal elements $C'$.
The embodiment of the invention discloses an evidence modeling method based on mixed sample density estimation for pattern classification, comprising the following steps:
(1) Image acquisition and feature extraction.
Multi-source images are obtained from three image acquisition sensors: optical, infrared and radar; the image categories are divided into all vehicles (5 categories), all aircraft (5 categories), and the mixed identification of the two (10 categories). The three images of the same category are stitched together, deep abstract features are extracted with a convolutional neural network (CNN) model, and the image features are converted into one-dimensional vectors. Each category corresponds to a single-point focal element; a set with multiple category elements is called a mixed focal element (a multi-category set proposition) and represents samples whose category attribution is uncertain.
(2) Construct the mixed samples based on the intersection or union operation.
Mixed samples belonging to a mixed focal element are constructed through the intersection or union operation among all samples of the categories corresponding to its elements: the intersection operation uses a basic classifier (Bayesian decision) to classify all samples of the categories contained in the mixed focal element and assigns the misclassified samples to the corresponding mixed sample; the union operation classifies all samples of the categories contained in the mixed focal element as mixed samples.

Intersection operation: taking the mixed focal element $\{\theta_1, \theta_2\}$ as an example, Bayesian decision classification is performed on all samples belonging to class 1 and class 2 in the training set, and the misclassified samples are assigned to the mixed-class sample corresponding to $\{\theta_1, \theta_2\}$.

Union operation: all samples belonging to class 1 and class 2 in the training set are assigned to the mixed-class sample corresponding to $\{\theta_1, \theta_2\}$; that is, the union operation classifies all class samples contained in the mixed focal element as the mixed class.
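A hedged sketch of the two constructions on a single attribute (the basic classifier is approximated here by an equal-prior Gaussian Bayes decision, and the data and function names are illustrative assumptions):

```python
import numpy as np

def bayes_decide(x, class_stats):
    """Equal-prior Bayes decision: pick the class with the highest
    Gaussian log-likelihood on this attribute."""
    def loglik(mu, sigma):
        return -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma)
    return max(class_stats, key=lambda c: loglik(*class_stats[c]))

def build_mixed_sample(class_samples, mode="intersection"):
    """Intersection: keep only the samples the Bayes decision misclassifies.
    Union: pool every sample of the classes in the mixed focal element."""
    if mode == "union":
        return np.concatenate(list(class_samples.values()))
    stats = {c: (s.mean(), s.std()) for c, s in class_samples.items()}
    return np.array([x for c, s in class_samples.items()
                     for x in s if bayes_decide(x, stats) != c])
```

The intersection variant yields a small mixed sample concentrated in the overlap region between classes, while the union variant yields the full pooled sample — matching the two constructions described above.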
(3) Estimate the evidence function based on a Gaussian mixture model or a generative adversarial network.
After the mixed samples of the mixed focal elements are obtained, probability density estimation is performed on the samples corresponding to each focal element through a Gaussian mixture model or a generative adversarial network, and the probability density values of the test sample are normalized to generate the evidence function;
the calculation expression of the Gaussian mixture model is

$$p_j^{H}(x) = \sum_{t=1}^{|H|} \pi_t\,\mathcal{N}\big(x;\,\mu_{tj},\,\sigma_{tj}^2\big)$$

wherein $p_j^{H}(x)$ denotes the probability density function of the mixed focal element $H$ on attribute $j$, $\mathcal{N}(x;\mu_{tj},\sigma_{tj}^2)$ denotes the Gaussian model of class $t$, and $\pi_t$ denotes its weight; the parameters $\mu_{tj}$, $\sigma_{tj}$ and $\pi_t$ are calculated as

$$\pi_t = \frac{n_t}{N}, \qquad \mu_{tj} = \frac{1}{n_t}\sum_{k=1}^{n_t} x_{kj}^{(t)}, \qquad \sigma_{tj}^2 = \frac{1}{n_t}\sum_{k=1}^{n_t}\big(x_{kj}^{(t)} - \mu_{tj}\big)^2$$

wherein $n_t$ denotes the number of class-$t$ samples among the mixed samples corresponding to the mixed focal element $H$, $N$ denotes the total number of mixed samples corresponding to $H$, and $x_{kj}^{(t)}$ denotes the value of the $k$-th sample belonging to class $t$ on attribute $j$.
Aiming at the shortcoming that existing evidence modeling methods do not model mixed focal elements directly, the embodiment of the invention provides a novel evidence modeling method based on mixed sample density estimation for image pattern classification, which can directly generate the BBA function of a mixed focal element, models mixed focal elements more intuitively, and more effectively improves performance when applied to image pattern classification based on evidence combination.
Referring to fig. 1, 2 and 3, the method of the embodiment of the present invention acquires three different kinds of images through optical, infrared and radar image acquisition sensors; the image categories are divided into all vehicles (5 categories), all aircraft (5 categories), and a partial mixture of the two (6 categories); 50 images of each category are obtained by rotating, scaling and transforming the originally acquired images; in the experiment, the three images of the same category are stitched together, deep abstract features are extracted with a convolutional neural network (CNN) model, and the image features are converted into one-dimensional vectors for the experiment.
Referring to fig. 4 and 6, the mixed samples of the mixed focal elements are constructed by the intersection or union method, the probability density is estimated with a Gaussian mixture model or a generative adversarial network, and the probability density values of the test sample are normalized to generate the evidence function.
Referring to fig. 5, the generative adversarial network model used in the method of the present invention consists of a generative model G and a discriminative model D; through alternating optimization, the performance of G and D improves continuously in mutual competition and iterative optimization, and finally the data generated by G approaches the real data distribution, at which point a large number of samples are drawn from G and the PDF of the real data is estimated by the histogram probability density estimation method.
The experiment in the embodiment of the invention uses two probability density estimation methods: the GMM-based method and the GAN-based method. The evidence fusion classification experiments on the three image datasets adopt four classification evaluation indices: accuracy, precision, recall and specificity. The pattern classification results based on evidence combination of the evidence modeling methods based on mixed sample density estimation on each dataset are shown in Table 1.
TABLE 1. Indices/% (mean ± std) of different evidence modeling methods on evidence fusion classification
As can be seen from Table 1, compared with the traditional evidence modeling method based on triangular membership functions, the evidence modeling methods based on Gaussian-mixture-model or GAN mixed sample density estimation achieve satisfactory fusion classification accuracy on each image dataset. The results show that the novel image-pattern-classification-oriented evidence modeling method based on mixed sample density estimation improves fusion classification performance.
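For context, the downstream evidence-combination step that consumes these BBA functions commonly uses Dempster's rule; the patent does not spell out its fusion operator, so the following is an assumed, minimal sketch in which frozensets stand in for focal elements:

```python
def dempster_combine(m1, m2):
    """Dempster's rule: combine two BBAs (dicts mapping frozenset focal
    elements to masses), renormalizing by the total conflict K."""
    combined, conflict = {}, 0.0
    for b, mb in m1.items():
        for c, mc in m2.items():
            inter = b & c
            if inter:
                combined[inter] = combined.get(inter, 0.0) + mb * mc
            else:
                conflict += mb * mc  # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: the two BBAs cannot be combined")
    return {a: v / (1.0 - conflict) for a, v in combined.items()}
```

Combining the per-attribute (or per-sensor) BBAs pairwise in this way yields the fused mass function on which the final classification decision is made.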
In the field of pattern classification, image pattern classification technology based on evidence combination can effectively improve classification performance. The evidence function has the advantage of describing uncertainty by introducing mixed propositions, i.e., mixed focal elements, so modeling the uncertainty of mixed focal elements is of great importance. However, traditional evidence modeling methods model mixed focal elements in an unintuitive way, and evidence fusion classification with the BBA functions they generate rarely yields ideal results. Therefore, the invention provides a novel evidence modeling method based on mixed sample density estimation for image pattern classification: first construct the mixed samples of the mixed focal elements, then estimate the probability density function of each focal element through a Gaussian mixture model or a generative adversarial network, and finally normalize the density values to generate the BBA function and complete the evidence modeling. The image-pattern-classification-oriented evidence modeling system constructed on this basis can effectively improve fusion classification performance.
In summary, the embodiment of the invention discloses an image-pattern-classification-oriented evidence modeling method based on mixed sample density estimation, comprising: step 1, obtaining multi-source images from three image acquisition sensors (optical, infrared and radar), stitching the three images of the same category, extracting deep abstract features with a convolutional neural network (CNN) model, and converting the image features into one-dimensional vectors; step 2, constructing the mixed samples belonging to each mixed focal element based on the intersection or union operation among all samples of the categories corresponding to its elements; step 3, based on the mixed samples constructed in step 2, estimating the probability density functions of the mixed focal elements on all attributes through a Gaussian mixture model or a generative adversarial network, and estimating the probability density functions of the single-point focal elements through a single Gaussian model or a generative adversarial network; step 4, based on the probability density functions obtained in step 3, calculating the probability density value of the test sample for each focal element and normalizing the values over all focal elements, thereby generating the BBA function and completing the evidence modeling. Aiming at the shortcoming that existing evidence modeling methods do not model mixed focal elements directly, the invention can directly generate the BBA function of a mixed focal element, models mixed focal elements more intuitively, and more effectively improves performance when applied to image pattern classification based on evidence combination.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above embodiments are only for illustrating the technical solution of the present invention and not for limiting the same, and although the present invention has been described in detail with reference to the above embodiments, one skilled in the art may make modifications and equivalents to the specific embodiments of the present invention, and any modifications and equivalents not departing from the spirit and scope of the present invention are within the scope of the claims of the present invention.

Claims (5)

1. An image-pattern-classification-oriented evidence modeling method based on mixed sample density estimation, characterized by comprising the following steps:
acquiring multi-source images, classifying the image categories, splicing the images classified into the same image category, and acquiring one-dimensional vector image characteristics of the spliced images;
defining each image category as a single-point focal element based on the one-dimensional vector image features, and defining a set of multiple image categories as a mixed focal element; constructing the mixed samples belonging to each mixed focal element through the intersection or union operation among all samples of the multiple image categories corresponding to that mixed focal element;
estimating the probability density functions of the mixed focal elements on all attributes through a Gaussian mixture model or a generative adversarial network based on the constructed mixed samples; estimating the probability density function of each single-point focal element through a single Gaussian model or a generative adversarial network;
based on the obtained probability density functions, calculating the probability density value of the sample to be modeled under the probability density function of each focal element, and normalizing the probability density values of the sample to be modeled over all focal elements to generate a basic belief assignment function, completing the evidence modeling;
wherein, in constructing the mixed samples belonging to each mixed focal element through the intersection or union operation among all samples of the multiple image categories corresponding to that mixed focal element,
the intersection operation classifies all samples of the image categories contained in the mixed focal element with a Bayesian-decision basic classifier and assigns the misclassified samples to the mixed sample;

the union operation classifies all samples of the image categories contained in the mixed focal element as the mixed sample;
wherein, in estimating the probability density functions of the mixed focal elements on all attributes through a Gaussian mixture model or a generative adversarial network based on the constructed mixed samples, and estimating the probability density function of each single-point focal element through a single Gaussian model or a generative adversarial network,
for a single-point focal element $\{\theta_i\}$, the calculation expression of the single Gaussian model is

$$p_j^{\theta_i}(x) = \frac{1}{\sqrt{2\pi}\,\sigma_{ij}}\exp\left(-\frac{(x-\mu_{ij})^2}{2\sigma_{ij}^2}\right)$$

wherein $p_j^{\theta_i}(x)$ denotes the probability density function of the single-point focal element $\{\theta_i\}$ on attribute $j$, and $\mu_{ij}$ and $\sigma_{ij}$ are respectively the mean and standard deviation of the class-$i$ samples on attribute $j$;
for a mixed focal element $H$, the calculation expression of the Gaussian mixture model is

$$p_j^{H}(x) = \sum_{t=1}^{|H|} \pi_t\,\mathcal{N}\big(x;\,\mu_{tj},\,\sigma_{tj}^2\big)$$

wherein $p_j^{H}(x)$ denotes the probability density function of the mixed focal element $H$ on attribute $j$, $\mathcal{N}(x;\mu_{tj},\sigma_{tj}^2)$ denotes the Gaussian model of class $t$, and $\pi_t$ denotes its weight;

the parameters $\mu_{tj}$, $\sigma_{tj}$ and $\pi_t$ are calculated as

$$\pi_t = \frac{n_t}{N}, \qquad \mu_{tj} = \frac{1}{n_t}\sum_{k=1}^{n_t} x_{kj}^{(t)}, \qquad \sigma_{tj}^2 = \frac{1}{n_t}\sum_{k=1}^{n_t}\big(x_{kj}^{(t)} - \mu_{tj}\big)^2$$

wherein $n_t$ denotes the number of class-$t$ samples among the mixed samples corresponding to the mixed focal element $H$, $N$ denotes the total number of mixed samples corresponding to $H$, and $x_{kj}^{(t)}$ denotes the value of the $k$-th sample belonging to class $t$ on attribute $j$;
wherein, in calculating the probability density value of the sample to be modeled under the probability density function of each focal element based on the obtained probability density functions, and normalizing the probability density values of the sample to be modeled over all focal elements to generate the basic belief assignment function and complete the evidence modeling,

the calculation expression for normalizing the probability density values of the sample to be modeled over all focal elements to generate the basic belief assignment function is

$$m_j(C) = \frac{p_j^{C}(x_j)}{\sum_{C'} p_j^{C'}(x_j)}$$

wherein $m_j(C)$ denotes the BBA function of focal element $C$ on attribute $j$, $x_j$ denotes the value of the sample to be modeled on attribute $j$, $p_j^{C}(x_j)$ denotes the probability density value of $x_j$ in the PDF corresponding to focal element $C$, and the sum runs over all focal elements $C'$.
2. The method for modeling evidence based on mixed sample density estimation for image pattern classification according to claim 1, wherein the steps of obtaining a multi-source image and classifying the image types, stitching the images classified into the same image type, and obtaining one-dimensional vector image features of the stitched image specifically comprise:
Acquiring multi-source images through three image acquisition sensors, namely an optical image acquisition sensor, an infrared image acquisition sensor and a radar image acquisition sensor, classifying the multi-source images according to preset image categories, and splicing images in the same image category to acquire spliced images; and extracting deep abstract features by using a convolutional neural network model based on the spliced images, and converting the image features into one-dimensional vector image features.
3. The image-pattern-classification-oriented evidence modeling method based on mixed sample density estimation of claim 2, wherein the preset image categories include a preset number of all-vehicle categories, a preset number of all-aircraft categories, and a preset number of vehicle-aircraft mixed categories.
4. The image-pattern-classification-oriented evidence modeling method based on mixed sample density estimation according to claim 1, characterized in that, in estimating the probability density functions of the mixed focal elements on all attributes through a Gaussian mixture model or a generative adversarial network based on the constructed mixed samples, and estimating the probability density function of each single-point focal element through a single Gaussian model or a generative adversarial network,
the generative adversarial network model consists of a generative model G and a discriminative model D, and the calculation expression of the objective function is

$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]$$

wherein $x$ denotes real data sampled from the distribution $p_{\mathrm{data}}(x)$, $z$ denotes noise sampled from the prior distribution $p_z(z)$ so that $G(z)$ is the fake data, and $\mathbb{E}(\cdot)$ denotes expectation.
5. An image-pattern-classification-oriented evidence modeling system based on mixed sample density estimation, comprising:
the image and feature acquisition module is used for acquiring multi-source images, classifying the image categories, splicing the images classified into the same image category, and acquiring one-dimensional vector image features of the spliced images;
the mixed sample acquisition module is used for defining each image category as a single-point focal element and defining a set of multiple image categories as a mixed focal element based on the one-dimensional vector image features, and for constructing the mixed samples belonging to each mixed focal element through the intersection or union operation among all samples of the multiple image categories corresponding to that mixed focal element;
the probability density function acquisition module is used for estimating the probability density functions of the mixed focal elements on all attributes through a Gaussian mixture model or a generative adversarial network based on the constructed mixed samples, and for estimating the probability density function of each single-point focal element through a single Gaussian model or a generative adversarial network;
the basic belief assignment function acquisition module is used for calculating the probability density value of the sample to be modeled under the probability density function of each focal element based on the obtained probability density functions, and normalizing the probability density values of the sample to be modeled over all focal elements to generate a basic belief assignment function, completing the evidence modeling;
wherein, in constructing the mixed samples belonging to each mixed focal element through the intersection or union operation among all samples of the multiple image categories corresponding to that mixed focal element,
the intersection operation classifies all samples of the image categories contained in the mixed focal element with a Bayesian-decision basic classifier and assigns the misclassified samples to the mixed sample;

the union operation classifies all samples of the image categories contained in the mixed focal element as the mixed sample;
wherein, in estimating the probability density functions of the mixed focal elements on all attributes through a Gaussian mixture model or a generative adversarial network based on the constructed mixed samples, and estimating the probability density function of each single-point focal element through a single Gaussian model or a generative adversarial network,
for a single-point focal element $\{\theta_i\}$, the calculation expression of the single Gaussian model is

$$p_j^{\theta_i}(x) = \frac{1}{\sqrt{2\pi}\,\sigma_{ij}}\exp\left(-\frac{(x-\mu_{ij})^2}{2\sigma_{ij}^2}\right)$$

wherein $p_j^{\theta_i}(x)$ denotes the probability density function of the single-point focal element $\{\theta_i\}$ on attribute $j$, and $\mu_{ij}$ and $\sigma_{ij}$ are respectively the mean and standard deviation of the class-$i$ samples on attribute $j$;
for the mixed focal element H, the calculation expression of the mixed Gaussian model is as follows,
wherein,probability density function representing mixed bin H on property j +.>Gaussian model, pi, representing class t t Representation->Weights of (2);
the parameters $f_{tj}$ and $\pi_t$ are computed as

$$\pi_t=\frac{n_t}{N},\qquad \mu_{tj}=\frac{1}{n_t}\sum_{s=1}^{n_t}x_{tj}^{(s)},\qquad \sigma_{tj}^{2}=\frac{1}{n_t}\sum_{s=1}^{n_t}\bigl(x_{tj}^{(s)}-\mu_{tj}\bigr)^{2}$$

where $n_t$ denotes the number of samples of class $t$ among the mixed samples corresponding to the mixed focal element $H$, $N$ denotes the total number of mixed samples corresponding to $H$, and $x_{tj}^{(s)}$ denotes the value on attribute $j$ of the $s$-th mixed sample belonging to class $t$;
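A minimal sketch of these two density estimators on a single attribute, with hypothetical names and plain-Python arithmetic (a stand-in for the claimed models, not the patented implementation):

```python
import math

def singleton_pdf(values):
    """Single Gaussian PDF for a singleton focal element on one attribute,
    parameterised by the sample mean and standard deviation."""
    n = len(values)
    mu = sum(values) / n
    sd = math.sqrt(sum((v - mu) ** 2 for v in values) / n) + 1e-9  # avoid sd == 0
    return lambda x: (math.exp(-(x - mu) ** 2 / (2 * sd ** 2))
                      / (math.sqrt(2 * math.pi) * sd))

def mixed_pdf(samples_by_class):
    """GMM for a mixed focal element H on one attribute: one Gaussian per
    member class t, weighted by pi_t = n_t / N."""
    N = sum(len(v) for v in samples_by_class.values())
    comps = [(len(v) / N, singleton_pdf(v)) for v in samples_by_class.values()]
    return lambda x: sum(w * f(x) for w, f in comps)
```

Because each component is fitted only to the mixed samples of its own class, the mixture weights reduce to the class proportions $n_t/N$, exactly as in the parameter expressions above.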
in computing, based on the obtained probability density functions, the probability density value of the sample to be modeled under the probability density function of each focal element, and normalizing the probability density values of the sample to be modeled over all focal elements to generate the basic belief assignment function and complete the evidence modeling:
the basic belief assignment obtained by normalizing the probability density values of the sample to be modeled over all focal elements is computed as

$$m_j(C)=\frac{f_{Cj}(x_j)}{\sum_{C'}f_{C'j}(x_j)}$$

where $m_j(C)$ denotes the BBA of focal element $C$ on attribute $j$, $x_j$ denotes the value of the test sample on attribute $j$, and $f_{Cj}(x_j)$ denotes the probability density value of $x_j$ under the PDF corresponding to focal element $C$.
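The normalization step itself is a one-liner. A sketch with hypothetical names, assuming each focal element's PDF on attribute $j$ is supplied as a Python callable:

```python
def bba_on_attribute(x_j, pdfs_j):
    """Normalize the PDF values of the test-sample value x_j over all focal
    elements to obtain the BBA m_j on attribute j.

    pdfs_j maps each focal element (e.g. a frozenset of class labels)
    to its PDF on attribute j."""
    density = {C: f(x_j) for C, f in pdfs_j.items()}
    total = sum(density.values())
    return {C: d / total for C, d in density.items()}
```

By construction the returned masses are non-negative and sum to one, so the result is a valid BBA over the chosen focal elements.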
CN202111243672.1A 2021-10-25 2021-10-25 Evidence modeling method and system based on mixed sample density estimation for image mode classification Active CN113989553B (en)

Publications (2)

Publication Number Publication Date
CN113989553A 2022-01-28
CN113989553B 2024-04-05

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant