CN107766933B - Visualization method for explaining a convolutional neural network
- Publication number: CN107766933B (application CN201711001423.5A, filed 2017-10-24, granted 2021-04-23)
- Authority: CN (China)
- Legal status: Active
Abstract
The invention relates to a visualization method for explaining a convolutional neural network, comprising the following steps: prepare a convolutional neural network model M and its training set S; extract all judgment conditions used by model M in its decision process; determine the semantics of each neuron from the degree of match between the semantics in the neuron and the human corpus, and generate understandable semantics for all judgment conditions; build a decision tree T and take its decision process as the decision process of model M; convert the decision tree T into a tree flow graph; produce a neuron semantic view; produce a neuron relation graph; produce a decision data flow graph; and construct an interactive visualization system.
Description
Technical Field
The invention relates to machine learning and visualization techniques, and in particular to a visualization method for interpreting deep convolutional neural networks.
Background
Machine learning has become one of the most effective tools for data analysis and has received a great deal of attention in both industry and academia. Although machine learning models are extremely effective, their opacity and lack of interpretability remain their most serious shortcomings. Viewed along the axes of interpretability and learning capacity, linear regression has the highest interpretability but the lowest learning capacity, whereas neural network models have the lowest interpretability and the highest learning capacity. At the same time, in industry, users who rely on neural networks for predictions need to understand how those networks make decisions; academically, researchers also want deeper knowledge of neural networks. The interpretation of neural networks has therefore received wide attention, and improved understanding of neural networks also benefits the development of deep learning.
To open the black box of the neural network, the methods proposed so far fall roughly into three types. The first fits the decision boundary of the neural network model with a highly interpretable model [1]; for example, linear regression is used to explain individual samples locally, but such methods explain only local samples and not the model as a whole. The second directly extracts decision rules from the neural network model with rule-based methods [2,3,4,5], where the rules may take forms such as IF-THEN. These methods can expose the decision process of the whole neural network model and are the focus of current research in the field. However, they are only suitable for shallow neural networks (a single hidden layer); once a deep neural network is encountered, the extracted rules become too complex for a human to analyze and understand, so the methods fail, and the complexity of the extracted rules grows further for convolutional neural networks. The third is the feature visualization of deep learning proposed in CN106909945A [6], but it only analyzes convolutional features qualitatively, which amounts to verifying that the features learned by a deep model progress from low level to high level; it cannot explain the decision process of the model. These methods therefore have a limited range of use and are not universal.
Reference documents:
[1] M. T. Ribeiro, S. Singh, and C. Guestrin. "Why should I trust you?": Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1135–1144. ACM, 2016.
[2] M. Craven and J. W. Shavlik. Using sampling and queries to extract rules from trained neural networks. In ICML, pp. 37–45, 1994.
[3] M. Craven and J. W. Shavlik. Extracting tree-structured representations of trained networks. In Advances in Neural Information Processing Systems, pp. 24–30, 1996.
[4] R. Krishnan, G. Sivakumar, and P. Bhattacharya. Extracting decision trees from trained neural networks. Pattern Recognition, 32(12), 1999.
[5] M. Sato and H. Tsukimoto. Rule extraction from neural networks via decision tree induction. In Neural Networks, 2001. Proceedings. IJCNN'01. International Joint Conference on, vol. 3, pp. 1870–1875. IEEE, 2001.
[6] Fu K. et al. Feature visualization and model evaluation method for deep learning. China, CN106909945A [P], 2017-06-30.
Disclosure of Invention
It is an object of the present invention to overcome the above-mentioned deficiencies of the prior art and to provide a visualization method for interpreting a convolutional neural network, so as to facilitate understanding of its decision process. The technical scheme is as follows:
A visualization method for interpreting a convolutional neural network, comprising the following steps:
Step 1: prepare a convolutional neural network model M and its training set S;
Step 2: extract all judgment conditions used by model M in its decision process, and use C = {c1, c2, ..., cn} to represent the set of all judgment conditions of model M;
Step 3: determine the semantics of each neuron from the degree of match between the semantics in the neuron and the human corpus, generate understandable semantics for all judgment conditions in the set C, and denote the result by the set C_semantics, where the matching degree is calculated as follows:
let S_c represent the semantic image of a class c input to the convolutional neural network, and let M_l represent the image pixels activated by the l-th neuron; the matching degree value between the neuron and the semantics is obtained by the following formula:
IoU(l, c) = |M_l ∩ S_c| / |M_l ∪ S_c|
the semantics comprise six categories, namely color, texture, material, scene, part, and object; the matching degree values for the six categories are calculated, and the semantics with the highest matching degree value represents the semantics of the neuron;
Step 4: use C_semantics to generate a decision tree T, and take the decision process of the decision tree T as the decision process of model M;
Step 5: convert the decision tree T into a tree flow graph to reduce the complexity of the decision tree T;
Step 6: produce the neuron semantic view V_semantic; each judgment condition in the tree flow graph V_tree-flow is associated with a group of semantically related pictures in the semantic view V_semantic;
Step 7: produce the neuron relation graph V_relation; project the judgment conditions of all neurons onto a two-dimensional plane according to their similarity in order to find wrongly labeled semantics, where the projection method is:
1) represent two neurons by x and y respectively, measure the similarity of x and y with a kernel function, and obtain a semantic clustering result after computing similarities for all neurons;
2) keep the semantic clustering result of the high-dimensional space, and redistribute all neurons on a two-dimensional plane using the t-distribution in the low-dimensional space;
Step 8: produce the decision data flow graph V_decision, understand the judgment conditions in a data-driven manner, and display evidence that the decisions of model M are reasonable or unreasonable;
Step 9: construct an interactive visualization system with V_tree-flow as the main view and V_semantic, V_relation, and V_decision as auxiliary views;
Step 10: analyze the decision process of model M with the system.
Preferably, in step 2, the method for extracting a judgment condition is as follows:
1) input a neuron and cluster it into a category according to its weight values;
2) set the weight values of the neuron to the mean of the weight values of that category;
3) eliminate categories that do not affect the output;
4) keep the weight values of the neuron fixed at the mean of the corresponding category, and search for the optimal bias using back propagation;
5) form the judgment condition corresponding to the neuron.
The transformation method in step 5 is as follows:
1) convert each node of the decision tree into a rectangle whose length corresponds to the amount of data at that node;
2) divide each node rectangle by category to obtain internal "category blocks";
3) connect the category blocks at the same positions in parent and child nodes with arcs in sequence, thereby converting the decision tree into a tree flow graph;
4) label the understandable semantics of each judgment condition on the tree flow graph.
The invention first selects the convolutional neural network model to be explained and the corresponding training data. Data preprocessing is then performed: a rule extraction technique extracts all judgment conditions used by the convolutional neural network in its decision process, human-comprehensible semantics are assigned to them, and a decision tree is generated. Finally, the decision tree is reorganized using visualization techniques, and several coordinated views are provided for analysis, making the decision process of the convolutional neural network easier to understand.
Drawings
FIG. 1 is a flow chart of the method of the present invention.
FIG. 2 shows the highly complex decision tree that is generated.
FIG. 3 shows the visualization system for understanding a convolutional neural network.
FIG. 4 shows the tree flow graph (a) and the neuron semantic view (b).
FIG. 5 shows the neuron relation graph.
FIG. 6 shows the decision data flow graph.
Detailed Description
The method provided by the invention comprises the following processing stages: model and data preparation, data preprocessing, and multi-view visualization.
1. Model and data preparation
The model and its data are the input to the visualization method. The model may be AlexNet, the more complex VGG16, and so on; the data is the data set used to train the model. Both can be obtained from open source libraries such as Caffe and TensorFlow. The model and its training data are used in the data preprocessing stage.
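As a concrete illustration of this preparation stage, the sketch below loads a pretrained VGG16 with TensorFlow/Keras; the framework, the ImageNet weights, and the placeholder image are assumptions for illustration, not requirements of the method.

```python
# A minimal sketch of the model/data preparation stage, assuming
# TensorFlow/Keras and ImageNet weights; any framework (e.g. Caffe)
# and any trained CNN would serve equally well.
import tensorflow as tf

# Model M: a pretrained convolutional neural network to be explained.
model = tf.keras.applications.VGG16(weights="imagenet")

# Training set S: a random tensor stands in for a real image here.
img = tf.random.uniform((1, 224, 224, 3))
preds = model(tf.keras.applications.vgg16.preprocess_input(img * 255.0))
print(preds.shape)  # (1, 1000): class scores over the ImageNet categories
```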
2. Data pre-processing
The purpose of preprocessing is to provide data for the visualization. It mainly comprises judgment condition extraction, semantic generation, and decision tree generation.
(1) Judgment condition extraction:
the appropriate decision condition form is selected according to the complexity of the model, and the form can be roughly divided into IF-THEN, M-of-N and Oblique. In the invention, the IF-THEN form which is most suitable for human understanding is selected, and the model is subjected to IF-THEN rule extraction, so that the following judgment conditions are generated:
if X < threshold1 then Y = Value1
if X ∈ [threshold1, threshold2] then Y = Value2
if X ∈ [threshold2, threshold3] then Y = Value3
else Y = Value4
The judgment conditions above are few in number and linear, but as model complexity increases, especially for deep learning models such as convolutional neural networks, the judgment conditions extracted to accurately partition the whole model space can reach several hundred megabytes. Without visualization techniques, the decision tree formed from these judgment conditions cannot be understood by humans at all.
The method for extracting the judgment conditions is as follows (a code sketch appears after the steps):
First step: input a neuron and cluster it into a category according to its weight values.
Second step: set the weight values of the neuron to the mean of that category.
Third step: eliminate categories that do not affect the output.
Fourth step: keep the weight values of the neuron fixed at the mean of the corresponding category, and find the optimal bias using back propagation.
Fifth step: form the judgment condition corresponding to the neuron.
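The following is a minimal sketch of this extraction loop, assuming k-means clustering over one layer's weight vectors; the cluster count, the pruning threshold, and the placeholder bias are illustrative assumptions, since the steps above do not fix these details.

```python
# A minimal sketch of judgment-condition extraction, assuming k-means
# over one layer's weight vectors; cluster count and the bias search
# are illustrative assumptions, not the patent's exact procedure.
import numpy as np
from sklearn.cluster import KMeans

def extract_conditions(weights, n_clusters=8):
    """weights: (n_neurons, n_inputs) weight matrix of one layer."""
    # Step 1: cluster neurons by their weight vectors.
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(weights)
    # Step 2: replace each neuron's weights with its cluster mean.
    quantized = km.cluster_centers_[km.labels_]
    # Step 3: drop clusters whose mean weight is negligible
    # (a stand-in for "categories that do not affect the output").
    keep = np.linalg.norm(km.cluster_centers_, axis=1) > 1e-3
    conditions = []
    for neuron, label in enumerate(km.labels_):
        if not keep[label]:
            continue
        # Steps 4-5: with weights frozen at the cluster mean, the optimal
        # bias would be refit by back propagation; a fixed value stands
        # in for that search here.
        bias = 0.0  # placeholder for the back-propagated optimal bias
        conditions.append((neuron, quantized[neuron], bias))
    return conditions

conds = extract_conditions(np.random.randn(64, 128))
print(len(conds), "judgment conditions extracted")
```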
(2) Semantic generation
The purpose of semantic generation is to give human-understandable semantics to all judgment conditions extracted in the first step.
In a convolutional neural network each neuron has only one value, yet it corresponds to a judgment condition. After the extraction in the first step is complete, judgment conditions of the form X >= 3.0 are obtained, where X stands for some semantic concept, such as a dog's nose, a vehicle tire, or a vehicle window. Therefore, to understand the decision process of the convolutional neural network, every X must be assigned human-understandable semantics.
In the invention, the semantics of a neuron is determined by the degree of match between the semantics in the neuron and the human corpus. Let S_c represent the semantic image of class c input to the convolutional neural network, and let M_l represent the image pixels activated by the l-th neuron. The IoU value of the neuron with respect to the semantics is obtained using equation (1):

IoU(l, c) = |M_l ∩ S_c| / |M_l ∪ S_c|    (1)
There are six categories of semantics in total: color, texture, material, scene, part, and object. The IoU values for the six categories are calculated, and the semantics with the highest IoU represents the semantics of the neuron.
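A minimal sketch of this matching follows, assuming binary masks for both the neuron activation M_l and the corpus annotation S_c; the random masks stand in for a real annotated corpus covering the six categories.

```python
# A minimal sketch of equation (1): IoU between the pixels activated by
# neuron l (M_l) and the annotated pixels of semantic concept c (S_c).
# The masks are random placeholders; a real annotated corpus is assumed.
import numpy as np

def iou(activation_mask, semantic_mask):
    """Both arguments are boolean arrays of the same image shape."""
    inter = np.logical_and(activation_mask, semantic_mask).sum()
    union = np.logical_or(activation_mask, semantic_mask).sum()
    return inter / union if union > 0 else 0.0

rng = np.random.default_rng(0)
M_l = rng.random((224, 224)) > 0.995   # pixels activated by neuron l
categories = ["color", "texture", "material", "scene", "part", "object"]
scores = {c: iou(M_l, rng.random((224, 224)) > 0.99) for c in categories}
best = max(scores, key=scores.get)
print("neuron semantics:", best, scores[best])
```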
(3) Decision tree generation
One of the models most suitable for human understanding is the decision tree. Using the judgment conditions and comprehensible semantics extracted in the previous two steps, the convolutional neural network, whose interpretability is very poor, can be represented as a decision tree, which greatly helps in understanding the convolutional network.
The process of generating the decision tree is as follows (a sketch of the attribute-selection step appears after the steps):
1) Generate outputs from the trained neural network on its data set by forward propagation and use them as the input of the decision tree. Assume the input is S = {(x1, y1), (x2, y2), ..., (xm, ym)} and the corresponding attribute set is A = {a1, a2, ..., ad}.
2) If the samples in S all belong to the same class C, mark the node as a leaf node with label C.
3) If S is empty, or the samples in S take identical values on A, mark the node as a leaf node whose label is the class with the most samples in S.
4) Select the optimal partition attribute a* according to cross entropy and information gain. For each value v of a*, execute the following loop:
first, generate a branch for the node, and let S_v denote the subset of samples in S whose value on a* is v;
if S_v is empty, mark the branch node as a leaf node whose label is the class with the most samples in S; otherwise, recurse from step 2) with attribute a* removed from the attribute set A.
5) Output the decision tree of the convolutional neural network model.
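The following is a minimal sketch of the attribute selection in step 4), assuming discrete attributes and plain information gain; the surrounding recursion and leaf labeling follow the steps above.

```python
# A minimal sketch of choosing the optimal partition attribute a* by
# information gain (entropy reduction), assuming discrete attributes.
# Full tree induction would wrap this in the recursion described above.
import numpy as np
from collections import Counter

def entropy(labels):
    counts = np.array(list(Counter(labels).values()), dtype=float)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

def best_attribute(samples, labels, attributes):
    """samples: list of dicts attribute -> value; returns the a* maximizing gain."""
    base = entropy(labels)
    gains = {}
    for a in attributes:
        gain = base
        for v in set(s[a] for s in samples):
            idx = [i for i, s in enumerate(samples) if s[a] == v]
            gain -= len(idx) / len(samples) * entropy([labels[i] for i in idx])
        gains[a] = gain
    return max(gains, key=gains.get)

samples = [{"sem1": 0, "sem2": 1}, {"sem1": 1, "sem2": 1}, {"sem1": 0, "sem2": 0}]
labels = ["cat", "dog", "cat"]
print("a* =", best_attribute(samples, labels, ["sem1", "sem2"]))
```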
3. Multi-view visualization
After data preprocessing is complete, the generated decision tree could in principle be analyzed, but it is far too complex. Taking VGG16 as an example, the printed result is a decision tree 11 meters long and 0.9 meters high; no human can analyze and understand such an enormous tree. Visualization techniques are therefore used to make the decision process of the complex decision tree understandable.
The purpose of the multi-view visualization is to understand the decision tree extracted in the preceding steps. It mainly comprises the generation of four views: the tree flow graph, the neuron semantic view, the neuron relation view, and the decision data flow view. These four views are combined into an interactive visualization system that explains the decision process of the convolutional neural network.
The specific implementation process is as follows:
(1) Tree flow graph
To solve the problem that the complex decision tree cannot be understood by humans, the decision tree is converted into a tree flow graph using visualization techniques.
The process of generating the tree flow graph is as follows (a layout sketch appears after the steps):
1) Based on the property that the amount of data contained in each node (judgment condition) of the decision tree decreases from top to bottom, convert each node into a rectangle whose length corresponds to the node's data volume.
2) Divide each node rectangle by category to obtain "category blocks", the division proportions being the data volumes of the corresponding categories. In the invention, each node rectangle is divided internally into ten "category blocks", corresponding to ten classes. Because the amount of data of each category passing through a node differs, the category blocks within a bar differ in size.
3) Connect the category blocks at the same positions in parent and child nodes with arcs in sequence, thereby converting the decision tree into a tree flow graph.
4) Label the understandable semantics of each judgment condition on the tree flow graph.
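A minimal sketch of the rectangle-and-block layout follows, assuming per-node class counts are available; the scale factor and the example counts are illustrative, and rendering of the rectangles and arcs is omitted.

```python
# A minimal sketch of the tree-flow layout: each node becomes a rectangle
# whose length is proportional to its data volume, subdivided into one
# "category block" per class. Class counts here are illustrative.
def layout_node(class_counts, unit=0.01):
    """class_counts: {class_name: n_samples at this node}.
    Returns (total_length, [(class_name, block_length), ...])."""
    total = sum(class_counts.values())
    length = total * unit
    blocks = [(c, n / total * length) for c, n in sorted(class_counts.items())]
    return length, blocks

root = {"cat": 500, "dog": 300, "ship": 200}
left_child = {"cat": 450, "dog": 60, "ship": 10}  # samples satisfying the condition
length, blocks = layout_node(root)
print(f"root rectangle: {length:.2f} units, blocks: {blocks}")
# Arcs then connect each class block in the parent to the block of the
# same class in each child, in order, forming the tree flow graph.
```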
(2) Neuron semantic view
As described above, each neuron has corresponding semantics. Displaying a group of pictures matching those semantics makes it possible to judge the characteristics of a given judgment condition in the tree flow graph more accurately. This step is simple to implement: the corresponding pictures are displayed directly.
(3) Neuron relation view
The understandable semantics assigned to the neurons have a small probability of being erroneous, and wrongly labeled semantics can be found using the similarity between neurons. In the invention, all neurons are projected onto a two-dimensional plane according to their similarity; the plane exhibits a clustering effect, i.e. neurons with similar semantics cluster together while neurons with different semantics lie far apart.
The projection method is as follows (a code sketch appears after the two steps):
First step: represent each neuron by x, with a subscript identifying the specific neuron; obtain the similarity of two neurons with a kernel function, i.e. equation (2); after computing similarities for all pairs of neurons, a semantic clustering result is obtained.
Second step: keep the semantic clustering result of the high-dimensional space, and redistribute all neurons on the two-dimensional plane using the t-distribution, whose heavy tails alleviate the crowding problem that arises when high-dimensional neighborhoods are embedded in the plane.
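A minimal sketch of this projection follows, using scikit-learn's t-SNE, which combines a Gaussian kernel in the high-dimensional space with a t-distribution in the plane; treating each neuron's judgment-condition vector as its feature is an assumption, and the exact kernel of equation (2) is not reproduced here.

```python
# A minimal sketch of the neuron relation view: project neuron feature
# vectors to 2-D with t-SNE, which measures high-dimensional similarity
# with a Gaussian kernel and re-spreads points in 2-D with a heavy-tailed
# t-distribution. Feature vectors here are random placeholders.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
neuron_features = rng.normal(size=(512, 64))  # one row per neuron

xy = TSNE(n_components=2, perplexity=30, init="pca",
          random_state=0).fit_transform(neuron_features)
print(xy.shape)  # (512, 2): plane coordinates for the relation view
# Neurons whose points fall far from others sharing the same semantic
# label are candidates for wrongly labeled semantics.
```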
(4) Decision data flow graph
The decision tree divides the data passing through a node into two parts according to the judgment condition, and evidence is needed to judge whether the judgment condition is correct. Therefore, when a node button in the tree flow graph is clicked, correctly classified samples and incorrectly classified samples are displayed in the decision data flow graph. Combined with the semantics of the judgment condition, the correctly and incorrectly classified samples show whether the judgment condition is accurate and reasonable; a minimal sketch follows.
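The sketch below assembles this evidence for one node; the node structure, the condition, and the sample data are illustrative assumptions.

```python
# A minimal sketch of the decision data flow view: for one tree node,
# split the samples it receives into those the judgment condition routes
# consistently with their true class and those it routes wrongly. The
# node structure and data are illustrative assumptions.
def node_evidence(samples, condition, left_classes):
    """samples: [(features, true_class)]; condition: features -> bool;
    left_classes: classes expected to satisfy the condition."""
    correct, wrong = [], []
    for feats, cls in samples:
        goes_left = condition(feats)
        expected_left = cls in left_classes
        (correct if goes_left == expected_left else wrong).append((feats, cls))
    return correct, wrong

# Example: the condition "dog-nose activation >= 3.0" should capture dogs.
samples = [({"dog_nose": 4.2}, "dog"), ({"dog_nose": 0.1}, "cat"),
           ({"dog_nose": 3.5}, "cat")]  # the last sample is counter-evidence
correct, wrong = node_evidence(samples, lambda f: f["dog_nose"] >= 3.0, {"dog"})
print(len(correct), "supporting,", len(wrong), "counter-evidence samples")
```

The steps of the invention are summarized as follows: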
step 1: a convolutional neural network model and its training data are prepared.
Step 2: and extracting all judgment conditions of the convolutional neural network model by using a rule extraction technology, and generating semantics for each judgment condition.
And step 3: a decision tree is generated using the extracted decision conditions and semantics, which represents the decision boundaries of the convolutional neural network.
And 4, step 4: the complex decision tree is converted into an interactive tree flow graph.
And 5: to further analyze the tree-flow graph, visualization techniques are used to make other interactive views that aid in understanding.
Step 6: and analyzing the decision process of the convolutional neural network by utilizing the interaction between the tree flow graph and the rest views.
Claims (3)
1. A visualization method for interpreting a convolutional neural network, comprising the following steps:
Step 1: prepare a convolutional neural network model M and its training set S;
Step 2: extract all judgment conditions used by model M in its decision process, and use C = {c1, c2, ..., cn} to represent the set of all judgment conditions of model M;
Step 3: determine the semantics of each neuron from the degree of match between the semantics in the neuron and the human corpus, generate understandable semantics for all judgment conditions in the set C, and denote the result by the set C_semantics, where the matching degree is calculated as follows:
let a certain class of semantics be c, with S_c representing the semantic image of that class input to the convolutional neural network and M_l representing the image pixels activated by the l-th neuron; the matching degree value between the neuron and the semantics is obtained by the following formula:
IoU(l, c) = |M_l ∩ S_c| / |M_l ∪ S_c|
the semantics comprise six categories, namely color, texture, material, scene, part, and object; the matching degree values for the six categories are calculated, and the semantics with the highest matching degree value represents the semantics of the neuron;
Step 4: use C_semantics to generate a decision tree T, and take the decision process of the decision tree T as the decision process of model M;
Step 5: convert the decision tree T into a tree flow graph to reduce the complexity of the decision tree T;
Step 6: produce the neuron semantic view V_semantic; each judgment condition in the tree flow graph V_tree-flow is associated with a group of semantically related pictures in the semantic view V_semantic;
Step 7: produce the neuron relation graph V_relation; project the judgment conditions of all neurons onto a two-dimensional plane according to their similarity in order to find wrongly labeled semantics, where the projection method is:
1) represent two neurons by x and y respectively, measure the similarity of x and y with a kernel function, and obtain a semantic clustering result after computing similarities for all neurons;
2) keep the semantic clustering result of the high-dimensional space, and redistribute all neurons on a two-dimensional plane using the t-distribution in the low-dimensional space;
Step 8: produce the decision data flow graph V_decision, understand the judgment conditions in a data-driven manner, and display evidence that the decisions of model M are reasonable or unreasonable;
Step 9: construct an interactive visualization system with V_tree-flow as the main view and V_semantic, V_relation, and V_decision as auxiliary views;
Step 10: analyze the decision process of model M with the system.
2. The visualization method according to claim 1, wherein in step 2 the method for extracting a judgment condition is as follows:
1) input a neuron and cluster it into a category according to its weight values;
2) set the weight values of the neuron to the mean of the weight values of that category;
3) eliminate categories that do not affect the output;
4) keep the weight values of the neuron fixed at the mean of the corresponding category, and search for the optimal bias using back propagation;
5) form the judgment condition corresponding to the neuron.
3. The visualization method according to claim 1, wherein the transformation in step 5 is as follows:
1) convert each node of the decision tree into a rectangle whose length corresponds to the amount of data at that node;
2) divide each node rectangle by category to obtain internal "category blocks";
3) connect the category blocks at the same positions in parent and child nodes with arcs in sequence, converting the decision tree into a tree flow graph;
4) label the understandable semantics of each judgment condition on the tree flow graph.