CN107423815B - Low-quality classified image data cleaning method based on computer - Google Patents


Info

Publication number
CN107423815B
CN107423815B (application CN201710665692.5A)
Authority
CN
China
Prior art keywords
sum
data
neural network
convolutional neural
image data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710665692.5A
Other languages
Chinese (zh)
Other versions
CN107423815A (en)
Inventor
李玉鑑
余华擎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Technology
Original Assignee
Beijing University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Technology filed Critical Beijing University of Technology
Priority to CN201710665692.5A priority Critical patent/CN107423815B/en
Publication of CN107423815A publication Critical patent/CN107423815A/en
Application granted granted Critical
Publication of CN107423815B publication Critical patent/CN107423815B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a computer-based method for cleaning low-quality classified image data, which can effectively clean low-quality classified image data collected in batches from the Internet, thereby obtaining higher-quality image data for training a classification model with a higher recognition rate. The specific process is as follows: first, a preliminary convolutional neural network is trained directly on the low-quality classified image data; the network then identifies the same data, and images whose pseudo probability of being recognized as their own class is too low, as well as classes containing too few images, are washed out; this process is repeated until the recognition rate for all remaining image classes reaches a preset standard. Comparison experiments show that the method effectively improves the classification quality of the image data and the recognition level of models trained on it.

Description

Low-quality classified image data cleaning method based on computer
Technical Field
The invention relates to a method for cleaning low-quality classified image data based on a convolutional neural network. The method can effectively clean low-quality classified image data collected in batches from the Internet, so that higher-quality image data are obtained for training a classification model with a higher recognition rate. The invention belongs to the technical field of artificial neural networks.
Background
Artificial Neural Networks (ANN) have been a research hotspot in the field of artificial intelligence since the 1980s. An ANN abstracts the neuron network of the human brain from an information-processing perspective, builds a simplified model, and forms different networks according to different connection modes. In engineering and academia it is often referred to directly as a neural network. A neural network is a computational model formed by connecting a large number of nodes (neurons). Each node represents a particular output function, called the activation (excitation) function. Every connection between two nodes carries a weighted value, called a weight, for the signal passing through the connection; this is equivalent to the memory of the artificial neural network. The output of the network depends on the connection mode, the weight values and the activation function. The network itself is usually an approximation to some algorithm or function in nature, and may also be an expression of a logical strategy.
A Convolutional Neural Network (CNN) is a kind of feedforward artificial neural network whose artificial neurons respond to surrounding units within a limited coverage range; it performs excellently on large-scale image processing. Because its unique network structure effectively reduces network complexity, the convolutional neural network has become a research hotspot in speech analysis and image recognition. Its weight-sharing network structure is closer to a biological neural network, which reduces the complexity of the network model and the number of weights. This advantage is more obvious when the network input is a multi-dimensional image: the image can be used directly as network input, avoiding the complex feature extraction and data reconstruction of traditional recognition algorithms. Compared with traditional neural networks, the convolutional neural network has the following characteristics:
1. sparse connection (Sparse Connectivity)
The convolutional network exploits the spatially local structure of the image by enforcing a local connection pattern between adjacent layers: the hidden units of layer m connect only to local regions of the units in layer m-1, and these local regions of layer m-1 are called spatially contiguous receptive fields.
2. Weight sharing (Shared Weights)
In a convolutional neural network, each sparse filter covers the whole visible area through weight sharing; the units that share weights form a feature map, and together with sparse connectivity they form a feature extraction layer, i.e. a convolutional layer.
3. Pooling layer (Pooling Layer)
The pooling layer is another building block of the convolutional neural network. Its function is to reduce the number of parameters and computations in the network by progressively reducing the spatial size of the feature representations. The pooling layer operates independently on each feature map.
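For illustration only (not part of the patent), a minimal numpy sketch of 2x2 max pooling, showing how a pooling layer shrinks the spatial size of a feature map while treating each map independently; the function name and block size are assumptions chosen for the example:

```python
import numpy as np

def max_pool_2x2(feature_map):
    """Down-sample a (H, W) feature map by taking the max over 2x2 blocks."""
    h, w = feature_map.shape
    # Crop to even dimensions, then group pixels into 2x2 blocks.
    blocks = feature_map[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2)
    return blocks.max(axis=(1, 3))

fmap = np.arange(16, dtype=float).reshape(4, 4)
print(max_pool_2x2(fmap))   # 4x4 map reduced to 2x2
```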
In addition, the convolutional neural network also includes elements of a conventional neural network, such as fully connected layers and common nonlinear activation functions (sigmoid, tanh, ReLU, etc.).
Common datasets include PASCAL VOC, MNIST, ImageNet and CIFAR-10. Among them, ImageNet contains about 15M high-resolution labeled images in roughly 22K categories, collected from the web and manually labeled, and is often used to benchmark the classification performance of convolutional neural network models.
The datasets described above are general-purpose and professionally curated, having undergone extensive review and manual labeling. For ordinary application-level data, however, the images one can acquire for a given category often come from Internet crawlers and necessarily contain much noise. How to clean higher-quality data out of such images, and how to evaluate the cleaning, is the focus of the present invention. Once higher-quality data are obtained from the noisy data, they can be used for convolutional neural network training and thus for practical applications.
Disclosure of Invention
The technical scheme adopted by the invention is a cleaning method of low-quality classified image data based on a convolutional neural network, which comprises the following steps:
a) downloading image data with labels from the Internet in batches, and arranging to obtain an image data set DataSet0 with M types in total, wherein the number of images contained in the i-th type is N_i, i = 1, 2, 3, …, M;
b) Training a convolutional neural network CNN0 by using DataSet0, and specifically comprising the following steps:
i. constructing a convolutional neural network model, and fixing the structure of the network model to be unchanged;
ii. randomly taking a certain proportion (e.g., 80% or 90%) of DataSet0 as the training set of the convolutional neural network;
iii. using the part of DataSet0 not used for training as the test set of the convolutional neural network;
iv. training CNN0, and recording the network test recognition rate as Acc0 after iterating for the specified number of times;
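As an illustration of steps ii-iv of step b), a minimal Python sketch of the random split; the data layout (a list of (image path, class index) pairs) and the train_and_test_cnn helper named in the comment are assumptions made for the example, not part of the patented method:

```python
import random

def split_dataset(samples, train_ratio=0.9, seed=0):
    """Randomly split DataSet0 (a list of (image_path, class_index) pairs)
    into a training set and a test set at the given ratio."""
    shuffled = list(samples)
    random.Random(seed).shuffle(shuffled)
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]

# train_set, test_set = split_dataset(dataset0, train_ratio=0.9)
# Acc0 = train_and_test_cnn(train_set, test_set)   # hypothetical helper for step iv
```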
c) in DataSet0, constructing for the i-th class of images a one-dimensional image self-recognition array K_i of length N_i; the specific steps are as follows:
i. using CNN0 to identify the image data of DataSet0, recording the pseudo probability of identifying the j-th image of the i-th class as the k-th class as p_ijk, k = 1, 2, 3, …, M, and sorting these pseudo probabilities from largest to smallest;
ii. if k = i appears among the top L (e.g., L = 10) pseudo probabilities after sorting, recording the self-recognition rate K_ij = p_ijk (with k = i); otherwise recording K_ij = 0;
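A minimal sketch of step c), under the assumption that the pseudo probabilities returned by CNN0 for the N_i images of class i are available as length-M vectors; top_l plays the role of the constant L:

```python
import numpy as np

def self_recognition_array(probs_for_class_i, class_index, top_l=10):
    """Return K_i: K_ij = p_iji if class i is among the top-L predictions, else 0.
    probs_for_class_i: iterable of length-M pseudo-probability vectors p_ij."""
    k_i = []
    for p in probs_for_class_i:
        top_classes = np.argsort(p)[::-1][:top_l]   # indices of the L largest pseudo probabilities
        k_i.append(p[class_index] if class_index in top_classes else 0.0)
    return np.asarray(k_i)
```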
d) Analyzing the self-recognition array K_i, and cleaning the low-quality part of the i-th class image data:
i. calculating the average value of the self-recognition rate of the ith class of images:
\mu_i = \frac{1}{N_i}\sum_{j=1}^{N_i} K_{ij}
ii. calculating the standard deviation of the self-recognition rate of the i-th class of images:
\sigma_i = \sqrt{\frac{1}{N_i}\sum_{j=1}^{N_i}\left(K_{ij}-\mu_i\right)^2}
iii. calculating a boundary value for low self-recognition rate in the i-th class: SepVal = μ_i - ασ_i, wherein α is an integer with 1 ≤ α ≤ 10 and SepVal > 0;
iv. among the i-th class images, if K_ij < SepVal, washing out the j-th image; obtaining the cleaned data set DataSet1;
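A minimal sketch of step d); the behaviour when SepVal <= 0 (no image removed) is an assumption, since the patent only requires SepVal > 0:

```python
import numpy as np

def clean_class(images_i, k_i, alpha=1):
    """images_i: the images of class i; k_i: the self-recognition array K_i (numpy array)."""
    mu, sigma = k_i.mean(), k_i.std()
    sep_val = mu - alpha * sigma          # boundary value SepVal = mu - alpha * sigma
    if sep_val <= 0:                      # SepVal > 0 is required; assumed: skip cleaning otherwise
        return list(images_i)
    return [img for img, k in zip(images_i, k_i) if k >= sep_val]   # wash out K_ij < SepVal
```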
e) performing convolutional neural network training again in the same way using DataSet1 to obtain a network test recognition rate Acc1, recording it, comparing it with Acc0 and confirming whether the cleaning was effective;
f) in DataSet1, counting the number of images in each class again, recorded as N'_i, and analyzing and cleaning the minority classes among the N'_i to reduce the impact of low-quality data classes on the convolutional neural network:
i. calculating the average value of the number of current M types of images:
\mu = \frac{1}{M}\sum_{i=1}^{M} N'_i
ii. calculating the standard deviation of the number of images in the current M classes:
\sigma = \sqrt{\frac{1}{M}\sum_{i=1}^{M}\left(N'_i-\mu\right)^2}
iii. calculating a cut-off value for the "minority" image count: SepVal = μ - ασ, wherein α is an integer with 1 ≤ α ≤ 10 and SepVal > 0;
iv. counting the m classes among the M classes whose image count is below SepVal;
v. recording the sum of the image counts of these m classes as sum, and the sum of the image counts of all M classes as SUM;
vi. if m/M is far greater than sum/SUM, judging the m classes to be minority classes that need to be washed out; if m/M is close to sum/SUM, considering the counts of the m classes normal and performing no cleaning.
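A minimal sketch of step f); the patent does not quantify "far greater than", so the ratio_threshold below is an assumed placeholder:

```python
import numpy as np

def find_minority_classes(class_counts, alpha=1, ratio_threshold=2.0):
    """class_counts: dict {class_id: N'_i} for DataSet1. Returns the classes to wash out."""
    counts = np.array(list(class_counts.values()), dtype=float)
    mu, sigma = counts.mean(), counts.std()
    sep_val = mu - alpha * sigma                     # SepVal = mu - alpha * sigma, required > 0
    minority = [c for c, n in class_counts.items() if n < sep_val]
    m, big_m = len(minority), len(class_counts)      # m candidate classes out of M
    small_sum = float(sum(class_counts[c] for c in minority))
    big_sum = float(counts.sum())
    # "m/M far greater than sum/SUM" -> wash out; interpreted here with an assumed 2x margin.
    if minority and (m / big_m) > ratio_threshold * (small_sum / big_sum):
        return minority
    return []                                        # counts considered normal, no cleaning
```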
g) Performing convolutional neural network training again in the same way using the cleaned data set DataSet2 to obtain a network test recognition rate Acc2, recording it, comparing it with Acc1 and confirming whether the cleaning was effective;
h) repeating steps d) and f) according to the condition of the obtained data set, to obtain the cleaned data categories, m' classes in total, m' < M;
i) evaluating the quality of the sum' images of the m' classes remaining after cleaning:
i. obtaining all data of the m' classes from DataSet0, the total amount being SUM', with SUM' > sum';
ii. performing convolutional neural network training in the same way on the m'-class image data of total amount SUM' and of total amount sum', obtaining network test recognition rates Acc(SUM') and Acc(sum'); if Acc(SUM') < Acc(sum'), the cleaned data are more favourable for the classification training of the convolutional neural network;
iii. randomly or manually extracting some data, denoted test, from the m'-class data of total amount sum' as a common test set, using the SUM' and sum' data with the test part removed as training sets, and performing convolutional neural network training in the same way to obtain network test recognition rates Acc(SUM') and Acc(sum'); if Acc(SUM') < Acc(sum'), then for the same test set the convolutional neural network trained with the cleaned data as the training set has stronger generalization capability and a higher test recognition rate, i.e. the cleaned data are of higher quality.
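A minimal sketch of the common-test-set comparison in step iii of i), assuming the samples are hashable (image path, label) pairs and that a caller-supplied train_and_test_cnn(train_set, test_set) function performs the training described above:

```python
import random

def compare_on_common_test(raw_m_prime, cleaned_m_prime, train_and_test_cnn,
                           test_ratio=0.1, seed=0):
    """raw_m_prime: all DataSet0 samples of the m' classes (SUM' items);
    cleaned_m_prime: the samples that survived cleaning (sum' items)."""
    shuffled = list(cleaned_m_prime)
    random.Random(seed).shuffle(shuffled)
    cut = int(len(shuffled) * test_ratio)
    common_test = shuffled[:cut]                                  # shared test set "test"
    clean_train = shuffled[cut:]                                  # sum' with test removed
    held_out = set(common_test)
    raw_train = [s for s in raw_m_prime if s not in held_out]     # SUM' with test removed
    acc_raw = train_and_test_cnn(raw_train, common_test)
    acc_clean = train_and_test_cnn(clean_train, common_test)
    return acc_raw, acc_clean    # cleaning helped if acc_clean > acc_raw
```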
Drawings
FIG. 1 is a flow chart of the overall concept of the experiment.
FIG. 2 is a graph of initial data set conditions and their convolutional neural network test recognition rates.
FIG. 3 is a schematic diagram of a current dataset self-identifying array.
FIG. 4 is a graph of data set conditions after washing out low quality images and their convolutional neural network test recognition rate results.
FIG. 5 is a graph showing the results of the first minority-class analysis and class cleaning of the data.
FIG. 6 is a graph of the data set condition after the first cleaning of minority classes and its convolutional neural network test recognition rate.
FIG. 7 is a graph of the results of the second minority-class analysis and class cleaning of the data.
FIG. 8 is a graph of data set conditions after a second cleaning of minority classes and their convolutional neural network test recognition results.
Fig. 9 is a graph showing comparison results of data quality evaluation for each of the classes before and after washing.
FIG. 10 is a graph of the results of a convolutional neural network training using the same test set, before and after data cleansing, as the training set.
Detailed Description
The invention is further described with reference to the accompanying drawings and specific embodiments:
1. Download plant and flower image data from the Internet in batches and sort them to obtain M = 775 classes totalling 161015 images, wherein the i-th class contains N_i images (i = 1, 2, 3, …, M);
2. Training a convolutional neural network by using the obtained image data set, and specifically comprising the following steps:
a) acquiring the AlexNet network model file for pycaffe, and acquiring the AlexNet model file pre-trained on ImageNet, wherein the pre-trained model file is used to initialize the convolutional neural network;
b) randomly taking 90% of the data set, 144921 images in total, as the training set and the remaining 10%, 16067 images, as the test set, carrying out convolutional neural network training with caffe; after 10000 iterations the network test recognition rate is 39%;
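For reference only, a hedged pycaffe sketch of the fine-tuning run described in steps a) and b); the file paths and the solver settings (which would have to encode max_iter = 10000, the data layers and the 90%/10% split) are assumptions, not values taken from the patent:

```python
import caffe

caffe.set_mode_gpu()
# solver.prototxt is assumed to define the AlexNet train/test nets and max_iter = 10000
solver = caffe.SGDSolver('models/bvlc_alexnet/solver.prototxt')
# initialize from the AlexNet model pre-trained on ImageNet
solver.net.copy_from('models/bvlc_alexnet/bvlc_alexnet.caffemodel')
solver.solve()   # trains and periodically tests according to the solver settings
```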
c) constructing for the i-th class of images in the data set a one-dimensional image self-recognition array K_i of length N_i (i = 1, 2, 3, …, M); the specific steps are as follows:
i. identifying the images of the initial data set one by one with the trained convolutional neural network;
ii. processing the pseudo-probability recognition result of the j-th image of the i-th class as follows:
1) if the i-th class does not appear among the first 10 pseudo-probability recognition results returned by the convolutional neural network, recording K_ij = 0;
2) if the i-th class appears among the first 10 returned pseudo-probability recognition results with probability p, recording K_ij = p;
d) Analyzing the self-recognition array, and cleaning the low-quality images of the i-th class data:
i. calculating the average value of the self-recognition rate of the ith class of images:
\mu_i = \frac{1}{N_i}\sum_{j=1}^{N_i} K_{ij}
ii. calculating the standard deviation of the self-recognition rate of the i-th class of images:
\sigma_i = \sqrt{\frac{1}{N_i}\sum_{j=1}^{N_i}\left(K_{ij}-\mu_i\right)^2}
iii. calculating the cut-off value for low self-recognition rate: SepVal = μ_i - ασ_i (α = 1 in this experiment);
iv. deleting, as low-quality images, the images of the i-th class whose self-recognition rate is below SepVal.
e) After cleaning, 79198 images remain; about 90% of them, 71298 images, are taken as the training set and the remaining 7900 as the test set; training the convolutional neural network again by the same method gives a test recognition rate of 59.7%, an improvement over the initial 39%;
f) re-counting the number of images in each of the 775 classes after cleaning, recorded as N'_i, and analyzing and cleaning the minority classes among the N'_i to reduce the impact of low-quality data classes on the classification network:
i. calculating the average number of images over the current M = 775 classes:
\mu = \frac{1}{M}\sum_{i=1}^{M} N'_i
ii. calculating the standard deviation of the number of images over the current M = 775 classes:
\sigma = \sqrt{\frac{1}{M}\sum_{i=1}^{M}\left(N'_i-\mu\right)^2}
iii. calculating the cut-off value for the "minority" image count: SepVal = μ - ασ (α = 1 in this experiment);
iv. after the low-quality noisy images have been cleaned, counting 178 classes with image counts below SepVal, containing 1815 images in total; since 178/775 is much larger than 1815/79198, these classes are judged to be low-quality minority classes and are washed out;
g) taking 70000 of the cleaned images as the training set and the remaining 7383 as the test set, and training the convolutional neural network again by the same method, giving a test recognition rate of 60.0%, slightly better than the previous network;
h) according to the condition of the resulting data set, cleaning minority classes once more, leaving 468 classes of cleaned data;
i) evaluating the quality of the 70755 images of the 468 classes remaining after cleaning, as follows:
i. obtaining all data of these 468 classes from the original data, totalling 111290 images;
ii. training the convolutional neural network by the same method on the 468-class data of total amount 111290 and of total amount 70755 respectively, obtaining network test recognition rates of 60.8% and 62.6%; this indicates that the cleaned data are more favourable for the classification training of the convolutional neural network;
iii. randomly extracting 10% of the 468-class data of total amount 70755 as a common test set, and training the convolutional neural network using the 111290-image and 70755-image data with this test part removed as training sets, obtaining network test recognition rates of 59.6% and 62.6%, respectively. This shows that, for the same test set, the network trained on the cleaned data has stronger generalization capability and higher accuracy, i.e. the cleaned data perform better.
The experimental results show that:
1. the data cleaning effect is real and effective, and the quality of the data is improved compared with that of original data according to the same evaluation method.
2. The next cleaning strategy can be determined according to the current data set condition, and the method is flexible.
3. Under the condition that the test sets are the same, the recognition rate of the convolutional neural network obtained by the cleaned data is higher, and the quality of the cleaned data is improved.
The above examples are only used to illustrate the present invention, and do not limit the technical solutions described in the present invention. Therefore, all technical solutions and modifications that do not depart from the spirit and scope of the present invention should be construed as being included in the scope of the appended claims.

Claims (1)

1. A low-quality classified image data cleaning method based on a computer is characterized in that the method comprises the following steps: a) downloading image data with labels from the Internet in batches, and sorting to obtain an image data set DataSet0 with M types in total, wherein the number of images contained in the i-th type is N_i, i = 1, 2, 3, …, M;
b) Training a convolutional neural network CNN0 by using DataSet0, and specifically comprising the following steps:
i. constructing a convolutional neural network model, and fixing the structure of the network model to be unchanged;
ii. randomly taking a certain proportion of DataSet0 as the training set of the convolutional neural network;
iii. using the part of DataSet0 not used for training as the test set of the convolutional neural network;
iv. training CNN0, and recording the network test recognition rate as Acc0 after iterating for the specified number of times;
c) in DataSet0, constructing for the i-th class of images a one-dimensional image self-recognition array K_i of length N_i; the specific steps are as follows:
i. using CNN0 to identify the image data of DataSet0, recording the pseudo probability of identifying the j-th image of the i-th class as the k-th class as p_ijk, k = 1, 2, 3, …, M, and sorting these pseudo probabilities from largest to smallest;
ii. if k = i appears among the top L pseudo probabilities after sorting, recording the self-recognition rate K_ij = p_ijk (with k = i); otherwise recording K_ij = 0;
d) analyzing the self-recognition array K_i, and cleaning the low-quality part of the i-th class image data:
i. calculating the average value of the self-recognition rate of the ith class of images:
\mu_i = \frac{1}{N_i}\sum_{j=1}^{N_i} K_{ij}
ii. calculating the standard deviation of the self-recognition rate of the i-th class of images:
\sigma_i = \sqrt{\frac{1}{N_i}\sum_{j=1}^{N_i}\left(K_{ij}-\mu_i\right)^2}
iii. calculating a boundary value for low self-recognition rate in the i-th class: SepVal = μ_i - ασ_i, wherein α is an integer with 1 ≤ α ≤ 10 and SepVal > 0;
iv. among the i-th class images, if K_ij < SepVal, washing out the j-th image; obtaining the cleaned data set DataSet1;
e) performing convolutional neural network training again in the same way using DataSet1 to obtain a network test recognition rate Acc1, recording it, comparing it with Acc0 and confirming whether the cleaning was effective;
f) in DataSet1, counting the number of images in each class again, recorded as N'_i, and analyzing and cleaning the minority classes among the N'_i to reduce the impact of low-quality data classes on the convolutional neural network:
i. calculating the average value of the number of current M types of images:
\mu = \frac{1}{M}\sum_{i=1}^{M} N'_i
ii. calculating the standard deviation of the number of images in the current M classes:
\sigma = \sqrt{\frac{1}{M}\sum_{i=1}^{M}\left(N'_i-\mu\right)^2}
iii. calculating a cut-off value for the "minority" image count: SepVal = μ - ασ, wherein α is an integer with 1 ≤ α ≤ 10 and SepVal > 0;
iv. counting the m classes among the M classes whose image count is below SepVal;
v. recording the sum of the image counts of these m classes as sum, and the sum of the image counts of all M classes as SUM;
vi. if m/M is far greater than sum/SUM, judging the m classes to be minority classes that need to be washed out; if m/M is close to sum/SUM, considering the counts of the m classes normal and performing no cleaning;
g) performing convolutional neural network training again in the same way using the cleaned data set DataSet2 to obtain a network test recognition rate Acc2, recording it, comparing it with Acc1 and confirming whether the cleaning was effective;
h) repeating steps d) and f) according to the condition of the obtained data set, to obtain the cleaned data categories, m' classes in total, m' < M; i) evaluating the quality of the sum' images of the m' classes remaining after cleaning:
i. obtaining all data of the m' classes from DataSet0, the total amount being SUM', with SUM' > sum';
ii. performing convolutional neural network training in the same way on the m'-class image data of total amount SUM' and of total amount sum', obtaining network test recognition rates Acc(SUM') and Acc(sum'); if Acc(SUM') < Acc(sum'), the cleaned data are more favourable for the classification training of the convolutional neural network;
iii. randomly or manually extracting some data, denoted test, from the m'-class data of total amount sum' as a common test set, using the SUM' and sum' data with the test part removed as training sets, and performing convolutional neural network training in the same way to obtain network test recognition rates Acc(SUM') and Acc(sum'); if Acc(SUM') < Acc(sum'), then for the same test set the convolutional neural network trained with the cleaned data as the training set has stronger generalization capability and a higher test recognition rate, i.e. the cleaned data are of higher quality.
CN201710665692.5A 2017-08-07 2017-08-07 Low-quality classified image data cleaning method based on computer Active CN107423815B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710665692.5A CN107423815B (en) 2017-08-07 2017-08-07 Low-quality classified image data cleaning method based on computer

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710665692.5A CN107423815B (en) 2017-08-07 2017-08-07 Low-quality classified image data cleaning method based on computer

Publications (2)

Publication Number Publication Date
CN107423815A CN107423815A (en) 2017-12-01
CN107423815B true CN107423815B (en) 2020-07-31

Family

ID=60436570

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710665692.5A Active CN107423815B (en) 2017-08-07 2017-08-07 Low-quality classified image data cleaning method based on computer

Country Status (1)

Country Link
CN (1) CN107423815B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108055529A (en) * 2017-12-25 2018-05-18 国家电网公司 Electric power unmanned plane and robot graphics' data normalization artificial intelligence analysis's system
CN108052925B (en) * 2017-12-28 2021-08-03 江西高创保安服务技术有限公司 Intelligent management method for community personnel files
CN108334895B (en) * 2017-12-29 2022-04-26 腾讯科技(深圳)有限公司 Target data classification method and device, storage medium and electronic device
CN108830294A (en) * 2018-05-09 2018-11-16 四川斐讯信息技术有限公司 A kind of augmentation method of image data
CN108596338A (en) * 2018-05-09 2018-09-28 四川斐讯信息技术有限公司 A kind of acquisition methods and its system of neural metwork training collection
CN108875821A (en) 2018-06-08 2018-11-23 Oppo广东移动通信有限公司 The training method and device of disaggregated model, mobile terminal, readable storage medium storing program for executing
CN109190666B (en) * 2018-07-30 2022-04-29 北京信息科技大学 Flower image classification method based on improved deep neural network
RU2732895C1 (en) * 2019-05-27 2020-09-24 Общество с ограниченной ответственностью "ПЛАТФОРМА ТРЕТЬЕ МНЕНИЕ" Method for isolating and classifying blood cell types using deep convolution neural networks
CN113033694B (en) * 2021-04-09 2023-04-07 深圳亿嘉和科技研发有限公司 Data cleaning method based on deep learning

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106649610A (en) * 2016-11-29 2017-05-10 北京智能管家科技有限公司 Image labeling method and apparatus

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106649610A (en) * 2016-11-29 2017-05-10 北京智能管家科技有限公司 Image labeling method and apparatus

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Learning from massive noisy labeled data for image classification; Tong Xiao; IEEE; 2015-06-12; entire document *
Research and development of an online disease analysis system based on medical data mining; Xu Jie (许杰); China Master's Theses Full-text Database, Information Science and Technology; 2014-05-15; Chapter 4 *
Application of data cleaning technology in text mining; Li Ming (李明); China Master's Theses Full-text Database, Information Science and Technology; 2008-11-15; entire document *
Research progress on technologies related to massive data mining; Mi Yunlong (米允龙); Journal of Frontiers of Computer Science and Technology; 2015-06-30; entire document *

Also Published As

Publication number Publication date
CN107423815A (en) 2017-12-01

Similar Documents

Publication Publication Date Title
CN107423815B (en) Low-quality classified image data cleaning method based on computer
CN107526785B (en) Text classification method and device
CN109345508B (en) Bone age evaluation method based on two-stage neural network
CN108334936B (en) Fault prediction method based on migration convolutional neural network
CN105701480B (en) A kind of Video Semantic Analysis method
CN109993100B (en) Method for realizing facial expression recognition based on deep feature clustering
CN109389171B (en) Medical image classification method based on multi-granularity convolution noise reduction automatic encoder technology
CN108171318B (en) Convolution neural network integration method based on simulated annealing-Gaussian function
CN111563533A (en) Test subject classification method based on graph convolution neural network fusion of multiple human brain maps
CN111582396B (en) Fault diagnosis method based on improved convolutional neural network
CN113673482B (en) Cell antinuclear antibody fluorescence recognition method and system based on dynamic label distribution
CN111009324A (en) Mild cognitive impairment auxiliary diagnosis system and method based on brain network multi-feature analysis
CN111160392A (en) Hyperspectral classification method based on wavelet width learning system
CN114743037A (en) Deep medical image clustering method based on multi-scale structure learning
CN114391826A (en) Human characterization prediction method and device based on edge-driven graph neural network
CN115909011A (en) Astronomical image automatic classification method based on improved SE-inclusion-v 3 network model
Akut et al. NeuroEvolution: Using genetic algorithm for optimal design of deep learning models
CN114037014A (en) Reference network clustering method based on graph self-encoder
Chen et al. Dropcluster: A structured dropout for convolutional networks
Dhawan et al. Deep Learning Based Sugarcane Downy Mildew Disease Detection Using CNN-LSTM Ensemble Model for Severity Level Classification
CN113887559A (en) Brain-computer information fusion classification method and system for brain off-loop application
CN117649657A (en) Bone marrow cell detection system based on improved Mask R-CNN
CN117058079A (en) Thyroid imaging image automatic diagnosis method based on improved ResNet model
CN116912576A (en) Self-adaptive graph convolution brain disease classification method based on brain network higher-order structure
CN115545086B (en) Migratable feature automatic selection acoustic diagnosis method and system

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant