CN109871855A - An adaptive deep multiple kernel learning method - Google Patents

An adaptive deep multiple kernel learning method

Info

Publication number
CN109871855A
CN109871855A (application CN201910139959.6A)
Authority
CN
China
Prior art keywords
kernel function
kernel
function
base
chaos
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910139959.6A
Other languages
Chinese (zh)
Other versions
CN109871855B (en)
Inventor
Ren Shengbing
Shen Wangbo
Li You
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Central South University
Original Assignee
Central South University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Central South University filed Critical Central South University
Priority to CN201910139959.6A priority Critical patent/CN109871855B/en
Publication of CN109871855A publication Critical patent/CN109871855A/en
Application granted granted Critical
Publication of CN109871855B publication Critical patent/CN109871855B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

Deep multiple kernel learning (DMKL) methods have drawn wide attention because they are more efficient and effective than shallow multiple kernel learning. However, existing DMKL architectures generalize poorly, and it is difficult to find suitable parameters for them by training on sample data. The invention proposes a self-adaptive deep multiple kernel learning structure (SA-DMKL) that can grow adaptively, adapt itself, and remain elastic. Using Rademacher chaos complexity, the base kernel functions of each layer can be screened according to the data set and data space, changing the number of base kernels in each layer. The method of the invention is validated on the UCI iris data set, a breast cancer data set, and the Caltech-256 image data set, and shows higher effectiveness than other methods.

Description

An adaptive deep multiple kernel learning method
Technical field
The invention belongs to the field of machine learning, and in particular relates to an adaptive deep multiple kernel learning method.
Background technique
Kernel functions are used to raise the dimensionality of data; different kernels can map data into different high-dimensional or even infinite-dimensional spaces. Attention to kernel methods has benefited from the development and application of support vector machine (SVM) theory: kernel functions allow a linear SVM to be generalized easily to a nonlinear SVM. However, these are all single-kernel methods in a single feature space. Because different kernel functions have different characteristics, because a kernel's performance varies greatly across applications, and because there is no complete theoretical basis for constructing and selecting kernel functions, much research has turned to methods that combine kernel functions, that is, multiple kernel learning. Multi-kernel models with few layers, called shallow multiple kernel models, fail to improve precision in many cases, so a sufficiently deep multi-kernel structure with high precision is highly desirable.
With the spread of deep learning, a new structure has been borrowed into multiple kernel learning: deep multiple kernel learning. Many model structures have been proposed. However, these models have neither very strong generalization ability nor especially high recognition precision. Moreover, their structures are fixed and cannot grow on their own, so it is difficult for them to reach the best achievable precision on different data sets. To solve these problems, the invention proposes an adaptive deep multiple kernel learning method.
Summary of the invention
The present invention proposes an adaptive deep multiple kernel learning method (SA-DMKL). The structure can grow automatically and adjust the parameters of each kernel. As in traditional deep multiple kernel learning, each layer of SA-DMKL contains several base kernels, whose parameters are tuned by grid search. Architecturally, SA-DMKL can automatically increase the number of layers according to the recognition accuracy of the entire model. Our model is therefore not very sensitive to the parameter settings of the individual candidate kernel functions and does not need many hyperparameters; in this way it can fit different data sets and its generalization ability improves. Rademacher chaos complexity measures the ability of a mapping function to map one sample space into another. SA-DMKL computes the Rademacher chaos complexity of each layer's base kernel functions on the input sample set and discards any kernel whose value exceeds a threshold. In this way, each layer's kernel combination reaches the smallest complexity, improving the generalization ability of the model.
The method comprises the following steps:
Step 1: data preprocessing
In image processing, the input image sometimes needs to be preprocessed before feature extraction, segmentation, and matching are carried out. The main purposes of image preprocessing are to eliminate irrelevant information in the image, recover useful real information, enhance the detectability of the relevant information, and simplify the data as much as possible, thereby improving the reliability of feature extraction, image segmentation, matching, and recognition. Common preprocessing methods include digitization, geometric transformation, normalization, smoothing, restoration, and enhancement. In the experiments of the invention, if the input data are images, image preprocessing is performed with HOG (Histogram of Oriented Gradients), resize, or the fast Fourier transform (FFT).
The histogram of oriented gradients (HOG) is a feature descriptor used for object detection in computer vision and image processing; it builds features by computing histograms of gradient orientations over local image regions. The resize method subsamples certain rows and columns of the image. The FFT is a fast algorithm for the discrete Fourier transform and transforms a signal into the frequency domain; features that are hard to find in the time domain may become readily apparent after the transform. Images processed by these methods are shown in Fig. 1.
Images can also be fed into the model directly, without feature engineering.
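As a rough illustration, the resize (row/column subsampling) and FFT preprocessing described above can be sketched with NumPy. The function names and the subsampling step are illustrative choices, not from the patent; HOG is omitted since it needs a dedicated implementation.

```python
import numpy as np

def preprocess_resize(img, step=2):
    # "resize" as described in the text: keep only certain rows and columns.
    return img[::step, ::step].ravel()

def preprocess_fft(img):
    # Transform the image to the frequency domain; use the magnitude
    # spectrum as the feature vector.
    return np.abs(np.fft.fft2(img)).ravel()

img = np.arange(16.0).reshape(4, 4)
print(preprocess_resize(img).shape)  # (4,)
print(preprocess_fft(img).shape)     # (16,)
```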
Step 2: determine the base kernel function types
In a multiple kernel learning model, kernel functions must be selected for the same sample space according to some rule. There are many kernel types; common ones include the linear kernel, polynomial kernel, radial basis function kernel, hyperbolic tangent kernel, inverse trigonometric kernel, power-exponential kernel, Laplacian kernel, ANOVA kernel, rational quadratic kernel, multiquadric kernel, inverse multiquadric kernel, sigmoid kernel, and so on.
Considering that many existing kernel types share the same basic structure, for example the polynomial kernel k(x, y) = (a x^T y + c)^d is a generalization of the linear kernel k(x, y) = x^T y + c, and the Laplacian kernel is very similar to the power-exponential kernel, the invention combines kernel functions whose basic structures are completely distinct: the Laplacian kernel, the hyperbolic tangent kernel, the radial basis function kernel, the inverse trigonometric kernel, and the polynomial kernel. By adjusting their parameters, these kernels can cover most commonly used kernel functions.
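A minimal sketch of the five structurally distinct base kernels named above, in Python. The parameter names (gamma, a, c, d) and the particular form chosen for the inverse trigonometric kernel are illustrative assumptions, since the patent does not fix them here.

```python
import numpy as np

def laplacian(x, y, gamma=1.0):
    # Laplacian kernel: exp(-gamma * ||x - y||)
    return np.exp(-gamma * np.linalg.norm(x - y))

def rbf(x, y, gamma=1.0):
    # Radial basis function kernel: exp(-gamma * ||x - y||^2)
    return np.exp(-gamma * np.linalg.norm(x - y) ** 2)

def tanh_kernel(x, y, a=0.01, c=0.0):
    # Hyperbolic tangent (sigmoid-type) kernel.
    return np.tanh(a * np.dot(x, y) + c)

def poly(x, y, a=1.0, c=1.0, d=2):
    # Polynomial kernel: (a * <x, y> + c)^d
    return (a * np.dot(x, y) + c) ** d

def arctan_kernel(x, y, gamma=1.0):
    # One possible inverse trigonometric kernel (an assumption):
    # arctangent of a scaled inner product.
    return np.arctan(gamma * np.dot(x, y))

x = np.array([1.0, 0.0]); y = np.array([0.0, 1.0])
print(laplacian(x, x))  # 1.0
print(poly(x, y))       # 1.0
```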
Step 3: determine the kernel parameters layer by layer
As with the choice of kernel type, there is no universal method to guide the choice of kernel parameters. Available methods include grid search with cross-validation over all parameter combinations, genetic algorithms, particle swarm optimization, simulated annealing, fruit-fly optimization, gravitational search, and so on. The invention computes the kernel parameters layer by layer using grid search. Grid search is an exhaustive search over specified parameter values: the candidate values of every parameter are permuted and combined, and all possible combinations are enumerated to form a "grid". Each combination is then used to train an SVM and is evaluated with cross-validation. After all parameter combinations have been tried, the fitting function returns a suitable classifier, automatically tuned to the best parameter combination.
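The exhaustive grid search described above can be sketched as follows; the score function here is a stand-in for the cross-validated SVM accuracy the patent uses, and the grid values are illustrative.

```python
from itertools import product

def grid_search(score, grid):
    # Enumerate every combination of the candidate parameter values
    # ("the grid") and keep the combination with the best score.
    best_params, best_score = None, float("-inf")
    for combo in product(*grid.values()):
        params = dict(zip(grid.keys(), combo))
        s = score(params)
        if s > best_score:
            best_params, best_score = params, s
    return best_params, best_score

grid = {"gamma": [0.1, 1.0, 10.0], "C": [1, 10]}
best, _ = grid_search(lambda p: -abs(p["gamma"] - 1.0) - abs(p["C"] - 10), grid)
print(best)  # {'gamma': 1.0, 'C': 10}
```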
Before screening, every layer of the model consists of the same base kernel functions, but with different parameters.
Step 4: screen each layer's kernels with Rademacher chaos complexity
Rademacher chaos complexity bounds can be used to optimize empirical risk minimization and single-kernel support vector machines. The larger the Rademacher chaos complexity, the poorer the generalization ability of the kernel, so kernels with small Rademacher chaos complexity should be selected whenever possible. After computing the Rademacher chaos complexity of each kernel in every layer, we discard any kernel whose value exceeds the threshold we set, and its processing result no longer enters the next layer.
Given a training set {(x_i, y_i) | i = 1, …, n}, where x_i ∈ R^d is an input vector and y_i ∈ {−1, 1} is the class label of x_i, each input x_i is mapped into a high-dimensional Hilbert space by the kernel K(x_i, x_j) = φ(x_i)′ · φ(x_j). The support vector machine then learns the hyperplane with maximum class margin and minimum training error on the training set. This hyperplane is defined as:
f(x) = Σ_{i=1}^{n} α_i y_i K(x_i, x) + b
where α_i is a parameter related to the training set and b is a bias term.
For each layer's base kernels, the Rademacher chaos complexity of the corresponding sample space is computed; any base kernel whose value exceeds 10000 is discarded, and its result no longer enters the next layer. The Rademacher chaos complexity is computed as follows:
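The screening rule of this step, dropping any base kernel whose Rademacher chaos complexity exceeds the 10000 threshold, can be sketched as follows; the complexity values here are illustrative stand-ins for the bound derived below.

```python
def screen_kernels(kernels, complexity, threshold=10000.0):
    # Keep only base kernels whose Rademacher chaos complexity is at
    # or below the threshold; outputs of discarded kernels do not
    # enter the next layer.
    return [k for k in kernels if complexity[k] <= threshold]

# Illustrative complexity values (not measured data).
complexity = {"laplacian": 120.0, "rbf": 95.0, "tanh": 25000.0}
print(screen_kernels(list(complexity), complexity))  # ['laplacian', 'rbf']
```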
The generalization error ε_φ(f) of the SVM:
ε_φ(f) = ∫∫_{X×Y} φ(y f(x)) dp(x, y)
where x and y are variables denoting a sample and its label; all samples and their labels form the input spaces X and Y; p(x, y) is the distribution of the samples; φ is the loss function; and f is the mapping function of x.
The empirical error of the SVM:
ε̂_φ(f) = (1/n) Σ_{j=1}^{n} φ(y_j f(x_j))
where n is the number of samples, N = {1, 2, 3, …, n}, and j ∈ N.
Let φ denote the normalized classification loss. For all δ ∈ (0, 1), with probability at least 1 − δ, the following inequality holds:
The right-hand side of the inequality is the Rademacher chaos complexity used to score the complexity of each kernel. Here λ is the regularization coefficient, which takes one value when the kernel is a Gaussian kernel and a different value when it is a non-Gaussian kernel; L is a local Lipschitz constant; sup denotes the supremum; k denotes a single kernel function; K is the kernel set composed of all the kernel functions; K(x, x) denotes the inner product of sample x under kernel k; e is the base of the natural logarithm; and m denotes the number of base kernels.
λ is also the parameter of the minimization problem of the common two-layer kernel learning structure, and ℓ(t) is a continuous Lipschitz function defined on the positive reals. The outputs of the kernels that survive a layer's screening form a feature vector that serves as the next layer's input. Each kernel applies a spatial transformation to its input feature vector, each SVM outputs one value for the input feature vector, and these values together form the feature vector used as the next layer's input. No activation function is applied after each layer's kernels; an activation function is applied only after the final output kernel.
Step 5: stop growing automatically
During training, if the Rademacher chaos complexity of a kernel stays at a very high value as the number of layers increases, that kernel cannot play a role in the next layer. Training finishes when the highest accuracy stays unchanged over 4 layers, or when the number of iterations during model training reaches 30.
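The stopping rule, best accuracy unchanged over 4 layers or 30 training iterations reached, might be implemented as in the sketch below; the patent gives no code, so the function shape is an assumption.

```python
def should_stop(layer_best_acc, patience=4, max_iters=30, iters=0):
    # Stop if the iteration budget is exhausted, or if the running-best
    # accuracy has not improved over the last `patience` layers.
    if iters >= max_iters:
        return True
    if len(layer_best_acc) <= patience:
        return False
    best_before = max(layer_best_acc[:-patience])
    return max(layer_best_acc[-patience:]) <= best_before

print(should_stop([0.80, 0.90, 0.90, 0.90, 0.90, 0.90]))  # True
print(should_stop([0.80, 0.90]))                          # False
```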
Compared with the prior art, the present invention has the following advantages and beneficial effects:
The invention proposes an adaptive deep multiple kernel learning method, SA-DMKL. Each layer of the SA-DMKL model consists of the same base kernel functions, but with different parameters. To overcome the poor generalization ability and low recognition accuracy of common DMKL, we score the kernels of each layer with Rademacher chaos complexity and reject kernels whose Rademacher chaos complexity is high, thereby raising accuracy. The examples show that a kernel's Rademacher chaos complexity is related to model precision: the Rademacher chaos complexity of some kernels stays at a high level, their recognition accuracy may be very low, and the input data become harder to distinguish after passing through them. After these "bad" kernels are discarded, the test accuracy improves. Finally, to approach the upper limit of the model's generalization ability, we let the model grow on its own until its optimum precision stays unchanged over 4 consecutive layers, or the maximum of 30 iterations is exceeded, at which point it stops growing. Tests on 7 UCI data sets show that SA-DMKL performs best on five of them; for example, on the Sonar data set it reaches 89.42% accuracy, nearly 5 percentage points higher than the second-ranked SM1MKL method. On the other two data sets its performance is not the best, but it still reaches accuracies of 91.92% and 82.03%, respectively.
Description of the drawings
Fig. 1 is the overall flow chart of the adaptive deep multiple kernel learning method designed by the invention, where m denotes the number of candidate kernel functions, Km = (a, b, c, …) denotes the parameters of each kernel function, iter denotes the maximum number of iterations, Em denotes the minimum generalization error, l denotes the number of layers for which unchanged model accuracy is tolerated, and RT denotes the Rademacher chaos complexity threshold.
Fig. 2 is the structure chart of the adaptive deep multiple kernel learning model designed by the invention.
Specific embodiments
To describe the proposed adaptive deep multiple kernel learning method in more detail, the invention is further explained below with reference to the drawings and a specific implementation. The overall model structure is shown in Fig. 2.
Step 1: data preprocessing
This example is tested on 7 UCI data sets and the Caltech-256 image data set. In the UCI breast cancer data set, the benign class is labeled +1 and the malignant class is labeled −1. 40% of the images are used to train the classifier and 60% to test it, with the images for the training and test phases selected at random. To compare the performance of the proposed SA-DMKL model with other deep kernel models, we report the proportion of correctly classified samples out of the total, the Rademacher chaos complexity of each layer's kernel functions, and the number of support vectors. Accuracy is computed as follows:
Accuracy = (TP + TN) / N
where TP is the number of correctly classified positive samples, TN is the number of correctly classified negative samples, and N is the total number of samples in the test set.
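The accuracy computation above is simply:

```python
def accuracy(tp, tn, n):
    # Accuracy = (TP + TN) / N: correctly classified positives plus
    # correctly classified negatives over all test samples.
    return (tp + tn) / n

print(accuracy(45, 40, 100))  # 0.85
```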
To assess the performance of the proposed SA-DMKL model on classification tasks, we conducted a set of experiments, testing different methods on 7 real-world UCI data sets: Iris, Liver, Breast, Sonar, Australian, German, and Monk. Table 1 describes the data sets used in detail.
Table 1: the 7 UCI data sets used in the experiments
Caltech-256 is an image recognition data set containing 30,608 images in 256 categories, with each category containing between 80 and 827 images. In the experiments of the invention, we used 4 categories of the Caltech-256 image data set: "baseball bat", "ak47", "american flag", and "blimp". Before training the model we perform some feature extraction, using three different methods: HOG, FFT, and simple resize.
Step 2: determine the kernel function types
The example combines kernel functions whose basic structures are completely distinct: the Laplacian kernel, the hyperbolic tangent kernel, the radial basis function kernel, the inverse trigonometric kernel, and the polynomial kernel. By adjusting their parameters, these kernels can cover most commonly used kernel functions.
Step 3: determine the base kernel parameters layer by layer
The base kernel parameters are computed layer by layer using grid search.
Step 4: screen each layer's kernels with Rademacher chaos complexity
We show the details of SA-DMKL on the breast cancer and iris data sets in Tables 2 and 3 (where A denotes the average test precision, S the number of support vectors, and R the Rademacher chaos complexity) to analyze the effectiveness and advantages of SA-DMKL.
Table 2: SA-DMKL on the breast cancer data set
Table 3: SA-DMKL on the iris data set
Table 2 shows that when the model reaches the third layer, the accuracy begins to decline, then recovers afterward. The accuracy of SA-DMKL can reach a peak at a certain layer. Table 2 explains this easily: because the Rademacher chaos complexity of certain kernels is very high, the model's recognition ability is poor, but once these kernels are removed from those layers, the model's recognition ability recovers.
Tables 2 and 3 both show the number of support vectors under SA-DMKL. After the kernels with high Rademacher chaos complexity are removed, the support vector counts of the remaining kernels drop to a stable state and keep only small fluctuations. We can safely draw conclusion (a): in our model, the support vector count of some kernels changes with the number of layers, first changing significantly and finally fluctuating slightly. This indicates that the samples become increasingly separable as the sample space is transformed and screened.
Combining Tables 2 and 3, the Rademacher chaos complexity of a kernel is related to the model's precision. The Rademacher chaos complexity of some kernels stays at a high level; their recognition accuracy may be very low, and the input data become harder to distinguish after passing through them. After these "bad" kernels are discarded, the test accuracy improves. In addition, we can see that the Rademacher chaos complexity of some kernels is higher than that of others and does not change as the number of layers changes; these are "zombie kernels": no matter how their parameters or the dimensionality of the input sample space are adjusted, their ability to classify the sample set does not change.
Step 5: stop growing automatically
If during training the model's highest accuracy stays unchanged over 4 consecutive layers, or the total number of training iterations reaches 30 or more, training stops automatically.
To further compare the performance of SA-DMKL with other models, we evaluated the following algorithms: SKSVM, L2MKL, SM1MKL, DMKL, and MLMKL. Table 4 gives the detailed classification results of the different algorithms. Compared with other deep kernel learning models, SA-DMKL achieves better recognition precision on most data sets. The first layer of the SA-DMKL model does not show better accuracy, which may be related to our random selection of training data. But as the number of layers increases, the model's accuracy rises gradually; in particular, the hyperbolic tangent and radial basis function kernels outperform conventional deep kernel learning models, reaching their optimum at some layer.
Table 4: classification results of different algorithms on the UCI data sets
To investigate whether image preprocessing affects the performance of SA-DMKL, we tested 3 feature extraction methods (HOG, FFT, and simple resize) on the Caltech image data set. Tables 5, 6, and 7 show that the different feature extraction methods do not affect recognition ability: the recognition ability of the three models is essentially the same, with the FFT-based model performing slightly better than the others, reaching 0.83. However, the same tables also show that although different feature extraction methods do not affect recognition ability, they do affect the number of layers at which the model reaches its maximum accuracy. We can therefore safely draw conclusion (c): different feature extraction methods do not affect recognition ability, but they do affect the layer at which the model reaches its maximum accuracy.
Table 5: SA-DMKL on the Caltech-256 data set preprocessed with HOG
Table 6: SA-DMKL on the Caltech-256 data set preprocessed with FFT
Table 7: SA-DMKL on the Caltech-256 data set preprocessed with resize

Claims (3)

1. An adaptive deep multiple kernel learning method, characterized by comprising the following steps:
Step 1: data preprocessing
If the input data are images, preprocess the images with a feature engineering algorithm, or feed the original images into the model directly;
Step 2: determine the kernel function types
Different kernel functions have different classification abilities on different sample data sets. Considering that many existing kernel types share the same basic structure, kernel functions whose basic structures are completely distinct are selected and combined as the base kernels; by adjusting their parameters, these base kernels can cover the common kernel functions, and every layer uses the same base kernel types and the same number of base kernels;
Step 3: determine the base kernel parameters layer by layer
The base kernel parameters are computed layer by layer using grid search;
Step 4: screen each layer's kernels with Rademacher chaos complexity
Rademacher chaos complexity optimizes a single-kernel SVM through empirical risk minimization; its value depends on the kernel function and the input data. The larger the Rademacher chaos complexity, the poorer the generalization ability of the kernel, so kernels with small Rademacher chaos complexity are selected. After the Rademacher chaos complexity of each base kernel in a layer has been computed, any base kernel whose value exceeds 10000 is discarded, and its output value no longer enters the next layer. The outputs of the kernels that survive the screening form a feature vector that serves as the next layer's input; the next layer's kernel functions apply a spatial transformation to the previous layer's feature vector, an SVM classifier classifies the spatially transformed data, each feature vector yields one value per SVM, and the output values of the SVMs retained after screening form the feature vector used as the next layer's input;
Step 5: stop growing automatically
If during training the model's highest accuracy stays unchanged over 4 consecutive layers, or the total number of training iterations reaches 30 or more, training stops automatically.
2. The adaptive deep multiple kernel learning method of claim 1, characterized in that:
the base kernels of each layer are selected on the principle that common kernel functions can be obtained by combining them; the Laplacian kernel, the hyperbolic tangent kernel, the polynomial kernel, the inverse trigonometric kernel, and the radial basis function kernel, whose basic structures are completely distinct, are selected and combined.
3. The adaptive deep multiple kernel learning method of claim 1, characterized in that: for each layer's base kernels, the Rademacher chaos complexity of the corresponding sample space is computed; any base kernel whose value exceeds 10000 is discarded, and its result no longer enters the next layer. The Rademacher chaos complexity is computed as follows:
The generalization error ε_φ(f) of the SVM:
ε_φ(f) = ∫∫_{X×Y} φ(y f(x)) dp(x, y)
where x and y are variables denoting a sample and its label; all samples and their labels form the input spaces X and Y; p(x, y) is the distribution of the samples; φ is the loss function; and f is the mapping function of x;
The empirical error of the SVM:
ε̂_φ(f) = (1/n) Σ_{j=1}^{n} φ(y_j f(x_j))
where n is the number of samples, N = {1, 2, 3, …, n}, and j ∈ N;
Let φ denote the normalized classification loss. For all δ ∈ (0, 1), with probability at least 1 − δ, the following inequality holds:
The right-hand side of the inequality is the Rademacher chaos complexity used to score the complexity of each kernel. Here λ is the regularization coefficient, which takes one value when the kernel is a Gaussian kernel and a different value when it is a non-Gaussian kernel; L is a local Lipschitz constant; sup denotes the supremum; k denotes a single kernel function; K is the kernel set composed of all the kernel functions; K(x, x) denotes the inner product of sample x under kernel k; e is the base of the natural logarithm; and m denotes the number of base kernels.
CN201910139959.6A 2019-02-26 2019-02-26 Self-adaptive deep multi-core learning method Expired - Fee Related CN109871855B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910139959.6A CN109871855B (en) 2019-02-26 2019-02-26 Self-adaptive deep multi-core learning method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910139959.6A CN109871855B (en) 2019-02-26 2019-02-26 Self-adaptive deep multi-core learning method

Publications (2)

Publication Number Publication Date
CN109871855A true CN109871855A (en) 2019-06-11
CN109871855B CN109871855B (en) 2022-09-20

Family

ID=66919200

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910139959.6A Expired - Fee Related CN109871855B (en) 2019-02-26 2019-02-26 Self-adaptive deep multi-core learning method

Country Status (1)

Country Link
CN (1) CN109871855B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110264404A (en) * 2019-06-17 2019-09-20 北京邮电大学 A kind of method and apparatus of super resolution image texture optimization
CN110287180A (en) * 2019-06-25 2019-09-27 上海诚数信息科技有限公司 A kind of air control modeling method based on deep learning
CN110378380A (en) * 2019-06-17 2019-10-25 江苏大学 A kind of image classification method based on the study of multicore Ensemble classifier
CN111582299A (en) * 2020-03-18 2020-08-25 杭州铭之慧科技有限公司 Self-adaptive regularization optimization processing method for image deep learning model identification
CN113762303A (en) * 2020-11-23 2021-12-07 北京沃东天骏信息技术有限公司 Image classification method and device, electronic equipment and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107430610A (en) * 2015-02-13 2017-12-01 澳大利亚国家Ict有限公司 Learn from distributed data
US20180165597A1 (en) * 2016-12-08 2018-06-14 Resurgo, Llc Machine Learning Model Evaluation in Cyber Defense

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107430610A (en) * 2015-02-13 2017-12-01 澳大利亚国家Ict有限公司 Learn from distributed data
US20180165597A1 (en) * 2016-12-08 2018-06-14 Resurgo, Llc Machine Learning Model Evaluation in Cyber Defense

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
CHANG XU ET AL.: "Large-Margin Multi-View Information Bottleneck", 《IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE》 *
Lyu Hongyan et al.: "Application of Combined-Kernel SVM in Speaker Recognition", Computer Systems & Applications *
Tang Jingjing et al.: "Multi-view Support Vector Machine Based on Margin Transfer", Operations Research Transactions *
Yang Guopeng et al.: "Hyperspectral Remote Sensing Image Classification Based on Kernel Fisher Discriminant Analysis", Journal of Remote Sensing *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110264404A (en) * 2019-06-17 2019-09-20 北京邮电大学 A kind of method and apparatus of super resolution image texture optimization
CN110378380A (en) * 2019-06-17 2019-10-25 江苏大学 A kind of image classification method based on the study of multicore Ensemble classifier
CN110378380B (en) * 2019-06-17 2023-09-29 江苏大学 Image classification method based on multi-core integrated classification learning
CN110287180A (en) * 2019-06-25 2019-09-27 上海诚数信息科技有限公司 A kind of air control modeling method based on deep learning
CN111582299A (en) * 2020-03-18 2020-08-25 杭州铭之慧科技有限公司 Self-adaptive regularization optimization processing method for image deep learning model identification
CN111582299B (en) * 2020-03-18 2022-11-01 杭州铭之慧科技有限公司 Self-adaptive regularization optimization processing method for image deep learning model identification
CN113762303A (en) * 2020-11-23 2021-12-07 北京沃东天骏信息技术有限公司 Image classification method and device, electronic equipment and storage medium
CN113762303B (en) * 2020-11-23 2024-05-24 北京沃东天骏信息技术有限公司 Image classification method, device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN109871855B (en) 2022-09-20

Similar Documents

Publication Publication Date Title
CN109871855A (en) A kind of adaptive depth Multiple Kernel Learning method
Pitchai et al. RETRACTED ARTICLE: Brain Tumor Segmentation Using Deep Learning and Fuzzy K-Means Clustering for Magnetic Resonance Images
CN109902722A (en) Classifier, neural network model training method, data processing equipment and medium
CN111428733B (en) Zero sample target detection method and system based on semantic feature space conversion
Ghosh et al. Unsupervised grow-cut: cellular automata-based medical image segmentation
WO2022126810A1 (en) Text clustering method
WO2018081607A2 (en) Methods of systems of generating virtual multi-dimensional models using image analysis
Sudha et al. Classification of brain tumor grades using neural network
AU2019223959B2 (en) Three-dimensional cell and tissue image analysis for cellular and sub-cellular morphological modeling and classification
CN112001788A (en) Credit card default fraud identification method based on RF-DBSCAN algorithm
Rampun et al. Breast density classification using local ternary patterns in mammograms
Wetteland et al. Multiclass Tissue Classification of Whole-Slide Histological Images using Convolutional Neural Networks.
Sharma et al. A few shot learning based approach for hardware trojan detection using deep siamese cnn
CN109191452B (en) Peritoneal transfer automatic marking method for abdominal cavity CT image based on active learning
CN116582309A (en) GAN-CNN-BiLSTM-based network intrusion detection method
Qiao et al. Lung nodule classification using curvelet transform, LDA algorithm and BAT-SVM algorithm
CN114139482A (en) EDA circuit failure analysis method based on depth measurement learning
CN113627522A (en) Image classification method, device and equipment based on relational network and storage medium
Younas et al. An Efficient Methodology for the Classification of Invasive Ductal Carcinoma Using Transfer Learning
Malarvizhi et al. Feature Linkage Weight Based Feature Reduction using Fuzzy Clustering Method
Duan et al. Bio-inspired visual attention model and saliency guided object segmentation
Shi et al. Mitigating biases in long-tailed recognition via semantic-guided feature transfer
Kishore et al. Novel method for the segmentation of brain images using the Fcm clustering approach as well as rough set
Yadav et al. Brain tumor detection and optimization using hybrid classification algorithm
Jiang et al. Red blood cell detection by the improved two-layer watershed segmentation method with a full convolutional neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20220920
