CN115905855A - Improved meta-learning algorithm MG-Reptile - Google Patents

Improved meta-learning algorithm MG-Reptile

Info

Publication number
CN115905855A
Authority
CN
China
Prior art keywords
sample
formula
discriminator
training
learning algorithm
Prior art date
Legal status: Pending
Application number
CN202211173214.XA
Other languages
Chinese (zh)
Inventor
毕红亮 (Bi Hongliang)
张文博 (Zhang Wenbo)
陈艳姣 (Chen Yanjiao)
周超阳 (Zhou Chaoyang)
Current Assignee
Northwestern Polytechnical University
Original Assignee
Northwestern Polytechnical University
Priority date: 2022-09-26
Filing date: 2022-09-26
Publication date: 2023-04-04
Application filed by Northwestern Polytechnical University
Priority to CN202211173214.XA
Publication of CN115905855A
Status: Pending

Classifications

    • Y: General tagging of new technological developments; general tagging of cross-sectional technologies spanning over several sections of the IPC; technical subjects covered by former USPC cross-reference art collections [XRACs] and digests
    • Y02: Technologies or applications for mitigation or adaptation against climate change
    • Y02T: Climate change mitigation technologies related to transportation
    • Y02T10/00: Road transport of goods or passengers
    • Y02T10/10: Internal combustion engine [ICE] based vehicles
    • Y02T10/40: Engine management systems

Abstract

The invention relates to the technical field of meta-learning algorithms, and in particular to an improved meta-learning algorithm MG-Reptile, which constructs a basic model; inputs training samples and new samples; introduces a distribution measurement strategy (DMS) to optimize the basic model; and optimizes the basic model using a generative adversarial network (GAN). Compared with traditional algorithms, the method solves the problem of low model generalization when only a small number of training samples are available, and can effectively predict new samples whose distribution differs substantially from the training data.

Description

Improved meta-learning algorithm MG-Reptile
Technical Field
The invention relates to the technical field of meta-learning algorithms, in particular to an improved meta-learning algorithm MG-Reptile.
Background
The Reptile algorithm can perform gradient descent on each task using only first-order information to solve the meta-learning model-update problem, without computing the second-order derivatives required by the Model-Agnostic Meta-Learning (MAML) algorithm. That is, Reptile can achieve MAML-like performance with less memory.
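For context, a minimal sketch of this first-order Reptile outer update, assuming a PyTorch model and a task data loader; the names and hyperparameters are illustrative, not taken from the patent:

```python
import copy
import torch

def reptile_outer_step(model, task_loader, loss_fn,
                       inner_lr=1e-3, outer_lr=0.1, inner_steps=5):
    """Adapt a copy of the model on one task with plain SGD (first-order only),
    then move the shared initialization toward the adapted weights."""
    adapted = copy.deepcopy(model)
    opt = torch.optim.SGD(adapted.parameters(), lr=inner_lr)
    batches = iter(task_loader)
    for _ in range(inner_steps):                # no second derivatives, unlike MAML
        x, y = next(batches)
        opt.zero_grad()
        loss_fn(adapted(x), y).backward()
        opt.step()
    with torch.no_grad():                       # phi <- phi + eps * (theta - phi)
        for p, q in zip(model.parameters(), adapted.parameters()):
            p.add_(outer_lr * (q - p))
```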
Although existing meta-learning-based models can learn a new task well from a small number of samples, they still need enough training samples to build the model in the first place, and building the model from only a few training samples may result in meta-overfitting. In addition, existing meta-learning work generally requires a certain similarity between the sample distributions of the training users and the new user, so its generalization ability is weak: when the distribution of the samples to be predicted differs greatly, the performance of models constructed by existing methods degrades.
Disclosure of Invention
The invention aims to provide an improved meta-learning algorithm MG-Reptile to solve the problem of low generalization performance of a basic model trained on a small number of samples, and to effectively predict new samples with large distribution differences.
In order to achieve the purpose, the invention provides the following technical scheme:
the invention provides an improved meta-learning algorithm MG-replay, which comprises the following steps:
constructing a basic model;
inputting a training sample and a new sample;
introducing a distribution measurement strategy (DMS) to optimize the basic model;
optimizing the basic model using a generative adversarial network (GAN).
Preferably, introducing the distribution measurement strategy DMS to optimize the basic model specifically comprises:
initializing the network parameters and the number of outer-loop iterations;
updating the network parameters in an inner loop;
dividing the input data of each user into k mini-batches, performing k iterations, and calculating the maximum mean discrepancy (MMD);
adding an extra (k+1)-th iteration;
calculating the average $\bar{\theta}$ of the parameters over all iterations and updating the network initialization parameters in the outer loop.
Preferably, the maximum mean discrepancy MMD is calculated as:
$$\mathrm{MMD}(X,Y)=\left\|\frac{1}{m}\sum_{i=1}^{m}\phi(x_i)-\frac{1}{m}\sum_{i=1}^{m}\phi(y_i)\right\|_{\mathcal{H}}$$
where $x_i$ and $y_i$ denote samples from two different mini-batches of length m, and $\phi$ is the kernel feature map.
Preferably, the formula for updating the network initialization parameters in the outer loop is:
$$\phi \leftarrow \phi + \varepsilon\,(\bar{\theta}_i - \phi)$$
where $\bar{\theta}_i$ denotes the averaged update parameters from the i-th iteration and $\varepsilon$ denotes the learning rate.
Preferably, optimizing the basic model using the generative adversarial network GAN specifically comprises:
modifying the discriminator so that the classification part and the discrimination part share part of the network, the discrimination part being used to estimate the sample distribution distance between the new user and the training users;
using the classification information of the discriminator to constrain the optimization direction of the model, and further calculating the distribution distance at the class level;
predicting the class of an input sample according to the class-level distribution distance;
and optimizing the parameters according to the sample distribution distance between the new user and the training users and the class-level distribution distance.
Preferably, the sample distribution distance between the new user and the training users is calculated as:
$$L_{dw} = \frac{1}{n_s}\sum_{i=1}^{n_s} f_{dw}\big(f_g(x_i^s)\big) - \frac{1}{n_t}\sum_{j=1}^{n_t} f_{dw}\big(f_g(x_j^t)\big) + L_{grad}$$
where $L_{dw}$ is the distance measurement function of the discrimination part $f_{dw}$ of the discriminator, $f_g$ denotes feature extraction, $X_s$ are the training samples, $X_t$ the new-user samples, $n_s$ the number of training samples, $n_t$ the number of new-user samples, and $L_{grad}$ the gradient penalty loss function.
Preferably, the class-level distribution distance is calculated as:
$$L_{dc} = -\frac{1}{n_s}\sum_{i=1}^{n_s} y_i^s \log \hat{y}_i^s - \frac{1}{n_t}\sum_{i=1}^{n_t} \tilde{y}_i^t \log \hat{y}_i^t$$
where $L_{dc}$ is the distance measurement function of the classification part of the discriminator, $y_i^s$ is the one-hot encoded vector converted from the true label of the i-th training sample, $\hat{y}_i^s$ is the predicted output of the discriminator on the i-th training sample, and $\tilde{y}_i^t$ and $\hat{y}_i^t$ are, respectively, the one-hot encoded vector converted from the discriminator's predicted output and the predicted output itself for the i-th sample of the new user.
Preferably, the class of an input sample is predicted by the global classification function:
$$L_{c} = -\frac{1}{n_s}\sum_{i=1}^{n_s} y_i^s \log \hat{p}_i^s - \frac{1}{n_t}\sum_{i=1}^{n_t} \tilde{y}_i^t \log \hat{p}_i^t$$
where $L_c$ is the global classification function, $\hat{p}_i^s$ is the predicted output for the i-th training sample and $\hat{p}_i^t$ is the predicted output for the i-th new-user sample.
Preferably, the parameters are optimized according to:
$$\min_{\theta_c,\theta_g}\;\max_{\theta_{dc},\theta_{dw}}\; L_c + \alpha L_{dc} + \gamma\,(L_{dw} - L_{grad})$$
where $\theta_c$ are the parameters of the overall classifier, $\theta_g$ the parameters of the feature extractor, $\theta_{dc}$ and $\theta_{dw}$ the parameters of the discriminator, $\alpha$ and $\gamma$ weight parameters, $L_{dw}$ and $L_{dc}$ the discriminator functions, and $L_{grad}$ the gradient penalty loss function.
Therefore, by introducing the distribution measurement strategy DMS and using the generative adversarial network GAN to optimize the basic model, the technical scheme provided by the invention solves the problem of low generalization performance that existing basic meta-learning models suffer when trained on a small number of samples, and can effectively predict new samples with large distribution differences.
Drawings
FIG. 1 is a schematic structural diagram of the algorithm provided in an embodiment of the present specification;
FIG. 2 is a comparison chart of the experimental results provided in an embodiment of the present specification.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below. It is to be understood that the embodiments described are only a few embodiments of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention provides an improved meta-learning algorithm MG-Reptile, comprising:
constructing a basic model: a CNN is taken as the base network of the model, and the extracted one-dimensional feature data are finally classified by a softmax layer;
Inputting a training sample and a new sample;
introducing a distribution measurement strategy (DMS) to optimize the basic model;
optimizing the basic model using a generative adversarial network (GAN).
Introducing the distribution measurement strategy DMS to optimize the basic model specifically comprises the following steps:
initializing the network parameters and the number of outer-loop iterations;
updating the network parameters in an inner loop;
dividing the input data of each user into k mini-batches, performing k iterations, and calculating the maximum mean discrepancy (MMD);
adding an extra (k+1)-th iteration;
calculating the average $\bar{\theta}$ of the parameters over all iterations and updating the network initialization parameters in the outer loop.
The maximum mean discrepancy MMD is calculated as:
$$\mathrm{MMD}(X,Y)=\left\|\frac{1}{m}\sum_{i=1}^{m}\phi(x_i)-\frac{1}{m}\sum_{i=1}^{m}\phi(y_i)\right\|_{\mathcal{H}}$$
where $x_i$ and $y_i$ denote samples from two different mini-batches of length m, and $\phi$ is the kernel feature map.
The formula for updating the network initialization parameters in the outer loop is:
$$\phi \leftarrow \phi + \varepsilon\,(\bar{\theta}_i - \phi)$$
where $\bar{\theta}_i$ denotes the averaged update parameters from the i-th iteration and $\varepsilon$ denotes the learning rate.
Optimizing the basic model using the generative adversarial network GAN specifically comprises:
modifying the discriminator so that the classification part and the discrimination part share part of the network, the discrimination part being used to estimate the sample distribution distance between the new user and the training users;
using the classification information of the discriminator to constrain the optimization direction of the model, and further calculating the distribution distance at the class level;
predicting the class of an input sample according to the class-level distribution distance;
and optimizing the parameters according to the sample distribution distance between the new user and the training users and the class-level distribution distance.
The sample distribution distance between the new user and the training users is calculated as:
$$L_{dw} = \frac{1}{n_s}\sum_{i=1}^{n_s} f_{dw}\big(f_g(x_i^s)\big) - \frac{1}{n_t}\sum_{j=1}^{n_t} f_{dw}\big(f_g(x_j^t)\big) + L_{grad}$$
where $L_{dw}$ is the distance measurement function of the discrimination part $f_{dw}$ of the discriminator, $f_g$ denotes feature extraction, $X_s$ are the training samples, $X_t$ the new-user samples, $n_s$ the number of training samples, $n_t$ the number of new-user samples, and $L_{grad}$ the gradient penalty loss function.
The class-level distribution distance is calculated as:
$$L_{dc} = -\frac{1}{n_s}\sum_{i=1}^{n_s} y_i^s \log \hat{y}_i^s - \frac{1}{n_t}\sum_{i=1}^{n_t} \tilde{y}_i^t \log \hat{y}_i^t$$
where $L_{dc}$ is the distance measurement function of the classification part of the discriminator, $y_i^s$ is the one-hot encoded vector converted from the true label of the i-th training sample, $\hat{y}_i^s$ is the predicted output of the discriminator on the i-th training sample, and $\tilde{y}_i^t$ and $\hat{y}_i^t$ are, respectively, the one-hot encoded vector converted from the discriminator's predicted output and the predicted output itself for the i-th sample of the new user.
The class of an input sample is predicted by the global classification function:
$$L_{c} = -\frac{1}{n_s}\sum_{i=1}^{n_s} y_i^s \log \hat{p}_i^s - \frac{1}{n_t}\sum_{i=1}^{n_t} \tilde{y}_i^t \log \hat{p}_i^t$$
where $L_c$ is the global classification function, $\hat{p}_i^s$ is the predicted output for the i-th training sample and $\hat{p}_i^t$ is the predicted output for the i-th new-user sample.
The parameters are optimized according to:
$$\min_{\theta_c,\theta_g}\;\max_{\theta_{dc},\theta_{dw}}\; L_c + \alpha L_{dc} + \gamma\,(L_{dw} - L_{grad})$$
where $\theta_c$ are the parameters of the overall classifier, $\theta_g$ the parameters of the feature extractor, $\theta_{dc}$ and $\theta_{dw}$ the parameters of the discriminator, $\alpha$ and $\gamma$ weight parameters, $L_{dw}$ and $L_{dc}$ the discriminator functions, and $L_{grad}$ the gradient penalty loss function.
The introduced DMS comprises an inner-loop part and an outer-loop part. It optimizes the basic model and constructs an MG-Reptile model for the limited-sample setting: the maximum mean discrepancy MMD is calculated on the basis of k iterations and a (k+1)-th iteration is added, which improves the generalization ability of the model.
The MG-Reptile model is then further optimized using the generative adversarial network GAN, which improves its effectiveness on new samples with large distribution differences.
The present invention will be further described with reference to the following examples.
Example 1:
The architecture of the overall algorithm provided by the invention is shown in FIG. 1, and the specific workflow is as follows (the original filing renders it as a pseudocode image; a sketch is given below):
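As a hedged illustration of this workflow (steps (11) to (14) below), here is a minimal PyTorch sketch: k inner SGD iterations per user over k mini-batches plus the extra (k+1)-th iteration, the parameter average over all iterations, and the Reptile-style outer update with an exponentially decaying step. The batch handling and the constants a and b are assumptions for illustration, not values from the patent; the MMD computation is sketched separately after its formula below.

```python
import copy
import math
import torch

def mg_reptile_outer(model, users, loss_fn, k=4, inner_lr=1e-3,
                     a=0.1, b=0.05, outer_iters=100):
    """users: one entry per user, each a list of k (x, y) mini-batches."""
    for i in range(outer_iters):
        eps = a * math.exp(-b * i)              # assumed exponential step decay
        theta_bar = [torch.zeros_like(p) for p in model.parameters()]
        for batches in users:
            adapted = copy.deepcopy(model)      # inner loop starts from phi
            opt = torch.optim.SGD(adapted.parameters(), lr=inner_lr)
            snaps = []
            for t in range(k + 1):              # k iterations plus the extra (k+1)-th
                x, y = batches[t % k]
                opt.zero_grad()
                loss_fn(adapted(x), y).backward()
                opt.step()
                snaps.append([p.detach().clone() for p in adapted.parameters()])
            for j in range(len(theta_bar)):     # average the parameters of all iterations
                theta_bar[j] += torch.stack([s[j] for s in snaps]).mean(0) / len(users)
        with torch.no_grad():                   # phi <- phi + eps_i * (theta_bar - phi)
            for p, q in zip(model.parameters(), theta_bar):
                p.add_(eps * (q - p))
```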
For better illustration of the invention, the method comprises the following steps:
(1) A distribution measurement strategy DMS, comprising an inner-loop part and an outer-loop part, is introduced to optimize Reptile and construct the MG-Reptile model for the limited-sample setting: the maximum mean discrepancy MMD is calculated on the basis of k iterations and a (k+1)-th iteration is added, which improves the generalization capability of the model.
The method specifically comprises the following steps:
(11) A CNN is taken as the base network of the model, and the extracted one-dimensional feature data are finally classified through a softmax layer;
(12) DMS is introduced to optimize Reptile on the basic model: the network parameters and the number of outer-loop iterations are initialized, the network parameters are updated in the inner loop, the input data of each user are divided into k mini-batches, k iterations are performed, and the maximum mean discrepancy MMD is calculated;
(13) After the k iterations are performed, an additional (k+1)-th iteration is added;
(14) The average $\bar{\theta}$ of the parameters over all iterations is calculated, and the network initialization parameters are updated in the outer loop.
Further, the maximum mean discrepancy MMD is calculated as:
$$\mathrm{MMD}(X,Y)=\left\|\frac{1}{m}\sum_{i=1}^{m}\phi(x_i)-\frac{1}{m}\sum_{i=1}^{m}\phi(y_i)\right\|_{\mathcal{H}}$$
where $x_i$ and $y_i$ denote samples from two different mini-batches of length m, and $\phi$ is the kernel feature map.
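A short sketch of this computation, assuming a Gaussian kernel as the feature map φ (the patent does not name the kernel):

```python
import torch

def mmd(x, y, sigma=1.0):
    """Biased empirical MMD between two mini-batches x, y of shape (m, d)."""
    def k(a, b):
        return torch.exp(-torch.cdist(a, b) ** 2 / (2 * sigma ** 2))
    return (k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()).sqrt()
```

For instance, mmd(x1.flatten(1), x2.flatten(1)) would compare two of the k mini-batches from step (12).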
Further, the formula for updating the network initialization parameters in the outer loop is:
$$\phi \leftarrow \phi + \varepsilon\,(\bar{\theta}_i - \phi)$$
where $\bar{\theta}_i$ denotes the averaged update parameters from the i-th iteration and $\varepsilon$ denotes the learning rate; the update direction is determined by the subtraction from the initial parameters $\phi$.
A CNN is used as the base network of MG-Reptile. It mainly comprises four convolutional layers, four pooling layers, a fully connected layer, and a softmax layer. The convolutional layers extract low-level features of the data, and the pooling layers reduce the feature dimensionality and avoid overfitting. The features output from the last pooling layer are flattened by the fully connected layer, and the resulting one-dimensional data are finally classified by the softmax layer. The window size of each pooling layer is set to 5. The convolution kernel size and the number of convolution kernels in each convolutional layer are set to 3 and 800, respectively. To reduce overfitting of the model, the dropout rate of the fully connected layer is set to 0.5 during training.
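A PyTorch sketch consistent with this description follows. The input channel count and the number of classes are assumptions, and nn.LazyLinear is used only so the flattened feature size need not be computed by hand; the window-5 pooling chain implies an input length of at least 5^4 = 625 samples.

```python
import torch
import torch.nn as nn

class BaseCNN(nn.Module):
    def __init__(self, in_channels=1, num_classes=12):
        super().__init__()
        layers, c = [], in_channels
        for _ in range(4):                        # four conv (k=3, 800 filters) + pool (w=5)
            layers += [nn.Conv1d(c, 800, kernel_size=3, padding=1),
                       nn.ReLU(),
                       nn.MaxPool1d(kernel_size=5)]
            c = 800
        self.features = nn.Sequential(*layers)
        self.head = nn.Sequential(
            nn.Flatten(),                         # flatten the last pooling output
            nn.Dropout(0.5),                      # dropout rate 0.5 during training
            nn.LazyLinear(num_classes),           # the fully connected layer
        )

    def forward(self, x):                         # x: (batch, in_channels, length >= 625)
        return torch.softmax(self.head(self.features(x)), dim=1)
```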
The DMS is introduced on top of the base model. To effectively learn prior knowledge from multiple users with few training samples, Reptile is optimized by introducing the DMS (comprising an inner loop and an outer loop): the inner loop updates the network parameters, calculates the MMD, and adds an additional iteration, while the outer loop solves for the globally optimal parameters. In this way, training can be guided toward the direction of optimal model generalization, and the dependence of the model on the training samples is reduced.
After the outer loop completes, the globally optimal initialization parameters are obtained. Note, however, that when updating the parameters in the outer loop, the learning rate is set to decrease exponentially, with an update step of the exponentially decaying form:
$$\varepsilon_i = a \cdot e^{-b\,i}$$
where $i$ indexes the i-th outer loop, and $a$ and $b$ are two parameters that control the range of the update step size so as to find the globally optimal solution. In the inner loop, the learning rate is fixed to a constant 0.001. Using the improved Reptile, training can be guided toward the direction of optimal model generalization while reducing the dependence of the model on the training samples.
(2) The MG-Reptile model is further optimized using the generative adversarial network GAN, improving its effectiveness on new samples with large distribution differences.
The method specifically comprises the following steps:
(21) The discriminator is modified so that the classification part and the discrimination part share part of the network and differ only in the output layer. The discrimination part is used to estimate the sample distribution distance between the new user and the training users, calculated as:
$$L_{dw} = \frac{1}{n_s}\sum_{i=1}^{n_s} f_{dw}\big(f_g(x_i^s)\big) - \frac{1}{n_t}\sum_{j=1}^{n_t} f_{dw}\big(f_g(x_j^t)\big) + L_{grad}$$
where $L_{dw}$ is the distance measurement function of the discrimination part $f_{dw}$ of the discriminator, $f_g$ denotes feature extraction, $X_s$ are the training samples, $X_t$ the new-user samples, $n_s$ the number of training samples, $n_t$ the number of new-user samples, and $L_{grad}$ the gradient penalty loss function;
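The formula follows the familiar WGAN-GP pattern; under that reading, a sketch (with f_g the feature extractor, f_dw the critic head, and a conventional penalty weight of 10, all assumptions where the patent is silent):

```python
import torch

def sample_distribution_distance(f_g, f_dw, x_s, x_t, gp_weight=10.0):
    h_s, h_t = f_g(x_s), f_g(x_t)
    l_dw = f_dw(h_s).mean() - f_dw(h_t).mean()        # critic gap: training vs new user
    n = min(h_s.size(0), h_t.size(0))
    eps = torch.rand(n, 1, device=h_s.device)
    # interpolate detached features so the penalty has its own graph
    mix = (eps * h_s[:n] + (1 - eps) * h_t[:n]).detach().requires_grad_(True)
    grad, = torch.autograd.grad(f_dw(mix).sum(), mix, create_graph=True)
    l_grad = ((grad.norm(2, dim=1) - 1) ** 2).mean()  # gradient penalty toward 1-Lipschitz
    return l_dw + gp_weight * l_grad
```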
(22) The gradient distribution estimated from a small number of samples is inaccurate, which prevents the feature extractor from effectively learning the sample distribution from the discrimination result alone. Therefore, the classification information of the discriminator is used to constrain the model optimization direction, and the distribution distance at the class level is further calculated as:
$$L_{dc} = -\frac{1}{n_s}\sum_{i=1}^{n_s} y_i^s \log \hat{y}_i^s - \frac{1}{n_t}\sum_{i=1}^{n_t} \tilde{y}_i^t \log \hat{y}_i^t$$
where $L_{dc}$ is the distance measurement function of the classification part of the discriminator, $y_i^s$ is the one-hot encoded vector converted from the true label of the i-th training sample, $\hat{y}_i^s$ is the predicted output of the discriminator on the i-th training sample, and $\tilde{y}_i^t$ and $\hat{y}_i^t$ are, respectively, the one-hot encoded vector converted from the discriminator's predicted output and the predicted output itself for the i-th sample of the new user. Training samples whose feature distribution the discriminator estimates to be similar to the new user's are selected for model updating, which realizes sample expansion and reduces the influence of distribution-estimation errors caused by having few samples;
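One plausible reading of this class-level distance is cross-entropy against the true labels on the training side and against the discriminator's own hardened predictions (pseudo-labels) on the new-user side; the pseudo-label interpretation is an assumption:

```python
import torch.nn.functional as F

def class_level_distance(logits_dc_s, y_s, logits_dc_t):
    l_s = F.cross_entropy(logits_dc_s, y_s)         # one-hot true labels, training side
    pseudo = logits_dc_t.argmax(dim=1).detach()     # hardened predictions, new-user side
    l_t = F.cross_entropy(logits_dc_t, pseudo)
    return l_s + l_t
```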
(23) The class of the input sample is predicted by the global classification function:
$$L_{c} = -\frac{1}{n_s}\sum_{i=1}^{n_s} y_i^s \log \hat{p}_i^s - \frac{1}{n_t}\sum_{i=1}^{n_t} \tilde{y}_i^t \log \hat{p}_i^t$$
where $L_c$ is the global classification function, $\hat{p}_i^s$ is the predicted output of the i-th training sample and $\hat{p}_i^t$ is the predicted output of the i-th new-user sample;
(24) The parameters are optimized according to:
$$\min_{\theta_c,\theta_g}\;\max_{\theta_{dc},\theta_{dw}}\; L_c + \alpha L_{dc} + \gamma\,(L_{dw} - L_{grad})$$
where $\theta_c$ are the parameters of the overall classifier, $\theta_g$ the parameters of the feature extractor, $\theta_{dc}$ and $\theta_{dw}$ the parameters of the discriminator, $\alpha$ and $\gamma$ weight parameters, $L_{dw}$ and $L_{dc}$ the discriminator functions, and $L_{grad}$ the gradient penalty loss function.
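How these terms combine into one adversarial step can be sketched as below; the grouping of α and γ and the sign conventions are assumptions consistent with the usual min-max pattern for this kind of objective:

```python
def assemble_losses(l_c, l_dc, l_dw, l_grad, alpha=1.0, gamma=1.0):
    # discriminator step (theta_dc, theta_dw): push the distances apart, keep the penalty
    d_loss = -(gamma * l_dw + alpha * l_dc) + l_grad
    # feature extractor + classifier step (theta_g, theta_c): classify well, shrink distances
    g_loss = l_c + gamma * l_dw + alpha * l_dc
    return g_loss, d_loss
```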
It is noted that the model is improved in conjunction with the GAN. Although the constructed model can effectively learn prior knowledge from a small number of training samples, its generalization to new users with large distribution differences is still poor. Therefore, the model is further optimized in combination with the GAN: the previously constructed network serves as the feature extractor (generator), the classifier performs classification prediction, and the discriminator analyzes the differences between the features generated for the new user and for the training users. Adversarial training improves the adaptability of the model.
In this embodiment, experiments are performed on 12 kinds of head-gesture data to compare different algorithms. The experimental results are shown in FIG. 2: the proposed MG-Reptile model achieves an average recognition accuracy of 97.12%, significantly better than the other algorithms.
The above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not depart from the spirit and scope of the corresponding technical solutions.

Claims (9)

1. An improved meta-learning algorithm MG-Reptile, comprising:
constructing a basic model;
inputting training samples and new samples;
introducing a distribution measurement strategy (DMS) to optimize the basic model;
and optimizing the basic model using a generative adversarial network (GAN).
2. The improved meta-learning algorithm MG-Reptile according to claim 1, wherein introducing the distribution measurement strategy DMS to optimize the basic model specifically comprises:
initializing the network parameters and the number of outer-loop iterations;
updating the network parameters in an inner loop;
dividing the input data of each user into k mini-batches, performing k iterations, and calculating the maximum mean discrepancy (MMD);
adding an extra (k+1)-th iteration;
calculating the average $\bar{\theta}$ of the parameters over all iterations and updating the network initialization parameters in the outer loop.
3. The improved meta-learning algorithm MG-Reptile according to claim 2, wherein the maximum mean discrepancy MMD is calculated as:
$$\mathrm{MMD}(X,Y)=\left\|\frac{1}{m}\sum_{i=1}^{m}\phi(x_i)-\frac{1}{m}\sum_{i=1}^{m}\phi(y_i)\right\|_{\mathcal{H}}$$
where $x_i$ and $y_i$ denote samples from two different mini-batches of length m, and $\phi$ is the kernel feature map.
4. The improved meta-learning algorithm MG-Reptile according to claim 2, wherein the formula for updating the network initialization parameters in the outer loop is:
$$\phi \leftarrow \phi + \varepsilon\,(\bar{\theta}_i - \phi)$$
where $\bar{\theta}_i$ denotes the averaged update parameters from the i-th iteration and $\varepsilon$ denotes the learning rate.
5. The improved meta-learning algorithm MG-Reptile according to claim 1, wherein optimizing the basic model using the generative adversarial network GAN specifically comprises:
modifying the discriminator so that the classification part and the discrimination part share part of the network, the discrimination part being used to estimate the sample distribution distance between the new user and the training users;
using the classification information of the discriminator to constrain the optimization direction of the model, and further calculating the distribution distance at the class level;
predicting the class of an input sample according to the class-level distribution distance;
and optimizing the parameters according to the sample distribution distance between the new user and the training users and the class-level distribution distance.
6. The improved meta-learning algorithm MG-Reptile according to claim 5, wherein the sample distribution distance between the new user and the training users is calculated as:
$$L_{dw} = \frac{1}{n_s}\sum_{i=1}^{n_s} f_{dw}\big(f_g(x_i^s)\big) - \frac{1}{n_t}\sum_{j=1}^{n_t} f_{dw}\big(f_g(x_j^t)\big) + L_{grad}$$
where $L_{dw}$ is the distance measurement function of the discrimination part $f_{dw}$ of the discriminator, $f_g$ denotes feature extraction, $X_s$ are the training samples, $X_t$ the new-user samples, $n_s$ the number of training samples, $n_t$ the number of new-user samples, and $L_{grad}$ the gradient penalty loss function.
7. The improved meta-learning algorithm MG-Reptile according to claim 5, wherein the class-level distribution distance is calculated as:
$$L_{dc} = -\frac{1}{n_s}\sum_{i=1}^{n_s} y_i^s \log \hat{y}_i^s - \frac{1}{n_t}\sum_{i=1}^{n_t} \tilde{y}_i^t \log \hat{y}_i^t$$
where $L_{dc}$ is the distance measurement function of the classification part of the discriminator, $y_i^s$ is the one-hot encoded vector converted from the true label of the i-th training sample, $\hat{y}_i^s$ is the predicted output of the discriminator on the i-th training sample, and $\tilde{y}_i^t$ and $\hat{y}_i^t$ are, respectively, the one-hot encoded vector converted from the discriminator's predicted output and the predicted output itself for the i-th sample of the new user.
8. The improved meta-learning algorithm MG-Reptile according to claim 5, wherein the class of an input sample is predicted by the global classification function:
$$L_{c} = -\frac{1}{n_s}\sum_{i=1}^{n_s} y_i^s \log \hat{p}_i^s - \frac{1}{n_t}\sum_{i=1}^{n_t} \tilde{y}_i^t \log \hat{p}_i^t$$
where $L_c$ is the global classification function, $\hat{p}_i^s$ is the predicted output for the i-th training sample, and $\hat{p}_i^t$ is the predicted output for the i-th new-user sample.
9. The improved meta-learning algorithm MG-Reptile according to claim 5, wherein the parameters are optimized according to:
$$\min_{\theta_c,\theta_g}\;\max_{\theta_{dc},\theta_{dw}}\; L_c + \alpha L_{dc} + \gamma\,(L_{dw} - L_{grad})$$
where $\theta_c$ are the parameters of the overall classifier, $\theta_g$ the parameters of the feature extractor, $\theta_{dc}$ and $\theta_{dw}$ the parameters of the discriminator, $\alpha$ and $\gamma$ weight parameters, $L_{dw}$ and $L_{dc}$ the discriminator functions, and $L_{grad}$ the gradient penalty loss function.
CN202211173214.XA · Priority 2022-09-26 · Filed 2022-09-26 · Improved meta-learning algorithm MG-Reptile · Pending · CN115905855A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211173214.XA · 2022-09-26 · 2022-09-26 · Improved meta-learning algorithm MG-Reptile

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211173214.XA · 2022-09-26 · 2022-09-26 · Improved meta-learning algorithm MG-Reptile

Publications (1)

Publication Number Publication Date
CN115905855A · 2023-04-04

Family

ID=86488650

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211173214.XA · Pending · CN115905855A (en) · Improved meta-learning algorithm MG-Reptile

Country Status (1)

Country Link
CN (1) CN115905855A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116112762A (en) * 2023-04-17 2023-05-12 武汉理工大学三亚科教创新园 Meta-learning-based method for generating speaking video under supplementary data
CN116595443A (en) * 2023-07-17 2023-08-15 山东科技大学 Wireless signal book gesture recognition method based on meta learning
CN116595443B (en) * 2023-07-17 2023-10-03 山东科技大学 Wireless signal book gesture recognition method based on meta learning


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination