CN112560904A - Small sample target identification method based on adaptive model-agnostic meta-learning - Google Patents
Small sample target identification method based on adaptive model-agnostic meta-learning
- Publication number
- CN112560904A (application CN202011388919.4A)
- Authority
- CN
- China
- Prior art keywords
- learning
- meta
- task
- small sample
- representing
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
Abstract
The invention provides a small-sample target recognition method based on adaptive model-agnostic meta-learning (adaptive MAML), comprising the following steps: step S101: initializing the parameters of the small-sample target recognition model based on adaptive MAML; step S201: iteratively updating the task-specific parameters using a meta-learning method; step S301: optimizing the small-sample recognition model using the combined gradient direction. While retaining the basic characteristics of first-order MAML algorithms, the method effectively improves their model generalization ability and convergence speed, and keeps a space-time overhead almost identical to that of first-order MAML.
Description
Technical Field
The invention belongs to the field of small-sample (few-shot) learning in machine learning, and in particular relates to a method for constructing a small-sample target recognition algorithm based on adaptive model-agnostic meta-learning (hereinafter referred to as the small-sample target recognition algorithm based on adaptive MAML).
Background
Existing machine learning methods can solve classification tasks well on large, balanced datasets, but in small-sample learning the insufficient amount of data causes under-fitting and thus poor performance. Whether a good model can be trained from only a small amount of data has become a key problem in small-sample learning.
Meta-learning handles small-sample classification tasks well. Its goal is to make machines learn to learn: through a systematic, data-driven approach it pre-learns on many similar small-sample classification tasks, so that when a new task arises the previously learned knowledge can guide the decision process and the model adapts to the new task quickly.
The small-sample target recognition algorithm most closely related to the present invention is the model-agnostic meta-learning (MAML) algorithm. However, the MAML algorithm needs to compute second-order gradients in the outer-loop update phase, which makes the training process costly in space and time. A series of first-order MAML algorithms (FOMAML, Reptile) currently simplify the computation by ignoring the second-order gradient terms, but they lose part of the gradient information and therefore suffer varying degrees of accuracy loss.
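The trade-off between full MAML and its first-order approximations can be illustrated with a minimal sketch on a toy family of one-dimensional quadratic tasks (the quadratic losses, step counts, and learning rates below are illustrative assumptions, not the algorithms' canonical settings):

```python
# Toy comparison of FOMAML and Reptile outer updates. Each task i has
# loss L_i(w) = (w - c_i)^2 with optimum c_i; both first-order methods
# avoid the second-order term that full MAML would backpropagate.

def grad(w, c):
    """Gradient of L(w) = (w - c)^2."""
    return 2.0 * (w - c)

def inner_loop(theta, c, alpha=0.1, steps=3):
    """Task-specific adaptation by plain gradient descent."""
    w = theta
    for _ in range(steps):
        w -= alpha * grad(w, c)
    return w

def fomaml_step(theta, tasks, alpha=0.1, beta=0.5):
    """FOMAML: apply the gradient at the adapted parameters directly,
    ignoring second-order terms of the adaptation trajectory."""
    g = sum(grad(inner_loop(theta, c, alpha), c) for c in tasks)
    return theta - beta * g / len(tasks)

def reptile_step(theta, tasks, alpha=0.1, beta=0.5):
    """Reptile: move theta toward the average adapted parameters."""
    d = sum(inner_loop(theta, c, alpha) - theta for c in tasks)
    return theta + beta * d / len(tasks)

theta = 0.0
tasks = [1.0, 3.0]          # per-task optima; their mean is 2.0
for _ in range(50):
    theta = reptile_step(theta, tasks)
# theta converges toward the mean task optimum
```

Both first-order rules drive θ toward a point from which each individual task can be reached in a few gradient steps, at a fraction of full MAML's space-time cost.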
Disclosure of Invention
In order to solve the small-sample classification problem in machine learning, the invention provides a small-sample target recognition method based on adaptive model-agnostic meta-learning, comprising the following steps:
step S101: initializing the parameters of the small-sample recognition model based on adaptive model-agnostic meta-learning;
step S201: iteratively updating the task-specific parameters using a meta-learning method;
step S301: optimizing the small-sample recognition model using the combined gradient direction.
Further, the step S101 includes:
initializing the parameters of the small-sample recognition model based on adaptive model-agnostic meta-learning, the parameters including: the meta-model parameters θ, the inner-loop learning rate α, the outer meta-loop learning rate β, the inner-loop hypergradient step size α0, and the outer-loop hypergradient step size β0.
Further, the step S201 includes:
sub-step S201a: dividing meta-learning tasks according to the auxiliary dataset;
sub-step S201b: inputting the divided meta-learning tasks into the small-sample recognition model based on adaptive model-agnostic meta-learning;
sub-step S201c: updating the task-specific parameters with a stochastic gradient descent algorithm.
Further, the sub-step S201a, dividing meta-learning tasks according to the auxiliary dataset, specifically includes:
dividing small-sample learning tasks in an N-way K-shot format, with the 5-way 3-shot format chosen by default for explanation. 5-way 3-shot means that 5 classes are sampled from the auxiliary dataset, with 4 samples per class, and split into a support set and a query set: the support set contains the 5 classes with 3 samples each, and the query set contains the remaining 1 sample of each of the 5 classes. The model trains each task on the support set and verifies the training accuracy on the query set.
Further, the sub-step S201 c: according to the divided meta-learning task, updating the specific parameters of the task by using a random gradient descent algorithm, wherein the expression of the specific parameters of the updated task is as follows:
wherein,representing the concrete parameters of the j step of the meta-learning task i; alpha is alphatRepresenting the inner loop learning rate of the t iteration;gradient operations representing continuous functions; l isiAnd k is the number of the small sample learning tasks.
Further, the step S301 includes:
sub-step S301a: updating the average gradient direction according to the task-specific parameters and the meta-learning module;
sub-step S301b: updating the inner-loop learning rate according to the task-specific parameters and the meta-learning module;
sub-step S301c: updating the outer meta-learning rate according to the task-specific parameters and the meta-learning module.
Further, the sub-step S301a: updating the average gradient direction according to the task-specific parameters and the meta-learning module, using a first-order approximate gradient technique, with the specific expression as follows:
where θ_t represents the meta-learning parameters at the t-th iteration; β_(t-1) represents the outer meta-learning rate at the t-th iteration; and θ_i^k represents the task-specific parameters of the small-sample learning task at the k-th step of meta-learning task i.
Further, the sub-step S301b: updating the inner-loop learning rate according to the task-specific parameters and the meta-learning module; the expression for updating the inner-loop learning rate is as follows:
where α_t represents the inner-loop learning rate at the t-th iteration; α_0 represents the inner-loop hypergradient step size; θ_i^j represents the task-specific parameters of the small-sample learning task at step j of meta-learning task i; ∇ represents the gradient operator on a continuous function; and L_i represents the loss function for evaluating meta-learning task i.
Further, the sub-step S301c: updating the outer meta-learning rate according to the task-specific parameters and the meta-learning module; the expression for updating the outer meta-learning rate is as follows:
where θ_t represents the meta-learning parameters at the t-th iteration; β_t represents the outer meta-learning rate at the t-th iteration; β_0 represents the outer-loop hypergradient step size; and θ_i^k represents the task-specific parameters of the small-sample learning task at the k-th step of meta-learning task i.
Advantageous effects:
The invention provides a small-sample target recognition method based on adaptive model-agnostic meta-learning which, while retaining the basic characteristics of first-order MAML, effectively improves the model generalization ability and convergence speed of first-order MAML algorithms, and keeps a space-time overhead almost identical to that of first-order MAML.
Drawings
FIG. 1 is a flowchart of the small-sample target recognition method based on adaptive model-agnostic meta-learning according to the present invention;
FIG. 2(a) shows the small-sample classification task of the method of the present invention on the Omniglot dataset;
FIG. 2(b) shows the small-sample classification task of the method of the present invention on the Mini-ImageNet dataset;
FIG. 3(a) shows 5-way 5-shot experiments on the Omniglot dataset;
FIG. 3(b) shows 5-way 1-shot experiments on the Omniglot dataset;
FIG. 3(c) shows 5-way 5-shot experiments on the Mini-ImageNet dataset;
FIG. 3(d) shows 5-way 1-shot experiments on the Mini-ImageNet dataset;
FIG. 4 shows the convergence test of the adaptive-MAML-based small-sample target recognition algorithm according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
The invention provides a small-sample target recognition method based on adaptive model-agnostic meta-learning, which includes the following steps:
step S101: initializing adaptive MAML based small sample identification model parameters.
The parameters of the adaptive-MAML-based small-sample recognition model to be initialized include: the meta-model parameters θ, the inner-loop learning rate α, the outer meta-loop learning rate β, the inner-loop hypergradient step size α0, and the outer-loop hypergradient step size β0. In general, the invention recommends a hypergradient step size of 1e-4 for α0 and β0; the learning rates and meta-model parameters need to be chosen manually to suit different tasks for better results.
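As a concrete sketch, the initialization of step S101 might look as follows (the dimensionality of θ and the values of α and β are illustrative placeholders; only the 1e-4 hypergradient step sizes follow the recommendation above):

```python
import random

random.seed(0)
params = {
    "theta":  [random.gauss(0.0, 0.01) for _ in range(8)],  # meta-model parameters
    "alpha":  0.01,   # inner-loop learning rate (chosen per task)
    "beta":   0.001,  # outer meta-loop learning rate (chosen per task)
    "alpha0": 1e-4,   # inner-loop hypergradient step size (recommended value)
    "beta0":  1e-4,   # outer-loop hypergradient step size (recommended value)
}
```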
Step S201: iteratively updating the task-specific parameters using a meta-learning method, which includes the following sub-steps:
sub-step S201 a: the meta-learning task is partitioned according to the auxiliary data set.
The invention divides small-sample learning tasks in an N-way K-shot format, and by default uses the 5-way 3-shot format for explanation. 5-way 3-shot means that 5 classes are sampled from the auxiliary dataset, with 4 samples per class, and split into a support set and a query set: the support set contains the 5 classes with 3 samples each, and the query set contains the remaining 1 sample of each of the 5 classes. The model trains each task on the support set and verifies the training accuracy on the query set. In general, the invention recommends partitioning small-sample learning tasks in a 5-way 5-shot or 5-way 1-shot format.
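The task division described above can be sketched as follows (the dictionary layout of the auxiliary dataset and the dummy example names are assumptions for illustration):

```python
import random

def sample_task(dataset, n_way=5, k_support=3, k_total=4, rng=random):
    """Sample one N-way task: k_total examples per class, split into
    k_support support examples and (k_total - k_support) query examples."""
    support, query = [], []
    for label in rng.sample(sorted(dataset), n_way):
        examples = rng.sample(dataset[label], k_total)
        support += [(x, label) for x in examples[:k_support]]
        query += [(x, label) for x in examples[k_support:]]
    return support, query

# toy auxiliary dataset: 10 classes with 20 dummy examples each
data = {c: ["img_%d_%d" % (c, i) for i in range(20)] for c in range(10)}
random.seed(0)
support, query = sample_task(data)
# 5 classes x 3 = 15 support examples and 5 x 1 = 5 query examples
```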
Sub-step S201b: inputting the divided meta-learning tasks into the adaptive-MAML-based small-sample recognition model.
Sub-step S201c: updating the task-specific parameters with a stochastic gradient descent algorithm according to the divided meta-learning tasks.
The task-specific parameters are updated as:

θ_i^j = θ_i^(j-1) − α_t ∇L_i(θ_i^(j-1))

where θ_i^j represents the task-specific parameters at step j of meta-learning task i; α_t represents the inner-loop learning rate at the t-th iteration; ∇ represents the gradient operator on a continuous function; and L_i represents the dedicated loss function for evaluating meta-learning task i.
Step S301: optimizing the small-sample recognition model using the combined gradient direction, which includes the following sub-steps:
Sub-step S301a: updating the average gradient direction according to the task-specific parameters and the meta-learning module.
The average gradient direction is updated using a first-order approximate gradient technique, with the specific expression as follows:
where θ_t represents the meta-learning parameters at the t-th iteration; β_(t-1) represents the outer meta-learning rate at the t-th iteration; and θ_i^k represents the task-specific parameters at the k-th step of meta-learning task i.
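Since the source reproduces the update expression only as an image, the following is a hedged sketch of a first-order averaged outer update in the spirit of FOMAML/Reptile; the exact averaging and scaling in the patent may differ:

```python
def outer_update(theta_t, adapted, beta_t):
    """Move theta_t along the average direction from theta_t toward each
    task's adapted parameters theta_i^k (first-order approximation)."""
    n, dim = len(adapted), len(theta_t)
    avg_dir = [sum(p[d] for p in adapted) / n - theta_t[d] for d in range(dim)]
    return [t + beta_t * d for t, d in zip(theta_t, avg_dir)]

theta_new = outer_update([0.0, 0.0], [[1.0, 2.0], [3.0, 0.0]], beta_t=0.5)
# the average adapted parameters are [2.0, 1.0]; theta moves halfway toward them
```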
Sub-step S301b: updating the inner-loop learning rate according to the task-specific parameters and the meta-learning module.
The expression for updating the inner-loop learning rate is as follows:
where α_t represents the inner-loop learning rate at the t-th iteration; α_0 represents the inner-loop hypergradient step size; θ_i^j represents the task-specific parameters at step j of meta-learning task i; ∇ represents the gradient operator on a continuous function; and L_i represents the dedicated loss function for evaluating meta-learning task i.
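The adaptive learning-rate update is likewise given only as an image in the source; a hedged sketch in the general hypergradient-descent style (the dot product of consecutive gradients acting as the hypergradient) is:

```python
def update_inner_lr(alpha_prev, grad_now, grad_before, alpha0=1e-4):
    """Grow alpha when consecutive gradients agree, shrink it when they
    oppose; alpha0 is the inner-loop hypergradient step size."""
    h = sum(a * b for a, b in zip(grad_now, grad_before))
    return alpha_prev + alpha0 * h

alpha = update_inner_lr(0.01, [1.0, 2.0], [1.0, 1.0])
# positive dot product (3.0), so alpha increases to about 0.0103
```

The outer meta-learning rate β_t would be adapted analogously with its own step size β_0.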
Sub-step S301c: updating the outer meta-learning rate according to the task-specific parameters and the meta-learning module. The expression for updating the outer meta-learning rate is as follows:
where θ_t represents the meta-learning parameters at the t-th iteration; β_t represents the outer meta-learning rate at the t-th iteration; β_0 represents the outer-loop hypergradient step size; and θ_i^k represents the task-specific parameters at the k-th step of meta-learning task i.
Embodiment one: model accuracy test of the adaptive-MAML-based small-sample target recognition algorithm on the Omniglot and Mini-ImageNet datasets.
For the present image classification problem (likewise for the second and third embodiments), the loss function L_i of the adaptive-MAML-based small-sample target recognition algorithm is the cross-entropy loss.
For the present image classification problem (likewise for the second and third embodiments), the adaptive-MAML-based small-sample target recognition algorithm uses a standard four-layer convolutional neural network (each layer of size 3 × 32), with each layer followed by a ReLU activation, a batch-normalization (BN) layer, and a pooling layer. For the present embodiment (likewise for the second and third embodiments), the task-specific parameters θ_i^j are the parameters of this standard four-layer convolutional neural network.
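Assuming the conventional reading of this embedding network (four 3x3 convolutional layers with 32 filters each on 3-channel input; treating the "3 × 32" in the source as that abbreviation is an assumption), its convolutional parameter count works out as:

```python
def conv_params(in_ch, out_ch, k=3):
    """Weights plus biases of one k x k convolutional layer."""
    return in_ch * out_ch * k * k + out_ch

channels = [3, 32, 32, 32, 32]          # input channels, then four conv layers
total = sum(conv_params(i, o) for i, o in zip(channels, channels[1:]))
# total convolutional parameters of the four-layer backbone
print(total)
```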
As can be seen from FIGS. 2(a) and 2(b), compared with other first-order MAML algorithms, the adaptive-MAML-based small-sample target recognition algorithm has an advantage in model generalization ability on the small-sample classification tasks on the Omniglot and Mini-ImageNet datasets, and can effectively mitigate the model accuracy loss caused by neglecting the second-order gradient term.
Embodiment two: convergence-speed test of the adaptive-MAML-based small-sample target recognition algorithm against other first-order MAML algorithms.
As can be seen from FIGS. 3(a) and 3(c), with the 5-way 5-shot partition on the Omniglot and Mini-ImageNet datasets, the adaptive-MAML-based small-sample target recognition algorithm has an advantage in convergence speed over the Reptile algorithm (a first-order MAML algorithm), reaching smaller loss values within a small number of iterations.
As can be seen from FIGS. 3(b) and 3(d), with the 5-way 1-shot partition on the Omniglot and Mini-ImageNet datasets, the adaptive-MAML-based small-sample target recognition algorithm still has an advantage in convergence speed over the Reptile algorithm, reaching smaller loss values within a small number of iterations.
Embodiment three: convergence test of the adaptive-MAML-based small-sample target recognition algorithm.
Tests show that modifying the outer-loop hypergradient step size β0 within a certain range (1e-3 to 1e-6) has very little influence on the convergence of the adaptive-MAML-based small-sample target recognition algorithm. The invention therefore recommends a hypergradient step size of 1e-4 for the outer loop.
The convergence of the adaptive-MAML-based small-sample target recognition algorithm depends on an appropriate choice of the inner-loop hypergradient step size α0. The change of the loss function gradient over the inner loop is shown in FIG. 4; based on the experiment of FIG. 4, the invention recommends a hypergradient step size of 1e-4 for the inner loop.
The present invention has thus been described in detail with reference to the embodiments and the accompanying drawings. From the above description, those skilled in the art should have a clear understanding of the present invention.
It should be noted that implementations not shown or described in the drawings or the specification are forms known to those of ordinary skill in the art and are not described in detail. In addition, the above definitions of the various elements are not limited to the specific structures, shapes, or modes mentioned in the embodiments, which those skilled in the art may easily modify or replace; for example:
(1) directional phrases used in the embodiments, such as "upper", "lower", "front", "rear", "left", "right", etc., refer only to the orientation of the attached drawings and are not intended to limit the scope of the present invention;
(2) the embodiments described above may be mixed and matched with each other or with other embodiments based on design and reliability considerations, i.e. technical features in different embodiments may be freely combined to form further embodiments.
The above-mentioned embodiments further explain the objects, technical solutions and advantages of the present invention in detail. It should be understood that the above-mentioned embodiments are only some specific examples of the present invention, and are not intended to limit the present invention, and any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
Claims (9)
1. A small-sample target recognition method based on adaptive model-agnostic meta-learning, comprising the following steps:
step S101: initializing the parameters of the small-sample target recognition model based on adaptive model-agnostic meta-learning;
step S201: iteratively updating the task-specific parameters using a meta-learning method;
step S301: optimizing the small-sample recognition model using the combined gradient direction.
2. The small-sample target recognition method based on adaptive model-agnostic meta-learning according to claim 1, wherein the step S101 comprises:
initializing the parameters of the small-sample recognition model based on adaptive model-agnostic meta-learning, the parameters comprising: the meta-model parameters θ, the inner-loop learning rate α, the outer meta-loop learning rate β, the inner-loop hypergradient step size α0, and the outer-loop hypergradient step size β0.
3. The small-sample target recognition method based on adaptive model-agnostic meta-learning according to claim 1, wherein the step S201 comprises:
sub-step S201a: dividing meta-learning tasks according to the auxiliary dataset;
sub-step S201b: inputting the divided meta-learning tasks into the small-sample recognition model based on adaptive model-agnostic meta-learning;
sub-step S201c: updating the task-specific parameters with a stochastic gradient descent algorithm.
4. The small-sample target recognition method based on adaptive model-agnostic meta-learning according to claim 1, wherein the sub-step S201a, dividing meta-learning tasks according to the auxiliary dataset, specifically comprises:
dividing a plurality of small-sample learning tasks in an N-way K-shot format, the 5-way 3-shot format being chosen by default for explanation; 5-way 3-shot means that 5 classes with 4 samples per class are sampled from the auxiliary dataset and split into a support set and a query set, the support set containing the 5 classes with 3 samples each and the query set containing the remaining 1 sample of each of the 5 classes; the model trains each task on the support set and verifies the training accuracy on the query set.
5. The small-sample target recognition method based on adaptive model-agnostic meta-learning according to claim 1, wherein the sub-step S201c updates the task-specific parameters with a stochastic gradient descent algorithm according to the divided meta-learning tasks, the update expression for the task-specific parameters being as follows:
where θ_i^j represents the task-specific parameters of the small-sample learning task at step j of meta-learning task i; α_t represents the inner-loop learning rate at the t-th iteration; ∇ represents the gradient operator on a continuous function; L_i represents the loss function for evaluating meta-learning task i; and k is the number of small-sample learning tasks.
6. The small-sample target recognition method based on adaptive model-agnostic meta-learning according to claim 1, wherein the step S301 comprises:
sub-step S301a: updating the average gradient direction according to the task-specific parameters and the meta-learning module;
sub-step S301b: updating the inner-loop learning rate according to the task-specific parameters and the meta-learning module;
sub-step S301c: updating the outer meta-learning rate according to the task-specific parameters and the meta-learning module.
7. The small-sample target recognition method based on adaptive model-agnostic meta-learning according to claim 1, wherein the sub-step S301a updates the average gradient direction according to the task-specific parameters and the meta-learning module using a first-order approximate gradient technique, with the specific expression as follows:
8. The small-sample target recognition method based on adaptive model-agnostic meta-learning according to claim 1, wherein the sub-step S301b updates the inner-loop learning rate according to the task-specific parameters and the meta-learning module, the expression for updating the inner-loop learning rate being as follows:
where α_t represents the inner-loop learning rate at the t-th iteration; α_0 represents the inner-loop hypergradient step size; θ_i^j represents the task-specific parameters of the small-sample learning task at step j of meta-learning task i; ∇ represents the gradient operator on a continuous function; and L_i represents the loss function for evaluating meta-learning task i.
9. The small-sample target recognition method based on adaptive model-agnostic meta-learning according to claim 1, wherein the sub-step S301c updates the outer meta-learning rate according to the task-specific parameters and the meta-learning module, the expression for updating the outer meta-learning rate being as follows:
where θ_t represents the meta-learning parameters at the t-th iteration; β_t represents the outer meta-learning rate at the t-th iteration; β_0 represents the outer-loop hypergradient step size; and θ_i^k represents the task-specific parameters of the small-sample learning task at the k-th step of meta-learning task i.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011388919.4A CN112560904A (en) | 2020-12-01 | 2020-12-01 | Small sample target identification method based on adaptive model-agnostic meta-learning
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011388919.4A CN112560904A (en) | 2020-12-01 | 2020-12-01 | Small sample target identification method based on adaptive model-agnostic meta-learning
Publications (1)
Publication Number | Publication Date |
---|---|
CN112560904A true CN112560904A (en) | 2021-03-26 |
Family
ID=75047735
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011388919.4A Pending CN112560904A (en) | Small sample target identification method based on adaptive model-agnostic meta-learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112560904A (en) |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111738301A (en) * | 2020-05-28 | 2020-10-02 | 华南理工大学 | Long-tail distribution image data identification method based on two-channel learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20210326 |