CN115204249A - Group intelligent meta-learning method based on competition mechanism - Google Patents

Group intelligent meta-learning method based on competition mechanism

Info

Publication number
CN115204249A
CN115204249A (application CN202210533337.3A)
Authority
CN
China
Prior art keywords
meta
parameters
learning
learner
task
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210533337.3A
Other languages
Chinese (zh)
Inventor
王钢
翁博熙
孙健
陈杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Institute of Technology BIT
Original Assignee
Beijing Institute of Technology BIT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Technology BIT filed Critical Beijing Institute of Technology BIT
Priority to CN202210533337.3A
Publication of CN115204249A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/004: Artificial life, i.e. computing arrangements simulating life
    • G06N3/006: Artificial life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a group intelligent meta-learning method based on a competition mechanism, which can reduce the influence of meta-learner parameter initialization and accelerate the convergence rate of the algorithm. S1, the multitask data and network model required for meta-learning are prepared. S2, within the maximum training period of the outer-loop meta-learner, tasks are randomly extracted and fast parameters are initialized on them. S3, the gradient of the loss function with respect to the network model parameters is calculated and the fast parameters are updated; S3 is repeated until the inner-loop iterations are completed, yielding the final fast parameters on the task. S4, the fast parameters on each task are used to update the parameters of the corresponding meta-learner. S5, the learning effect of each meta-learner is compared; the one with the best learning effect is the winner and the rest are losers; the winner's parameters remain unchanged, and each loser's parameters are updated toward the winner's. Steps S3 to S5 are repeated until the maximum training period of the outer-loop meta-learner is reached, obtaining a plurality of meta-learners with strong learning effect.

Description

Group intelligent meta-learning method based on competition mechanism
Technical Field
The invention belongs to the field of artificial intelligence, particularly relates to the cross-task few-shot meta-learning problem in deep learning and reinforcement learning, and specifically to a group intelligent meta-learning method based on a competition mechanism.
Background
With the vigorous development of artificial intelligence technology, deep learning and reinforcement learning are applied more and more widely. Although both are increasingly recognized for their good learning effect and excellent performance, these technologies still have unsolved key problems, among which cross-task few-shot learning is one of the core ones. For example, in the global wave of "smart manufacturing" industrial transformation, the discrete manufacturing industry has met new opportunities with the rise of artificial intelligence technology. However, facing the requirements of personalized customization and flexible production in discrete manufacturing, how to learn a model across tasks, make decisions and quickly schedule resources to adapt production remains a major problem. Rapid learning is a natural talent of human beings and one of the most remarkable characteristics of human intelligence, but artificial intelligence finds this talent hard to imitate, so the development and application of artificial intelligence technology have always been limited by insufficient rapid-learning capability. Whether it is the efficient migration of learning experience across tasks or the rapid development of new skills from small samples in a short time with little experience, these tasks that humans accomplish easily are very challenging for artificial intelligence.
The core idea of meta-learning is to use the knowledge and experience gained in past tasks to guide learning on a new task; in other words, through past learning experience the network acquires the ability to learn how to learn. Its essence is to increase the generalization capability of the meta-learner across multiple tasks: by sampling at the two levels of tasks and data, the network adapts more quickly to the requirements of new tasks, realizing fast cross-task few-shot learning. Finn C et al. first proposed a model-agnostic general meta-learning method, the MAML algorithm, in the literature (Model-agnostic meta-learning for fast adaptation of deep networks, International Conference on Machine Learning, PMLR, pp. 1126-1135, July 2017). The method updates the meta-learner using the composite gradient of what is learned on different tasks, obtaining an optimal initial learner that can become an excellent working model on a new task after a rapid parameter update. On this basis, Nichol A et al. proposed a first-order simplification of the MAML algorithm, the Reptile algorithm, in the literature (On first-order meta-learning algorithms, arXiv preprint, 2018). The method abandons the composite gradient and lets the meta-learner update directly toward the temporary parameters learned on different tasks, greatly reducing the amount of computation while keeping the learning effect. Furthermore, Flennerhag S et al. proposed a bootstrapped meta-learning method in the literature (Bootstrapped meta-learning, arXiv preprint, 2021). The method generates target learners on different tasks and optimizes the meta-learner by minimizing the distance to those targets under a chosen metric, effectively widening the learning horizon and enhancing the universality of meta-learning.
The meta-learning methods enumerated above are all based on a single meta-learner; under complex multitask conditions they are strongly affected by random initialization and their convergence rate is low. How to reduce the influence of initialization and improve the convergence speed is therefore the problem to be solved.
Disclosure of Invention
In view of the above, the present invention provides a group intelligent meta-learning method based on a competition mechanism, which introduces a plurality of meta-learners and, based on the competition mechanism of group intelligence, allocates different initialization parameters and data to them so that the meta-learners compete with and stimulate each other during parameter updating, thereby reducing the influence of initialization, enhancing the generalization capability of the algorithm at the task level, and accelerating its convergence rate.
In order to achieve the purpose, the technical scheme of the invention comprises the following steps:
S1, constructing a population consisting of a plurality of meta-learners; preparing the multitask data and the network model required for meta-learning.
S2, randomly extracting tasks within the maximum training period n of the outer-loop meta-learner, and initializing the fast parameters on the extracted tasks.
S3, calculating the gradient of the loss function with respect to the network model parameters and updating the fast parameters by gradient descent; the inner-loop iteration count is increased by 1; when it reaches the maximum number of inner-loop updates, the final fast parameters on the extracted task are obtained and S4 is executed; otherwise S3 is executed again.
S4, each meta-learner updates its own parameters by gradient descent using the fast parameters updated on each task.
S5, after each meta-learner finishes its parameter update, the learning effect of each meta-learner is compared according to a set index; the meta-learner with the best learning effect is marked as the winner and the rest are marked as losers; the winner's parameters remain unchanged, and each loser's parameters are updated toward the winner's.
Steps S3 to S5 are repeated until the number of cycles reaches the maximum training period n of the outer-loop meta-learner, finally obtaining a plurality of meta-learners with strong learning effect.
Further, the multitask data and network model required for meta-learning are prepared;
wherein the task data are divided into a training task set D_train and a test task set D_test, the former used for the whole training process and the latter for testing the final learning effect;
the data in each task are divided into a support set S and a query set Q, used respectively for training and testing on each task. S1 further comprises: randomly initializing the parameters of the meta-learners, setting the algorithm hyper-parameters, including the fast-parameter update step α, the meta-learning step β, the competitive learning step γ, the maximum training period n of the outer-loop meta-learner and the maximum number m of inner-loop fast-parameter updates, and initializing the outer-loop training period t = 0.
Further, within the maximum training period n of the outer-loop meta-learner, S2 randomly extracts tasks and initializes the fast parameters on the extracted tasks, specifically: if the outer-loop training period t reaches the maximum n, training ends; otherwise, for each meta-learner, a small batch of tasks T_i ~ D_train is randomly extracted; for each task T_i, a support-set sample is randomly extracted and the fast parameters on the task are initialized as θ'_i = θ.
Further, S3 is specifically: the loss function L_{T_i}(f_{θ'_i}) and its gradient ∇_θ L_{T_i}(f_{θ'_i}) with respect to the network model parameters θ are calculated; the fast parameters are then updated by gradient descent, θ'_i ← θ'_i − α∇_θ L_{T_i}(f_{θ'_i}); the inner-loop iteration count k is increased by 1; if k reaches the maximum number m of inner-loop fast-parameter updates, the final fast parameters θ'_i on the task are obtained and S4 is entered; otherwise S3 is executed again.
Further, S4 is specifically: each meta-learner uses the fast parameters θ'_i updated on its tasks to obtain the loss L_{T_i}(f_{θ'_i}) of the network on the query set, calculates the corresponding gradient ∇_θ L_{T_i}(f_{θ'_i}), and updates the parameters of the corresponding meta-learner by gradient descent:
θ ← θ − β∇_θ Σ_{T_i} L_{T_i}(f_{θ'_i}).
further, in S5, the learning effect of each meta learner is compared according to the set index, and the learning effect of each meta learner is compared using the test accuracy of the meta learner after updating the test task m times as the index.
Updating the parameter of loser to the parameter of winner, i.e. increasing gamma (theta) on the basis of the original parameter wl ) γ is the competitive learning step, θ w Is a parameter of winner, [ theta ] l Is a parameter of loser.
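The loser update can be illustrated numerically. The helper below is a hypothetical sketch (the function name and the list-of-vectors population representation are assumptions, not part of the patent), with each meta-learner's parameters flattened into one NumPy vector:

```python
import numpy as np

def competition_update(params, scores, gamma=0.0005):
    """S5 competition step: the winner (highest score) keeps its
    parameters; every loser moves toward the winner by
    theta_l <- theta_l + gamma * (theta_w - theta_l)."""
    w = int(np.argmax(scores))               # index of the winner
    theta_w = params[w]
    updated = []
    for i, theta_l in enumerate(params):
        if i == w:
            updated.append(theta_l.copy())   # winner unchanged
        else:
            updated.append(theta_l + gamma * (theta_w - theta_l))
    return updated
```

With γ small, each loser takes only a short step toward the winner per period, so the competition nudges the population together rather than collapsing it.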
Beneficial effects:
1. The invention provides a group intelligent meta-learning method based on a competition mechanism. In practical deep learning and reinforcement learning applications such as discrete manufacturing, task complexity is often high; common meta-learning methods are essentially based on a single meta-learner, so the uncertainty introduced when the learner's parameters are initialized is large, which worsens the stability of the algorithm and reduces its convergence speed. By introducing a plurality of competing meta-learners, the present method alleviates these problems.
2. The group intelligent meta-learning method based on the competition mechanism, which is designed by the invention, introduces the competition mechanism in group intelligence into the design of the meta-learning method for the first time, effectively relieves the problem that the current meta-learning method is greatly influenced by random initialization, obviously improves the convergence speed of the algorithm, and enhances the generalization capability of the algorithm at the task level.
3. According to the group intelligent meta-learning method based on the competition mechanism, provided by the invention, the capability of learning new skills more quickly can be obtained through historical experience. The basic idea of the method is to update the meta-learner parameters based on the composite gradient of the updated parameters of the inner loop of each task to obtain an initial learner (parameters) which is optimal for the future task, so that the initial learner can be updated into an excellent working model (such as a classifier) through simple learning.
4. The invention provides a group intelligent meta-learning method based on a competition mechanism, in which group intelligence is embodied in that the plurality of meta-learners have no central controller but form a population in a decentralized form. Through competition (or cooperation) after each meta-learning update, information is exchanged within the population (the plurality of meta-learners) and the learners learn from one another, mutually stimulating and promoting learning.
5. In the group intelligent meta-learning method based on the competition mechanism provided by the invention, the use of the competition mechanism inside the meta-learning method is not affected by task type, network architecture or data attributes, so the method has strong universality. In the industrial transformation toward intelligent manufacturing, it can meet the requirements of personalized customization and flexible production in discrete manufacturing, and thus has strong application value. It is of great significance for solving the cross-task few-shot learning problem, promoting the development of discrete manufacturing, and expanding the application scenarios of artificial intelligence.
Drawings
FIG. 1 is a schematic diagram of an operation structure of a group intelligence meta-learning method based on a competition mechanism according to the present invention;
FIG. 2 is a schematic diagram of a detailed flow chart of a group intelligent meta-learning method based on a competition mechanism according to the present invention;
fig. 3 is a diagram illustrating an operation effect of an embodiment of a group intelligence meta-learning method based on a competition mechanism according to the present invention.
Detailed Description
The invention is described in detail below by way of example with reference to the accompanying drawings.
The invention provides a group intelligent meta-learning method based on a competition mechanism, which leads meta-learners to compete and stimulate with each other in the learning process and effectively exchange information by introducing the competition mechanism in group intelligence into meta-learning, thereby accelerating convergence. In addition, due to the randomness of different meta-learners on initialization parameters and data, the influence of initialization is greatly reduced, and the generalization capability of the algorithm on a task level is strengthened.
The meta learning method in the invention refers to a method for learning how to learn, i.e. how to obtain the ability of learning new skills more quickly through historical experience. The basic idea of the method is to update the meta-learner parameters based on the synthetic gradient of updated parameters of the inner loop of each task to obtain an initial learner (parameters) optimal for the future task, so that the initial learner can be updated to an excellent working model (such as a classifier) through simple learning.
The group intelligence in the invention is embodied in that the plurality of meta-learners have no central controller but form a population in a decentralized form. Information is exchanged within the population (the plurality of meta-learners) through competition (or cooperation) after each meta-learning update, and the learners learn from one another, achieving the effect of mutually stimulating and promoting learning.
As shown in fig. 1, an embodiment of the present invention provides a group intelligence meta-learning method based on a competition mechanism, including the following steps:
S1, the multitask data D and the network model M required for meta-learning are prepared. The task data are divided into a training task set D_train and a test task set D_test, the former used for the whole training process and the latter for testing the final learning effect. The data in each task are divided into a support set S and a query set Q, used respectively for training and testing on each task. The parameters of the meta-learners are randomly initialized and the algorithm hyper-parameters are set, including the fast-parameter update step α, the meta-learning step β, the competitive learning step γ, the maximum training period n of the outer-loop meta-learner and the number m of inner-loop fast-parameter updates; the outer-loop training period is initialized to t = 0.
And S2, if the outer-loop training period t reaches the maximum n, training ends. Otherwise, for each meta-learner, a small batch of tasks T_i ~ D_train is randomly extracted; for each task T_i, a support-set sample is randomly extracted and the fast parameters on the task are initialized as θ'_i = θ.
S3, the loss function L_{T_i}(f_{θ'_i}) and its gradient ∇_θ L_{T_i}(f_{θ'_i}) with respect to the network model parameters θ are calculated. The cross-entropy function is usually used, L = −Σ y log ŷ, where y is the label value and ŷ the model output value. The fast parameters are then updated by gradient descent:
θ'_i ← θ'_i − α∇_θ L_{T_i}(f_{θ'_i}).
The inner-loop iteration count is set to k = k + 1. If k reaches the maximum number m of inner-loop updates, the final fast parameters θ'_i on the task are obtained and the process proceeds to S4. Otherwise S3 is executed again.
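For illustration only, the cross-entropy loss and one inner-loop fast-parameter step can be sketched for a binary logistic "network" (an assumption made for compactness; the patent's embodiment uses a convolutional network), where the gradient of the batch-averaged loss has the closed form X^T(ŷ − y)/N:

```python
import numpy as np

def binary_cross_entropy(y, y_hat):
    # L = -[y*log(y_hat) + (1 - y)*log(1 - y_hat)], averaged over the batch
    eps = 1e-12                              # avoid log(0)
    return -np.mean(y * np.log(y_hat + eps) + (1 - y) * np.log(1 - y_hat + eps))

def fast_param_step(theta, X, y, alpha=0.01):
    """One inner-loop update theta' <- theta' - alpha * grad for a
    logistic model y_hat = sigmoid(X @ theta)."""
    y_hat = 1.0 / (1.0 + np.exp(-X @ theta))
    grad = X.T @ (y_hat - y) / len(y)        # d(loss)/d(theta)
    return theta - alpha * grad
```

Repeating this step m times on a task's support set yields the final fast parameters θ'_i of S3.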
S4, each meta-learner uses the fast parameters θ'_i updated on each task to obtain the loss L_{T_i}(f_{θ'_i}) of the network on the query set, calculates the corresponding gradient ∇_θ L_{T_i}(f_{θ'_i}), and updates the parameters of the corresponding meta-learner by gradient descent:
θ ← θ − β∇_θ Σ_{T_i} L_{T_i}(f_{θ'_i}).
And S5, after each meta-learner completes its parameter update, the learning effect of each meta-learner is compared, taking as the index the test accuracy after m updates on the test task. The meta-learner with the highest accuracy, considered to have learned best in this stage, is marked as the winner of the stage, and the other meta-learners are marked as losers.
For the winner of this stage, considering that it has learned the most effective information in the stage, its parameters remain unchanged: θ_w ← θ_w. All losers, having lost to the winner in the current round of competition, need to learn from the winner's experience, so each loser's parameters are updated toward the winner's: θ_l ← θ_l + γ(θ_w − θ_l).
The operations S3 to S5 are repeated until the number of cycles reaches the maximum training period n of the outer-loop meta-learner, finally obtaining a plurality of meta-learners with strong learning effect.
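The overall flow of steps S1 to S5 can be sketched as a minimal runnable loop over flat parameter vectors, using first-order meta-gradients. This is a sketch under stated assumptions, not the patent's implementation: `sample_task`, `loss_grad` and `query_accuracy` are placeholder callbacks supplied by the caller, and the real embodiment trains a CNN.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_swarm(sample_task, loss_grad, query_accuracy,
                n_learners=3, n=100, m=5,
                alpha=0.01, beta=0.001, gamma=0.0005, dim=10):
    """Minimal sketch of steps S1-S5 on flat parameter vectors.

    sample_task() -> (support, query), loss_grad(theta, data) -> gradient,
    query_accuracy(theta, query) -> scalar are placeholder callbacks."""
    # S1: population of meta-learners with random initial parameters
    thetas = [rng.normal(size=dim) for _ in range(n_learners)]
    for _t in range(n):                      # outer loop, max period n
        scores = []
        for j in range(n_learners):
            support, query = sample_task()   # S2: draw a task
            fast = thetas[j].copy()          # fast parameters theta'_i = theta
            for _k in range(m):              # S3: m inner-loop updates
                fast = fast - alpha * loss_grad(fast, support)
            # S4: first-order meta update using the query-set gradient
            # evaluated at the adapted fast parameters
            thetas[j] = thetas[j] - beta * loss_grad(fast, query)
            scores.append(query_accuracy(thetas[j], query))
        # S5: competition, each loser moves toward the winner
        w = int(np.argmax(scores))
        for j in range(n_learners):
            if j != w:
                thetas[j] = thetas[j] + gamma * (thetas[w] - thetas[j])
    return thetas
```

With a toy quadratic loss the whole population contracts toward the optimum, while the S5 step additionally pulls losers toward the current winner.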
Fig. 2 is a flowchart of a group intelligence meta-learning method based on a competition mechanism provided by the present invention.
As shown in fig. 3, fig. 3 is an example of training a convolutional neural network on the miniImageNet dataset with the competition-mechanism-based group intelligent meta-learning method provided by the invention. A convolutional neural network is an artificial neural network with a deep structure whose main characteristic is the use of convolution during feature extraction; it is widely applied in computer vision fields such as image recognition and classification. The miniImageNet dataset used here is drawn from ImageNet and is one of the most widely used datasets in meta-learning and few-shot learning; it comprises 60000 color images in 100 classes, with 600 samples per class and a sample size of 84 × 84. The convolutional neural network used in this example contains 4 convolutional layers (each with a ReLU activation function, max-pooling layer and batch normalization) and one fully connected layer. The number of meta-learners is set to 3, and the algorithm hyper-parameters are set to a fast-parameter update step α = 0.01, a meta-learning step β = 0.001, a competitive learning step γ = 0.0005, a maximum outer-loop training period n = 5000 and m = 5 inner-loop fast-parameter updates. The running effect is shown in fig. 3, where the group intelligent meta-learning method based on the competition mechanism (MAML-co) provided by the invention is compared with the MAML algorithm on training and test accuracy, demonstrating the remarkable advantage of the proposed method in convergence speed.
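The embodiment's hyper-parameters can be collected in one place, e.g. as follows (the dictionary keys are illustrative names of my choosing; the values are the ones stated above):

```python
# Hyper-parameters of the miniImageNet embodiment described above.
config = {
    "num_meta_learners": 3,
    "alpha": 0.01,    # fast-parameter (inner-loop) update step
    "beta": 0.001,    # meta-learning step
    "gamma": 0.0005,  # competitive learning step
    "n": 5000,        # maximum outer-loop training periods
    "m": 5,           # inner-loop fast-parameter updates per task
}
```

Note the ordering γ < β < α: the competition step is the gentlest, so it steers rather than dominates the gradient-based updates.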
The group intelligent meta-learning method based on the competition mechanism provided by the embodiment of the invention can be realized by a software program and is stored on a computer-readable storage medium, wherein computer instructions are stored on the storage medium, and when being executed by a processor, the instructions can realize the steps in the group intelligent meta-learning method based on the competition mechanism.
In addition, another embodiment of the present invention further provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor executes the computer program to implement the aforementioned method for group intelligence meta-learning based on a contention mechanism.
In summary, the above description is only a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (6)

1. A group intelligent meta-learning method based on a competition mechanism is characterized by comprising the following steps:
S1, constructing a population consisting of a plurality of meta-learners; preparing the multitask data and the network model required for meta-learning;
S2, randomly extracting tasks within the maximum training period n of the outer-loop meta-learner, and initializing fast parameters on the extracted tasks;
S3, calculating the gradient of the loss function with respect to the network model parameters, and updating the fast parameters by gradient descent; the inner-loop iteration count is increased by 1; when it reaches the maximum number of inner-loop updates, the final fast parameters on the extracted task are obtained and S4 is executed; otherwise S3 is executed again;
S4, each meta-learner updates its own parameters by gradient descent using the fast parameters updated on each task;
S5, after each meta-learner finishes its parameter update, the learning effect of each meta-learner is compared according to a set index; the meta-learner with the best learning effect is marked as the winner and the rest are marked as losers; the winner's parameters remain unchanged and each loser's parameters are updated toward the winner's;
steps S3 to S5 are repeated until the number of cycles reaches the maximum training period n of the outer-loop meta-learner, finally obtaining a plurality of meta-learners.
2. The method according to claim 1, wherein the multitask data and the network model required for meta-learning are prepared;
wherein the task data are divided into a training task set D_train and a test task set D_test, the former used for the whole training process and the latter for testing the final learning effect;
the data in each task are divided into a support set S and a query set Q, used respectively for training and testing on each task;
S1 further comprises: randomly initializing the parameters of the meta-learners, setting the algorithm hyper-parameters, including the fast-parameter update step α, the meta-learning step β, the competitive learning step γ, the maximum training period n of the outer-loop meta-learner and the maximum number m of inner-loop fast-parameter updates, and initializing the outer-loop training period t = 0.
3. The method as claimed in claim 1 or 2, wherein in S2, within the maximum training period n of the outer-loop meta-learner, tasks are randomly extracted and fast parameters are initialized on them, specifically:
if the outer-loop training period t reaches the maximum n, training ends; otherwise, for each meta-learner, a small batch of tasks T_i ~ D_train is randomly extracted; for each task T_i, a support-set sample is randomly extracted and the fast parameters on the task are initialized as θ'_i = θ.
4. The group intelligent meta-learning method based on a competition mechanism according to claim 3, wherein S3 specifically comprises:
calculating the loss function L_{T_i}(f_{θ'_i}) and its gradient ∇_θ L_{T_i}(f_{θ'_i}) with respect to the network model parameters θ, then updating the fast parameters by gradient descent, θ'_i ← θ'_i − α∇_θ L_{T_i}(f_{θ'_i}); the inner-loop iteration count k is increased by 1; if k reaches the maximum number m of inner-loop fast-parameter updates, the final fast parameters θ'_i on the task are obtained and S4 is entered; otherwise S3 is executed again.
5. The group intelligent meta-learning method based on a competition mechanism according to claim 4, wherein S4 specifically comprises: each meta-learner uses the fast parameters θ'_i updated on its tasks to obtain the loss L_{T_i}(f_{θ'_i}) of the network on the query set, calculates the corresponding gradient ∇_θ L_{T_i}(f_{θ'_i}), and updates the parameters of the corresponding meta-learner by gradient descent:
θ ← θ − β∇_θ Σ_{T_i} L_{T_i}(f_{θ'_i}).
6. The method as claimed in claim 4, wherein in S5, the learning effects of the meta-learners are compared according to a set index, the index being the test accuracy of each meta-learner after m updates on the test tasks.
The loser's parameters are updated toward the winner's, i.e., γ(θ_w − θ_l) is added to the original parameters, where γ is the competitive learning step, θ_w the winner's parameters and θ_l the loser's parameters.
CN202210533337.3A, filed 2022-05-13, priority 2022-05-13: Group intelligent meta-learning method based on competition mechanism. Pending. Published as CN115204249A (en).

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210533337.3A CN115204249A (en) 2022-05-13 2022-05-13 Group intelligent meta-learning method based on competition mechanism


Publications (1)

Publication Number Publication Date
CN115204249A 2022-10-18

Family

ID=83574484

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210533337.3A Pending CN115204249A (en) 2022-05-13 2022-05-13 Group intelligent meta-learning method based on competition mechanism

Country Status (1)

Country Link
CN (1) CN115204249A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024114121A1 (en) * 2022-11-30 2024-06-06 南京邮电大学 Method for constructing intelligent computation engine of artificial intelligence cross-platform model on basis of knowledge self-evolution


Similar Documents

Publication Publication Date Title
Guo et al. Simple convolutional neural network on image classification
CN107833183B (en) Method for simultaneously super-resolving and coloring satellite image based on multitask deep neural network
Goodfellow Nips 2016 tutorial: Generative adversarial networks
CN112465151A (en) Multi-agent federal cooperation method based on deep reinforcement learning
CN109829541A (en) Deep neural network incremental training method and system based on learning automaton
CN110794842A (en) Reinforced learning path planning algorithm based on potential field
CN109948029A (en) Based on the adaptive depth hashing image searching method of neural network
CN111401261B (en) Robot gesture recognition method based on GAN-CNN framework
US11538178B2 (en) Machine learning-based 2D structured image generation
CN114548591A (en) Time sequence data prediction method and system based on hybrid deep learning model and Stacking
CN115204249A (en) Group intelligent meta-learning method based on competition mechanism
CN109657791A (en) It is a kind of based on cerebral nerve cynapse memory mechanism towards open world successive learning method
CN111079837A (en) Method for detecting, identifying and classifying two-dimensional gray level images
CN114511042A (en) Model training method and device, storage medium and electronic device
CN113537365A (en) Multitask learning self-adaptive balancing method based on information entropy dynamic weighting
Wistuba Bayesian optimization combined with incremental evaluation for neural network architecture optimization
CN116938323B (en) Satellite transponder resource allocation method based on reinforcement learning
CN111445024B (en) Medical image recognition training method
CN111783688B (en) Remote sensing image scene classification method based on convolutional neural network
CN116339942A (en) Self-adaptive scheduling method of distributed training task based on reinforcement learning
CN109034279A (en) Handwriting model training method, hand-written character recognizing method, device, equipment and medium
CN107832833B (en) Scene recognition method, device and system based on chaotic autonomous development neural network
CN116402138A (en) Time sequence knowledge graph reasoning method and system for multi-granularity historical aggregation
Shobeiri et al. Shapley value in convolutional neural networks (CNNs): A Comparative Study
CN109978133A (en) A kind of intensified learning moving method based on action mode

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination