CN114330464A - Multi-terminal collaborative training algorithm and system fusing meta learning - Google Patents

Multi-terminal collaborative training algorithm and system fusing meta learning

Info

Publication number
CN114330464A
Authority
CN
China
Prior art keywords
model
training
meta
learning
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011033398.0A
Other languages
Chinese (zh)
Inventor
王中风
王美琪
鲁安卓
薛瑞鑫
林军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University
Original Assignee
Nanjing University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University filed Critical Nanjing University
Priority to CN202011033398.0A priority Critical patent/CN114330464A/en
Publication of CN114330464A publication Critical patent/CN114330464A/en
Pending legal-status Critical Current

Links

Images

Abstract

The application discloses a multi-terminal collaborative training algorithm and system fusing meta-learning, comprising: a client loads a locally stored training model and initializes the weight parameters of the network; the client adjusts the training model with a meta-learning algorithm on its locally stored data samples to obtain an adjusted model; and the server fuses the adjusted models transmitted from a plurality of clients to obtain an average model. On the basis of federated learning, a meta-learning algorithm designed for the small-sample setting (i.e., a small amount of training data) is introduced at each client, so that meta-information in the small number of samples can be extracted efficiently during training, the trained model transfers better to new data, and the client models trained in this way achieve higher processing accuracy on the data sets of other clients after being fused at the server.

Description

Multi-terminal collaborative training algorithm and system fusing meta learning
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to a multi-terminal collaborative training algorithm and system fusing meta-learning.
Background
Today, an Artificial Intelligence (AI) project may involve multiple domains and therefore requires integrating data from various companies and departments. In practice, however, growing concerns about data ownership and privacy have made user privacy protection and data security management increasingly strict, so that it is almost impossible to integrate data distributed across different locations and organizations. At the same time, training on large amounts of data is a prerequisite for high accuracy in an AI project; machine learning frameworks have therefore been designed to satisfy privacy-regulation requirements, and federated learning algorithms were developed for this purpose.
In federated learning, a common algorithm is shown in fig. 1: each client trains its own model with local data, the trained models are then transmitted to a server for fusion, and the fused model is sent back to each client for continued training. Because the local data at each client is often very limited in quantity, the model obtained by this training algorithm tends to over-adapt to the local data; as a result, when the server fuses the models from different clients, each model cannot quickly adapt to data processing on other clients, the overall accuracy is limited, and more rounds of communication are needed to obtain a more thorough model fusion.
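As a point of reference for the flow described above, the following is a minimal sketch of the plain local training each client performs in a conventional federated round; it is an illustration only — the patent specifies no implementation — and assumes a PyTorch classifier and an iterable `data_loader` of (input, label) batches. The fusion performed at the server is sketched later, in connection with step S300.

```python
import torch

def local_train(model, data_loader, lr=0.01, epochs=1):
    """Plain local SGD training at one client, as in the conventional flow of fig. 1."""
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in data_loader:
            optimizer.zero_grad()
            loss_fn(model(x), y).backward()   # loss on the client's local data only
            optimizer.step()
    return model                              # this model is then sent to the server for fusion
```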
In the prior art, in order to reduce the number of communications as much as possible, the usual approach is either to limit the number of local clients that communicate with the server, or to adopt a federated-averaging scheme based on stochastic gradient descent (SGD), in which the loss is evaluated at the local clients and communicated to the server to achieve a co-training effect. However, although SGD is computationally efficient, this approach requires a large amount of training to produce a reasonably accurate model, and for most clients the amount of local data falls far short of what such training demands.
Disclosure of Invention
The application provides a multi-terminal collaborative training algorithm and system fusing meta-learning, aiming to solve the prior-art problems that a model trained by a client on a small amount of data transfers poorly and yields low accuracy after fusion.
In a first aspect, the present application provides a multi-terminal collaborative training algorithm with meta-learning fusion, including:
loading a training model located locally by a client and initializing a weight parameter of a network;
the client side adjusts the training model by using a locally stored data sample and a meta-learning algorithm to obtain an adjusted model;
and the server performs fusion operation on the adjusted models transmitted from the plurality of clients to obtain an average model.
In some embodiments, after obtaining the average model, the algorithm further comprises:
the server obtains a test data set containing data samples stored by all the clients, and evaluates the precision of the average model according to the test data set to obtain an evaluation result;
if the evaluation result meets the requirement, stopping data communication and training;
and if the evaluation result is not in accordance with the requirement, re-executing the step that the client side adjusts the training model by using the locally stored data sample and adopting a meta-learning algorithm to obtain an adjusted model.
In some embodiments, the data samples in the test data set are divided into a plurality of data packets according to different categories, where each data packet is represented by N-way K-shot, N is the number of randomly extracted categories in each data packet, way is a category, K is the number of data samples included in each category, and shot is a data unit.
In some embodiments, the step of adapting the training model using a meta-learning algorithm comprises:
a client randomly extracts a data packet from a locally stored data sample;
updating model parameters of the training model using an inner loop and an outer loop.
In some embodiments, updating the model parameters of the training model using an inner loop comprises:
establishing a plurality of tasks, wherein each task applies a gradient-descent rule to obtain an updated parameter $\theta_i'$ from the original parameter $\theta$ of the training model, where $i$ denotes the $i$-th task;
computing the cross-entropy loss $L_{T_i}$ according to the updated parameter $\theta_i'$, the total cross-entropy loss being obtained by summing, over all tasks, the losses $L_{T_i}$ computed with the updated parameters $\theta_i'$.
In some embodiments, the outer loop updates the model parameters of the training model using the following formula:
$$\theta_n = \theta - \beta \nabla_{\theta} \sum_{T_i} L_{T_i}\big(f_{\theta_i'}\big)$$

where $\theta_n$ is the model parameter of the adjusted model, $\beta$ is the learning rate, $T_i$ denotes the $i$-th task, $\sum_{T_i}(\cdot)$ denotes summation over tasks, and $f_{\theta_i'}$ denotes the model using the updated parameter $\theta_i'$.
In a second aspect, the present application further provides a system corresponding to the method provided in the first aspect.
According to the method, on the basis of federated learning, a meta-learning algorithm designed for the small-sample setting (i.e., a small amount of training data) is introduced at each client, so that meta-information in the small number of samples can be extracted efficiently during training, the trained model transfers better to new data, and the client models trained in this way achieve higher processing accuracy on the data sets of other clients after being fused at the server.
Because the model trained at each client transfers better, the number of communication rounds required for model fusion is significantly reduced, and each client can reach the same model accuracy with fewer training iterations, shorter training time, and lower energy consumption.
Drawings
In order to explain the technical solution of the present application more clearly, the drawings used in the embodiments are briefly described below; those skilled in the art can obtain other drawings from these drawings without creative effort.
FIG. 1 is a schematic diagram of a common federated learning algorithm of the prior art;
FIG. 2 is a flow chart of a multi-terminal collaborative training algorithm incorporating meta-learning according to the present application;
FIG. 3 is a diagram showing the sub-steps of step S200 of the algorithm shown in FIG. 2;
FIG. 4 is a flowchart illustrating a multi-terminal collaborative training algorithm incorporating meta-learning according to another embodiment of the present application;
FIG. 5 is a flow chart of one embodiment of a method provided herein.
Detailed Description
In view of the overlap between the goals of small-sample (few-shot) learning and federated learning — namely, training a high-accuracy integrated model while protecting the privacy of client-side device data — and since meta-learning training schemes for small samples can strengthen the generalization of a model to unseen data, meta-learning and federated learning are combined here: meta-learning is introduced into the client-side training of federated learning to improve the performance of multi-terminal collaborative training. The performance improvement has three aspects: first, the total number of communications is reduced while learning performance is maintained; second, the number of training iterations on the end side is reduced while learning performance is maintained; and third, the accuracy of the integrated model is improved under the same training cost. The solution proposed by the invention is the first effective scheme that introduces end-side meta-learning into federated learning.
In the solution provided by the present application, federated learning refers to a learning technique that allows users to benefit from a shared model trained on rich data without storing that data centrally. This approach also allows the learning task to be extended using the inexpensive computation available at the edge of the network. Tasks suited to federated learning have the following characteristics: first, training on real data from mobile devices has clear advantages over training on the proxy data generally available in data centers; second, the data being processed is privacy-sensitive or large-scale, and is therefore unsuitable for being collected in a data center for model training; and third, for supervised tasks, labels on the data can be inferred naturally from the user's interaction with the device. Since federated learning alone cannot solve the problem of scarce data on the client side, the present application provides an improved algorithm based on federated learning.
Referring to fig. 2, a schematic diagram of the multi-terminal collaborative training algorithm fusing meta-learning according to the present application is shown.
As can be seen from fig. 2, when the multi-terminal collaborative training algorithm fusing meta-learning provided in this embodiment of the present application is applied to each client, the algorithm includes:
S100: loading a training model located locally by a client and initializing the weight parameters of the network;
in this embodiment, there may be a plurality of clients communicating with the server, and the method may be adopted for each client; each client (end side) is usually configured with different training models locally for performing training on local data samples, and in the present application, before reading a data sample, a weight parameter in a network needs to be initialized, so that the training models maintain initial settings.
S200: the client side adjusts the training model by using a locally stored data sample and a meta-learning algorithm to obtain an adjusted model;
in this embodiment, the data samples utilized by the client refer to a small number of data samples stored locally by the client, and different from other methods in the prior art, the method of the present application is particularly applied to a case where the number of samples is low, and the number of samples is correspondingly set to a low order of magnitude according to different data sample forms, for example, if the data samples are pictures, a small number of data samples here refer to tens to thousands of pictures, and a large number of data samples in the conventional technology generally refer to thousands of pictures; if the data samples are expressed in terms of data size, a small amount of data here may refer to data of several Kb to and Mb sizes, whereas a large amount of data samples in the conventional art generally refers to data in the order of GB or more, and so on.
Further, the data samples may be stored on the client originally, or may be obtained by the client in other ways, for example collected by the client from a scene or input by a user, or they may be distributed by the server as agreed. If the data samples are distributed by the server, the method of the present application further includes, before step S200, a data acquisition step in which the server sends data packets containing data samples to the plurality of clients respectively; it should be noted that the data packets sent by the server to the respective clients do not repeat one another and may be mutually complementary.
For a local client with limited data samples, a meta-learning algorithm is first introduced to adjust the client's original training model, so that the model parameters each client sends to the server are not the original model parameters but parameters more favorable for fusion.
In recent years, small-sample (few-shot) classification has developed rapidly, and a single trained model can satisfy the requirements of many classification tasks. Among the many meta-learning methods, the one with the broadest applicability relies on the generality of tasks: instead of constructing a different model for each task, the same learning algorithm is used to solve a variety of different tasks. A learnable parameter $\theta$ of one model is defined, and different tasks are solved by changing the value of $\theta$. The value of $\theta$ is learned by a meta-learner: when facing different tasks, $\theta$ is continuously updated by gradient descent according to a loss function, so that the model keeps approaching one that can solve the task; when $\theta$ finally converges, the meta-learner is considered to have learned a good parameter $\theta$, allowing the model to adapt to the corresponding task. This algorithm has the advantage of efficiency, because it introduces no additional parameters into the learner, and the strategy for training the learner uses a known optimization procedure (e.g., gradient descent) rather than building one from scratch.
Specifically, referring to fig. 3, the step of adjusting the training model by using the meta-learning algorithm includes:
S210: the client randomly extracts a data packet from the locally stored data samples. Each data packet is organized as N-way K-shot, where N is the number of randomly extracted categories in each data packet, 'way' denotes a category, K is the number of data samples contained in each category, and 'shot' denotes one data sample; for example, 5-way 5-shot means that 5 categories are randomly extracted from the remaining data samples each time, and 5 not-yet-extracted samples are then extracted from the data of each category, forming a 5-way 5-shot data packet.
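The following is a minimal sketch of this N-way K-shot packet sampling, assuming the local data is held as a mapping from category labels to lists of samples; the function name, the `used` bookkeeping, and the data layout are illustrative assumptions rather than part of the patent.

```python
import random

def sample_packet(data_by_class, n_way=5, k_shot=5, used=None):
    """Randomly draw an N-way K-shot data packet from locally stored samples.

    data_by_class: dict mapping each category label to a list of samples.
    used: optional dict mapping a category label to the set of indices already drawn,
          so that previously extracted samples are not drawn again.
    Returns a list of (sample, label) pairs with n_way * k_shot entries.
    """
    if used is None:
        used = {}
    classes = random.sample(list(data_by_class.keys()), n_way)       # N random categories
    packet = []
    for c in classes:
        remaining = [i for i in range(len(data_by_class[c]))
                     if i not in used.get(c, set())]
        picked = random.sample(remaining, k_shot)                     # K unused samples per category
        used.setdefault(c, set()).update(picked)
        packet.extend((data_by_class[c][i], c) for i in picked)
    return packet
```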
S220: updating model parameters of the training model using an inner loop and an outer loop.
In this embodiment, the inner loop is also referred to as the local loop, i.e., the model-parameter update process performed inside a client, and the outer loop is also referred to as the global loop, i.e., the model-parameter update process performed over the entire system comprising the plurality of clients and the server.
The inner loop is divided into a plurality of tasks: using the gradient-descent rule, each task obtains an updated parameter from the initial parameters of the model and then computes the model loss with the updated parameter. The specific flow is as follows:

First, the local training model is obtained.

In each round of the loop, a plurality of tasks are established. Each task applies the gradient-descent rule, evaluating the model loss on the extracted data packet, and obtains an updated parameter $\theta_i'$ from the original parameter $\theta$ of the training model:

$$\theta_i' = \theta - \alpha \nabla_{\theta} L_{T_i}\big(f_{\theta}\big)$$

where $i$ denotes the $i$-th task, $\alpha$ is the learning rate, and $L_{T_i}$ is the cross-entropy loss. The cross-entropy loss used by the outer loop is obtained by summing, over all tasks, the losses $L_{T_i}$ computed with the updated parameters $\theta_i'$.

After all tasks of the inner loop are finished, the outer loop computes the parameters used to update the original training model with the following formula:

$$\theta_n = \theta - \beta \nabla_{\theta} \sum_{T_i} L_{T_i}\big(f_{\theta_i'}\big)$$

where $\theta_n$ is the model parameter of the adjusted model, $T_i$ denotes the $i$-th task, $\sum_{T_i}(\cdot)$ denotes summation over tasks, $f_{\theta_i'}$ denotes the model using the updated parameter $\theta_i'$, and $\beta$ is the learning rate.
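A compact sketch of this inner/outer-loop adjustment is given below, written against PyTorch 2.x (`torch.func.functional_call`). It is an illustration under the assumption of a standard differentiable classifier whose labels are integer class indices; `sample_packet` is the helper sketched above, `packet_to_tensors` is an assumed helper that stacks a packet into `(inputs, labels)` tensors, and all hyper-parameter values are assumptions. The per-task support/query split is a common meta-learning convention, not something the patent mandates.

```python
import torch
import torch.nn.functional as F

def meta_adjust(model, data_by_class, num_tasks=4, alpha=0.01, beta=0.001,
                n_way=5, k_shot=5):
    """One meta-learning adjustment of the local model (inner/outer-loop sketch)."""
    # Clone the original parameters theta so the inner steps stay differentiable.
    theta = {name: p.clone() for name, p in model.named_parameters()}
    outer_loss = 0.0
    for _ in range(num_tasks):
        # Support packet for the inner step, query packet for the outer loss
        # (this split is an assumed convention; labels are integer class indices).
        xs, ys = packet_to_tensors(sample_packet(data_by_class, n_way, k_shot))
        xq, yq = packet_to_tensors(sample_packet(data_by_class, n_way, k_shot))
        # Inner step: theta_i' = theta - alpha * grad_theta L_Ti(f_theta)
        inner_loss = F.cross_entropy(
            torch.func.functional_call(model, theta, (xs,)), ys)
        grads = torch.autograd.grad(inner_loss, list(theta.values()),
                                    create_graph=True)
        theta_i = {name: p - alpha * g
                   for (name, p), g in zip(theta.items(), grads)}
        # Accumulate the outer cross-entropy loss L_Ti evaluated with theta_i'
        outer_loss = outer_loss + F.cross_entropy(
            torch.func.functional_call(model, theta_i, (xq,)), yq)
    # Outer step: theta_n = theta - beta * grad_theta sum_i L_Ti(f_{theta_i'})
    outer_grads = torch.autograd.grad(outer_loss, list(theta.values()))
    with torch.no_grad():
        for (name, p), g in zip(model.named_parameters(), outer_grads):
            p -= beta * g
    return model
```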
It should be noted that, in some feasible embodiments, the clients executing the outer-loop operation are not all of the clients in communicative connection with the server; each outer-loop round may randomly select a certain proportion of clients to perform the inner loop. For example, 20% of the clients are selected to perform local training, and the updated model parameters are computed from the training results of that portion of the data; in the next outer-loop round, a different randomly selected 20% of the clients execute the inner loop to obtain updated model parameters. This advantageously reduces the per-round consumption of the system and also reduces the number of communications.
S300: and the server performs fusion operation on the adjusted models transmitted from the plurality of clients to obtain an average model.
In this step, after receiving the adjusted training models (the training models with updated parameters) from the clients, the server performs the fusion operation by means common in the prior art. Specifically, the fusion may adopt various known approaches, such as a weighted-average operation or an operation based on the L2 norm of each model; the specific means is not limited in this embodiment, and it should be understood that any method of fusing the models can be applied in the present application. A minimal sketch of a plain parameter-averaging fusion is given below.
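Since the patent leaves the fusion method open, the following is a minimal sketch of the simplest choice — an element-wise (optionally weighted) average of the client models' parameters in PyTorch; the function name and the uniform default weights are illustrative assumptions.

```python
import copy
import torch

def fuse_models(client_models, weights=None):
    """Fuse adjusted client models into an average model (element-wise weighted mean)."""
    n = len(client_models)
    weights = weights or [1.0 / n] * n            # plain average if no weights are given
    avg_state = copy.deepcopy(client_models[0].state_dict())
    for key in avg_state:
        avg_state[key] = sum(w * m.state_dict()[key].float()
                             for w, m in zip(weights, client_models))
    fused = copy.deepcopy(client_models[0])
    fused.load_state_dict(avg_state)
    return fused
```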
The average model obtained in step S300 is based on the fusion of multiple clients, but after only one fusion it may not yet achieve high accuracy on each client's new, small amount of data; therefore, in some embodiments, as shown in fig. 4, a step of evaluating the accuracy of the average model needs to be added:
S400: the server obtains a test data set containing the data samples stored by all the clients and evaluates the accuracy of the average model on this test data set to obtain an evaluation result. The test data set is equivalent to an integration of the data samples stored at each client and has the same composition as those samples; it is likewise divided into a plurality of data packets by category, where each data packet is organized as N-way K-shot, N being the number of randomly extracted categories in each packet and K the number of data samples contained in each category.
Evaluating the accuracy of the average model with the test data set means judging whether the average model can be applied to all data samples in the test data set. If there are data samples on which the accuracy is low, the average model is considered not to meet the requirement and further adjustment continues to be executed; if the average model meets the requirement, it can be taken as the final model, and the ongoing training and communication processes can be stopped, saving resource consumption and improving efficiency. A sketch of this evaluation check follows step S420 below.
S410: if the evaluation result meets the requirement, stopping data communication and training;
S420: if the evaluation result does not meet the requirement, indicating that the current average model needs further adjustment, steps S200-S300 are re-executed: the data packet is re-extracted and the inner loop and outer loop are performed again.
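Below is a minimal sketch of the accuracy check of steps S400-S420, assuming the server evaluates the average model packet by packet on the test set and requires every packet to reach a target accuracy; the threshold value and helper name are illustrative assumptions.

```python
import torch

@torch.no_grad()
def evaluate_average_model(model, test_packets, threshold=0.9):
    """Return True if the fused model meets the accuracy requirement on every test packet."""
    model.eval()
    for x, y in test_packets:                      # each packet is an N-way K-shot batch
        pred = model(x).argmax(dim=1)
        accuracy = (pred == y).float().mean().item()
        if accuracy < threshold:                   # a low-accuracy packet -> keep training
            return False
    return True                                    # requirement met -> stop communication
```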
In the present application, a meta-learning algorithm suited to small-sample learning is introduced at the client, so that the end-side model can be adjusted to the mode best suited to a new category. Even if new categories are added and the fused model no longer meets the requirement, the model can be adjusted in a few simple steps; that is, in the practical application of step S420, the number of cycles is extremely small, and the data communication and training process can be stopped after only a few cycles.
Referring to fig. 5, which is a flowchart of one embodiment of the method provided by the present application: first, the server may number the training data set by category and sample it to form the data packets of the training set, each in the form of 5-way 5-shot. For each data packet to be divided, 5 categories are randomly extracted from the remaining data, and 5 not-yet-extracted samples are extracted from the 500 samples of each category, forming a 5-way 5-shot packet. 50 test data packets may also be formed, each likewise in the form of 5-way 5-shot and constructed in the same manner as the training-set data packets, but not distributed to any client.
After the data packets have been divided, part of the training-set data packets are distributed to 10 clients, with no data packet repeated between clients.
The training is divided into an inner loop (local loop) and an outer loop (global loop). Within an outer-loop round, the inner loops of the clients run in parallel: at the start of the round the server communicates the updated model parameters to each client; each client performs its inner and outer loop with its stored data packets and the received model parameters, continuously updating its model; after the inner and outer loops finish, the client model is evaluated once on the test set. If the new model meets the requirement, training and communication stop; otherwise the inner and outer loops are performed again with the new model. A sketch of this overall flow is given below.
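Tying the pieces together, the following is a minimal sketch of this overall flow, reusing the hypothetical helpers sketched above (`meta_adjust`, `fuse_models`, `evaluate_average_model`); the client-selection fraction and the round limit are illustrative assumptions.

```python
import copy
import random

def collaborative_training(global_model, client_data, test_packets,
                           fraction=0.2, max_rounds=50):
    """Multi-terminal collaborative training fusing meta-learning (sketch)."""
    for _ in range(max_rounds):
        # The server selects a fraction of clients and sends them the current parameters.
        selected = random.sample(client_data, max(1, int(fraction * len(client_data))))
        adjusted = []
        for data_by_class in selected:
            local_model = copy.deepcopy(global_model)                  # load the model locally
            adjusted.append(meta_adjust(local_model, data_by_class))   # client inner/outer loop
        # The server fuses the adjusted models into an average model.
        global_model = fuse_models(adjusted)
        # Stop communication and training once the average model meets the requirement.
        if evaluate_average_model(global_model, test_packets):
            break
    return global_model
```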
According to the technical scheme, the multi-terminal collaborative training algorithm fusing meta-learning comprises: a client loads a locally stored training model and initializes the weight parameters of the network; the client adjusts the training model with a meta-learning algorithm on its locally stored data samples to obtain an adjusted model; and the server fuses the adjusted models transmitted from the plurality of clients to obtain an average model. On the basis of federated learning, a meta-learning algorithm designed for the small-sample setting (i.e., a small amount of training data) is introduced at each client, so that meta-information in the small number of samples can be extracted efficiently during training, the trained model transfers better to new data, and the client models trained in this way achieve higher processing accuracy on the data sets of other clients after being fused at the server.
Corresponding to the algorithm, the application also provides a multi-terminal collaborative training system fusing meta-learning, which comprises:
the system comprises a server and a plurality of clients in communication connection with the server;
the client is configured to perform the following method:
loading a training model located locally and initializing a weight parameter of a network;
adjusting the training model by using a locally stored data sample and adopting a meta-learning algorithm to obtain an adjusted model;
sending the adjusted model to a server;
the server is configured to perform the following method:
and carrying out fusion operation on the adjusted models transmitted from the plurality of clients to obtain an average model.
Further, the server is further configured to:
obtaining a test data set containing data samples stored by all clients, and evaluating the precision of the average model according to the test data set to obtain an evaluation result;
if the evaluation result meets the requirement, stopping data communication and training;
and if the evaluation result is not in accordance with the requirement, sending a control instruction to the corresponding client to enable the client to execute the step of adjusting the training model by using the locally stored data sample and adopting a meta-learning algorithm to obtain an adjusted model.
Further, the client is configured with:
the extraction unit is used for randomly extracting a data packet from the data samples stored locally;
and the parameter updating unit is used for updating the model parameters of the training model by utilizing the inner circulation and the outer circulation.
Updating the model parameters of the training model using the inner loop comprises:
establishing a plurality of tasks, wherein each task applies a gradient-descent rule to obtain an updated parameter $\theta_i'$ from the original parameter $\theta$ of the training model, where $i$ denotes the $i$-th task;
computing the cross-entropy loss $L_{T_i}$ according to the updated parameter $\theta_i'$, the total cross-entropy loss being obtained by summing, over all tasks, the losses $L_{T_i}$ computed with the updated parameters $\theta_i'$;
the model parameters of the training model updated by the outer loop are obtained using the following formula:

$$\theta_n = \theta - \beta \nabla_{\theta} \sum_{T_i} L_{T_i}\big(f_{\theta_i'}\big)$$

where $\theta_n$ is the model parameter of the adjusted model.
For the functions of the system provided by this embodiment, reference is made to the description in the foregoing method embodiments, which is not repeated here.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.

Claims (10)

1. A multi-terminal collaborative training algorithm fusing meta-learning is characterized by comprising the following steps:
loading a training model located locally by a client and initializing a weight parameter of a network;
the client side adjusts the training model by using a locally stored data sample and a meta-learning algorithm to obtain an adjusted model;
and the server performs fusion operation on the adjusted models transmitted from the plurality of clients to obtain an average model.
2. The multi-terminal collaborative training algorithm fusing meta-learning according to claim 1, wherein after obtaining an average model, the algorithm further comprises:
the server obtains a test data set containing data samples stored by all the clients, and evaluates the precision of the average model according to the test data set to obtain an evaluation result;
if the evaluation result meets the requirement, stopping data communication and training;
and if the evaluation result is not in accordance with the requirement, re-executing the step that the client side adjusts the training model by using the locally stored data sample and adopting a meta-learning algorithm to obtain an adjusted model.
3. The multi-terminal collaborative training algorithm fusing meta-learning according to claim 2, wherein the data samples in the test data set are divided into a plurality of data packets according to different categories, wherein each data packet is represented by N-way K-shot, N is the number of randomly extracted categories in each data packet, way is a category, K is the number of data samples included in each category, and shot is a data unit.
4. The multi-terminal collaborative training algorithm with meta-learning fused according to claim 3, wherein the step of adapting the training model with the meta-learning algorithm comprises:
a client randomly extracts a data packet from a locally stored data sample;
updating model parameters of the training model using an inner loop and an outer loop.
5. The multi-terminal collaborative training algorithm fusing meta-learning according to claim 4, wherein updating the model parameters of the training model by using inner loop comprises:
establishing a plurality of tasks, wherein each task applies a gradient-descent rule to obtain an updated parameter $\theta_i'$ from the original parameter $\theta$ of the training model, where $i$ denotes the $i$-th task;
computing the cross-entropy loss $L_{T_i}$ according to the updated parameter $\theta_i'$, the total cross-entropy loss being obtained by summing, over all tasks, the losses $L_{T_i}$ computed with the updated parameters $\theta_i'$.
6. The multi-terminal collaborative training algorithm with fusion of meta-learning according to claim 5, wherein the model parameters for the outer loop updating of the training model are obtained by using the following formula:
$$\theta_n = \theta - \beta \nabla_{\theta} \sum_{T_i} L_{T_i}\big(f_{\theta_i'}\big)$$

where $\theta_n$ is the model parameter of the adjusted model, $\beta$ is the learning rate, $T_i$ denotes the $i$-th task, $\sum_{T_i}(\cdot)$ denotes summation over tasks, and $f_{\theta_i'}$ denotes the model using the updated parameter $\theta_i'$.
7. A multi-terminal collaborative training system fusing meta-learning is characterized by comprising a server and a plurality of clients in communication connection with the server;
the client is configured to perform the following method:
loading a training model located locally and initializing a weight parameter of a network;
adjusting the training model by using a locally stored data sample and adopting a meta-learning algorithm to obtain an adjusted model;
sending the adjusted model to a server;
the server is configured to perform the following method:
and carrying out fusion operation on the adjusted models transmitted from the plurality of clients to obtain an average model.
8. The multi-terminal collaborative training system fusing meta-learning according to claim 7, wherein the server is further configured to:
obtaining a test data set containing data samples stored by all clients, and evaluating the precision of the average model according to the test data set to obtain an evaluation result;
if the evaluation result meets the requirement, stopping data communication and training;
and if the evaluation result is not in accordance with the requirement, sending a control instruction to the corresponding client to enable the client to execute the step of adjusting the training model by using the locally stored data sample and adopting a meta-learning algorithm to obtain an adjusted model.
9. The multi-terminal collaborative training system fusing meta-learning according to claim 8, wherein the client is configured with:
the extraction unit is used for randomly extracting a data packet from the data samples stored locally;
and the parameter updating unit is used for updating the model parameters of the training model by utilizing the inner circulation and the outer circulation.
10. The multi-terminal collaborative training system fusing meta-learning according to claim 9, wherein updating the model parameters of the training model using inner loop comprises:
establishing a plurality of tasks, wherein each task applies a gradient-descent rule to obtain an updated parameter $\theta_i'$ from the original parameter $\theta$ of the training model, where $i$ denotes the $i$-th task;
computing the cross-entropy loss $L_{T_i}$ according to the updated parameter $\theta_i'$, the total cross-entropy loss being obtained by summing, over all tasks, the losses $L_{T_i}$ computed with the updated parameters $\theta_i'$;
the model parameters of the training model updated by the outer loop are obtained using the following formula:

$$\theta_n = \theta - \beta \nabla_{\theta} \sum_{T_i} L_{T_i}\big(f_{\theta_i'}\big)$$

where $\theta_n$ is the model parameter of the adjusted model, $\beta$ is the learning rate, $T_i$ denotes the $i$-th task, $\sum_{T_i}(\cdot)$ denotes summation over tasks, and $f_{\theta_i'}$ denotes the model using the updated parameter $\theta_i'$.
CN202011033398.0A 2020-09-27 2020-09-27 Multi-terminal collaborative training algorithm and system fusing meta learning Pending CN114330464A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011033398.0A CN114330464A (en) 2020-09-27 2020-09-27 Multi-terminal collaborative training algorithm and system fusing meta learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011033398.0A CN114330464A (en) 2020-09-27 2020-09-27 Multi-terminal collaborative training algorithm and system fusing meta learning

Publications (1)

Publication Number Publication Date
CN114330464A true CN114330464A (en) 2022-04-12

Family

ID=81011183

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011033398.0A Pending CN114330464A (en) 2020-09-27 2020-09-27 Multi-terminal collaborative training algorithm and system fusing meta learning

Country Status (1)

Country Link
CN (1) CN114330464A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115297008A (en) * 2022-07-07 2022-11-04 鹏城实验室 Intelligent computing network-based collaborative training method and device, terminal and storage medium
CN115297008B (en) * 2022-07-07 2023-08-22 鹏城实验室 Collaborative training method, device, terminal and storage medium based on intelligent computing network

Similar Documents

Publication Publication Date Title
Xu et al. Trust-aware service offloading for video surveillance in edge computing enabled internet of vehicles
US10691494B2 (en) Method and device for virtual resource allocation, modeling, and data prediction
CN111625361B (en) Joint learning framework based on cooperation of cloud server and IoT (Internet of things) equipment
CN114219097B (en) Federal learning training and predicting method and system based on heterogeneous resources
CN107403173A (en) A kind of face identification system and method
CN111222628B (en) Method, device, system and readable storage medium for optimizing training of recurrent neural network
Tuor et al. Demo abstract: Distributed machine learning at resource-limited edge nodes
CN110458572B (en) User risk determining method and target risk recognition model establishing method
CN113408209A (en) Cross-sample federal classification modeling method and device, storage medium and electronic equipment
CN113469373A (en) Model training method, system, equipment and storage medium based on federal learning
CN111813539A (en) Edge computing resource allocation method based on priority and cooperation
CN115249073A (en) Method and device for federated learning
WO2024045581A1 (en) Privacy protection data sharing method and system based on distributed gan
CN110929041A (en) Entity alignment method and system based on layered attention mechanism
WO2022217210A1 (en) Privacy-aware pruning in machine learning
CN115481441A (en) Difference privacy protection method and device for federal learning
CN113642700A (en) Cross-platform multi-modal public opinion analysis method based on federal learning and edge calculation
CN113902131A (en) Updating method of node model for resisting discrimination propagation in federal learning
Liu et al. Task offloading optimization of cruising UAV with fixed trajectory
CN114330464A (en) Multi-terminal collaborative training algorithm and system fusing meta learning
Lorido-Botran et al. ImpalaE: Towards an optimal policy for efficient resource management at the edge
CN110874638B (en) Behavior analysis-oriented meta-knowledge federation method, device, electronic equipment and system
CN112052399A (en) Data processing method and device and computer readable storage medium
CN114021473A (en) Training method and device of machine learning model, electronic equipment and storage medium
CN113946758B (en) Data identification method, device, equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination