CN117389734A - Federated learning node selection method based on gradient difference - Google Patents

Federated learning node selection method based on gradient difference

Info

Publication number
CN117389734A
Authority
CN
China
Prior art keywords
client
model
data
gradient
clients
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311399520.XA
Other languages
Chinese (zh)
Inventor
王高丽
吉白
刘虹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Yishi Intelligent Technology Co ltd
East China Normal University
Original Assignee
Shanghai Yishi Intelligent Technology Co ltd
East China Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Yishi Intelligent Technology Co ltd, East China Normal University filed Critical Shanghai Yishi Intelligent Technology Co ltd
Priority to CN202311399520.XA priority Critical patent/CN117389734A/en
Publication of CN117389734A publication Critical patent/CN117389734A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5027 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5061 Partitioning or combining of resources
    • G06F 9/5072 Grid computing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/048 Activation functions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/098 Distributed learning, e.g. federated learning
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a federated learning node selection method based on gradient differences, in which nodes that contribute positively to the global model are screened out by analyzing the gradient information of each client model. The method comprises the following steps: system initialization, client data preprocessing, local model training, node selection based on gradient differences, global model updating, and model performance evaluation. Compared with the prior art, the method keeps data on the client under federated learning, so raw data never leaves the client, which helps preserve data privacy, while the global model still benefits from data diversity. It therefore effectively addresses the uneven data distribution and uneven node data quality that are common in federated learning, is easy to implement, and has broad application prospects, particularly in scenarios that require collaborative modeling across multiple data sources while guaranteeing data privacy.

Description

Federated learning node selection method based on gradient difference
Technical Field
The invention relates to the technical field of machine learning, and in particular to a federated learning node selection method based on gradient differences.
Background
Federated learning is a distributed machine learning method whose core objective is to enable multiple data sources to train a model jointly while ensuring data privacy and security, so as to optimize the performance of the machine learning model. Federated learning allows data holders to collaborate on modeling without directly exchanging raw data, thereby ensuring that data privacy is not compromised. A typical federated learning procedure is as follows: first, a central server creates an initial model and shares it with all participants; each participant trains this model on its local data and computes model updates; these updates are then returned to the central server; finally, the central server integrates all of the updates to optimize the global model.
However, conventional federated learning strategies face several challenges. In each round of training, the central server either selects all nodes or randomly selects a proportion of the nodes for local training. Because of the distributed setting, the data owned by the nodes is often unevenly distributed: the data of some nodes may cover only a limited set of sample classes, or may differ significantly from the global data distribution. If nodes with skewed data distributions are selected for training, the model updates they generate may adversely affect the global model, reducing model performance or slowing convergence.
Federated learning in the prior art therefore suffers from inefficient node selection and degraded model performance. How to select nodes efficiently so as to ensure the robustness and performance of the global model has become a key issue in federated learning, and it requires analyzing the data distributions and model gradients of the participating nodes in depth in order to optimize the training of the global model.
Disclosure of Invention
The invention aims to provide a federated learning node selection method based on gradient differences that addresses the defects of the prior art. A lightweight convolutional neural network (CNN) is adopted as the learning model; the gradient information of each client model is analyzed and node screening is performed using gradient differences, so that nodes which contribute positively to the global model are selected while nodes with poor data quality or large deviations in data distribution are avoided. The lightweight CNN is designed for images and multidimensional data and can effectively capture spatial features in the data, improving the prediction accuracy of the model. Under the federated learning architecture, data always remains on each client and does not need to be uploaded to the central server, which safeguards data privacy while the global model still benefits from the data diversity of the nodes. The method improves the training efficiency and performance of the global model, guarantees the security of data privacy, is simple and easy to implement, and has broad application prospects, being particularly suitable for scenarios in which multiple data sources need to model collaboratively while data privacy is guaranteed.
The specific technical scheme for realizing the aim of the invention is as follows: a federated learning node selection method based on gradient differences, characterized in that node screening is performed using gradient differences and a lightweight convolutional neural network (CNN) is adopted as the learning model; the method specifically comprises the following steps:
step A: system initialization
The central server announces the federated learning task and invites clients to participate; each client decides, according to its own configuration, whether to respond to the request, and a client that chooses to participate receives the initial model parameters from the server.
Step B: Client data preprocessing
Each client divides the data it holds into a training data set and a test data set, and then performs normalization and batch processing on the data to facilitate model training.
Step C: local model training
Each client trains on the local data using the CNN model, calculates model gradients, and uploads these gradients to the central server.
Step D: node selection based on gradient differences
The server receives gradient information from each client, performs node screening based on gradient differences, and the selected nodes participate in the model updating of the next round.
Step E: global model update
The server aggregates and updates the global model according to the gradient information of the selected client and sends updated model parameters back to the selected client.
Step F: model performance assessment
The server evaluates the performance of the model after each global update, and the training process is terminated after a preset performance standard or iteration number is reached.
The step A further comprises the following steps:
step A1: the central server broadcasts the modeling inviting each client to participate in the federal learning process.
Step A2: after receiving the invitation, the client can decide whether to participate in the modeling task, and the client decided to participate will agree with the server and log in to the central system through a secure connection.
Step A3: the server will distribute the preliminary model parameters for the participating clients, which each client downloads and initializes its local model.
The step B further comprises the following steps:
step B1: each client first examines and collates its local data, as it may be collected by different data sources or devices, to ensure that they are complete and continuous.
Step B2: each client divides its data into two parts, a training data set and a test data set, to ensure that the model can be validated and tested at a later stage.
Step B3: in order to make the data have comparability among different clients and ensure the stability of the model, each client can perform standardization processing on the data, such as adjusting the data to be in the range of 0-1.
Step B4: at the final stage of data preprocessing, the client organizes the data into a format suitable for deep learning model processing, such as reorganizing the data into a series of batches, each batch including the input data and corresponding tags.
The step C further comprises:
step C1: the constructed CNN model has the following structure: the model consists of a plurality of convolution layers and a full connection layer, wherein the input feature number received by the first convolution layer of the model is 1, the output feature number is 32, a convolution kernel of 5x5 is used, the step length is 1, no filling is performed, and a ReLU activation function and a 2x2 maximum pooling layer are matched; the second convolution layer converts the 32 feature maps into 64 feature maps, and the convolution kernel of 5x5, the step length of 1 and no filling are used, and the maximum pooling layer of 2x2 is matched with the ReLU activation function; the feature map processed by the two convolution layers is flattened and then input into a full connection layer; the input dimension of this fully connected layer is 1024, the output dimension is 512, and the ReLU activation function is used; followed by another fully connected layer, converting 512 features into 10 categories.
Step C2: each client transmits own data into the CNN model and trains locally, and after training is completed, the gradient of the model is calculated and uploaded to a central server.
The step D further includes:
step D1: receiving gradients of training models
The central server receives the gradient of its local training model from each client.
Step D2: the central server calculates the degree of difference based on gradient information between each pair of clients
By comparing the gradients of the two clients and calculating the L2 norm between them, a difference matrix is obtained, wherein each element represents the difference between the two clients.
Step D3: random greedy node selection based on degree of variance
In each iteration, the central server randomly selects a group of clients and, from this group, selects the client with the smallest difference from the already selected client set; this ensures that the selected client set is representative while maintaining gradient diversity.
Step D4: returning selected clients
The central server returns the selected set of clients to the system; these clients will participate in the next round of model updating.
Compared with the prior art, the method better addresses the uneven data distribution and uneven node data quality that are common in federated learning, and the node selection strategy based on gradient differences greatly improves the node selection efficiency and model performance of federated learning. The strategy ensures that more representative and valuable nodes are selected with higher probability to participate in the next round of model updating, thereby accelerating model convergence and improving the performance of the final model. Experimental results show that the method improves both model training efficiency and accuracy.
Drawings
FIG. 1 is a block diagram of a system constructed in accordance with the present invention;
FIG. 2 is a flow chart of the present invention;
FIG. 3 is a diagram of the CNN model structure.
Detailed Description
The processes, conditions, experimental methods, and the like for carrying out the invention are described in further detail below with reference to specific embodiments and the accompanying drawings. Apart from the content specifically mentioned below, they follow general knowledge and common practice in the art, and the invention is not particularly limited thereto.
Example 1
Referring to FIG. 1, the invention involves a central server and M client nodes {Node_1, Node_2, ..., Node_M}, wherein: the central server acts as the coordinator of federated learning; it initializes the model parameters, distributes them to the edge devices, and is responsible for centrally managing and updating the global model. The client nodes are the actual computing nodes that participate in federated learning. Each device trains the model on its own local data, computes gradient values h_1, h_2, ..., h_M, and transmits these gradients back to the central server. When the central server receives the gradient information from the clients, it averages the gradients using the federated averaging strategy to update the global model. At the same time, the central server performs node screening based on gradient differences and distributes the updated model parameters to the selected nodes, which participate in the next round of model updating. This process continues until a preset number of iterations is reached, a specific performance index is met, or any other stopping condition defined by the central server is satisfied.
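For illustration only, the coordination loop described above can be sketched as follows in Python; this is not the authoritative implementation of the invention, and the helper names (get_parameters, apply_update, local_train, evaluate, select_by_gradient_difference) are hypothetical placeholders, with select_by_gradient_difference sketched under steps D3-D4 below.

```python
import numpy as np

def run_federated_training(server_model, clients, rounds, n_selected, target_acc=0.99):
    """Hypothetical coordinator loop: broadcast parameters, collect client
    gradients, apply federated averaging, then screen clients by gradient
    difference for the next round."""
    selected = list(clients.keys())                     # first round: all responding clients
    for _ in range(rounds):
        params = server_model.get_parameters()          # distribute the current global model
        grads = {cid: np.asarray(clients[cid].local_train(params))
                 for cid in selected}                   # step C: local training and gradients
        avg_grad = np.mean(list(grads.values()), axis=0)
        server_model.apply_update(avg_grad)             # step E: federated averaging update
        selected = select_by_gradient_difference(grads, n_selected)  # step D screening
        if server_model.evaluate() >= target_acc:       # step F: stop condition
            break
    return server_model
```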
Referring to FIG. 2, the invention performs client node selection in a federated learning environment based on gradient differences, and specifically includes the following steps:
step A: system initialization
The central server announces the federated learning task and invites clients to participate; each client decides, according to its own configuration, whether to respond to the request, and a client that chooses to participate receives the initial model parameters from the server.
Step B: Client data preprocessing
Each client divides the data it holds into a training data set and a test data set, and then performs normalization and batch processing on the data to facilitate model training.
Step C: local model training
Each client trains on the local data using the CNN model, calculates model gradients, and uploads these gradients to the central server.
Step D: node selection based on gradient differences
The server receives gradient information from each client, performs node screening based on gradient differences, and the selected nodes participate in the model updating of the next round.
Step E: global model update
The server aggregates and updates the global model according to the gradient information of the selected client and sends updated model parameters back to the selected client.
Step F: model performance assessment
After each global update, the server evaluates the performance of the model; when the preset performance criterion or number of iterations is reached, the training process is terminated.
The step A further comprises the following steps:
step A1: the central server broadcasts the modeling inviting each client to participate in the federal learning process.
Step A2: after receiving the invitation, the client can decide whether to participate in the modeling task, and the client decided to participate will agree with the server and log in to the central system through a secure connection.
Step A3: the server will distribute preliminary model parameters for the participating clients, each of which then downloads and initializes its local model.
The step B further comprises the following steps:
step B1: each client first examines and collates its local data, which may be collected by different data sources or devices, to ensure that they are complete and continuous.
Step B2: each client divides its data into two parts: training data sets and test data sets to ensure that the model can be validated and tested at a later stage.
In this embodiment, the MNIST and CIFAR-10 datasets are selected as the image classification datasets; specifically, MNIST consists of handwritten digits, while CIFAR-10 consists of 60000 32x32 color images in 10 classes. Both iid and non-iid partitions of MNIST and CIFAR-10 are used: in the iid partition, the classes of the data on each node are uniform, so the independent and identically distributed condition is satisfied; in the non-iid partition, the dataset is distributed evenly in quantity across the nodes, but the classes on each node may not be uniform. The number of client nodes is set to 10.
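As a minimal sketch only, the iid and non-iid partitions described above could be produced along the following lines, assuming `labels` is the array of class labels of the full training set and 10 client nodes. The shard-based non-iid scheme shown here is an assumption: the embodiment only states that the non-iid split is even in quantity but possibly uneven in classes.

```python
import numpy as np

def iid_partition(labels, n_clients=10, seed=0):
    """Shuffle all sample indices and split them evenly: every node sees all classes."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(labels))
    return np.array_split(idx, n_clients)

def non_iid_partition(labels, n_clients=10, shards_per_client=2, seed=0):
    """Sort indices by label, cut them into shards, and give each node a few shards:
    roughly equal quantity per node, but only a few classes per node."""
    rng = np.random.default_rng(seed)
    shards = np.array_split(np.argsort(labels), n_clients * shards_per_client)
    perm = rng.permutation(len(shards))
    return [np.concatenate([shards[j] for j in perm[i::n_clients]])
            for i in range(n_clients)]
```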
Step B3: in order to make the data have comparability among different clients and ensure the stability of the model, each client can perform standardized processing on the data, such as adjusting the data to be in the range of 0-1;
step B4: at the final stage of data preprocessing, the client organizes the data into a format suitable for deep learning model processing, such as reorganizing the data into a series of batches, each batch including the input data and corresponding tags.
The step C further comprises:
step C1: model construction
The structure of the constructed CNN model is as follows: the model consists of two convolution layers and two fully connected layers. The first convolution layer receives 1 input channel and outputs 32 feature maps, using a 5x5 convolution kernel with a stride of 1 and no padding, followed by a ReLU activation function and a 2x2 max pooling layer; the second convolution layer converts the 32 feature maps into 64 feature maps, again using a 5x5 convolution kernel with a stride of 1 and no padding, followed by a ReLU activation function and a 2x2 max pooling layer. The feature maps processed by the two convolution layers are flattened and fed into a fully connected layer with an input dimension of 1024 and an output dimension of 512, using a ReLU activation function. A further fully connected layer then converts the 512 features into 10 categories.
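A minimal PyTorch sketch of this network is given below for illustration; it assumes 28x28 single-channel inputs (e.g. MNIST), for which the flattened feature size after the two convolution and pooling stages is 64 x 4 x 4 = 1024, matching the stated input dimension of the first fully connected layer.

```python
import torch.nn as nn

class SimpleCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=5, stride=1, padding=0),   # 28x28 -> 24x24
            nn.ReLU(),
            nn.MaxPool2d(2),                                        # 24x24 -> 12x12
            nn.Conv2d(32, 64, kernel_size=5, stride=1, padding=0),  # 12x12 -> 8x8
            nn.ReLU(),
            nn.MaxPool2d(2),                                        # 8x8 -> 4x4
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),                 # 64 * 4 * 4 = 1024
            nn.Linear(1024, 512),
            nn.ReLU(),
            nn.Linear(512, num_classes),  # 512 features -> 10 categories
        )

    def forward(self, x):
        return self.classifier(self.features(x))
```

For CIFAR-10 (3-channel 32x32 images), the input channel count and the flattened size would need to be adapted accordingly.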
Step C2: gradient computation of model
Each client feeds its own data into the CNN model and trains locally; after training is completed, the model gradient is calculated and uploaded to the central server.
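The description does not fix how the uploaded "model gradient" is formed from local training; as one illustrative reading only, the sketch below trains locally and uploads the parameter change over the local pass (scaled by the learning rate) as a single flattened pseudo-gradient vector. The function name and hyperparameters are assumptions.

```python
import torch
import torch.nn.functional as F

def local_train(model, loader, lr=0.01, epochs=1):
    """Train the CNN on local batches, then return one flattened pseudo-gradient."""
    initial = torch.cat([p.detach().flatten().clone() for p in model.parameters()])
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for x, y in loader:                      # batches of (inputs, labels) from step B4
            opt.zero_grad()
            loss = F.cross_entropy(model(x), y)  # classification loss over the 10 categories
            loss.backward()
            opt.step()
    final = torch.cat([p.detach().flatten() for p in model.parameters()])
    return ((initial - final) / lr).numpy()      # vector uploaded to the central server
```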
The step D further includes:
step D1: receiving gradients of training models
The central server receives the gradient of its local training model from each client.
Step D2: the central server calculates the degree of difference based on gradient information between each pair of clients
A difference matrix is obtained by comparing the gradients of each pair of clients and calculating the L2 norm of their difference, where each element represents the degree of difference between the two clients. For each pair of clients i and j in the client set, the gradient difference D(i, j) is calculated as defined by the following equation (a):
D(i, j) = ||G_i - G_j||_2    (a)
where G_i and G_j are the model gradients of clients i and j, respectively, and ||·||_2 denotes the L2 norm.
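A short sketch of step D2, given purely for illustration: assuming the flattened gradient vectors uploaded in step C2 are held in a dict mapping client id to a NumPy vector (an assumed data layout), the pairwise difference matrix of equation (a) can be computed as follows.

```python
import numpy as np

def difference_matrix(grads):
    """Pairwise gradient difference D(i, j) = ||G_i - G_j||_2 for all client pairs."""
    ids = sorted(grads)                     # fixed ordering of client ids
    n = len(ids)
    D = np.zeros((n, n))
    for a in range(n):
        for b in range(a + 1, n):
            d = np.linalg.norm(grads[ids[a]] - grads[ids[b]])   # L2 norm, equation (a)
            D[a, b] = D[b, a] = d
    return ids, D
```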
Step D3: random greedy node selection based on degree of variance
In each iteration, the central server randomly selects a group of clients and, from this group, selects the client with the smallest difference from the already selected client set; this ensures that the selected client set is representative while maintaining gradient diversity.
A subset S is randomly selected from all clients, and greedy node selection is then performed within it. For each client k in the subset S, its average degree of difference from the selected client set C is calculated by the following equation (b):
D_avg(k) = (1/|C|) Σ_{c ∈ C} D(k, c)    (b)
where D_avg(k) is the average degree of difference between client k and the selected client set C. The client k* with the smallest D_avg(k) in the subset S is then found and added to the selected client set C. The greedy selection step is repeated until a predetermined number of clients is reached or another stopping criterion is met.
Step D4: returning selected clients
The central server returns the selected set of clients to the system; these clients will participate in the next round of model updating.
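The following sketch, given for illustration only, combines steps D3 and D4 using the difference matrix above and equation (b). The subset size, the way the first member of C is chosen, and the number of clients to keep are assumptions; the description only fixes the random-subset-then-greedy-minimum rule and a predetermined stopping count.

```python
import numpy as np

def select_by_gradient_difference(grads, n_selected, subset_size=5, seed=0):
    """Random greedy selection: repeatedly sample a subset S and add the client
    with the smallest average difference D_avg(k) to the selected set C."""
    rng = np.random.default_rng(seed)
    ids, D = difference_matrix(grads)                  # step D2 sketch above
    pos = {cid: i for i, cid in enumerate(ids)}
    remaining = list(ids)
    chosen = [remaining.pop(int(rng.integers(len(remaining))))]   # seed C with one client
    while len(chosen) < n_selected and remaining:
        subset = rng.choice(remaining, size=min(subset_size, len(remaining)),
                            replace=False)             # random group S
        d_avg = {k: np.mean([D[pos[k], pos[c]] for c in chosen]) for k in subset}
        k_star = min(d_avg, key=d_avg.get)             # smallest D_avg(k), equation (b)
        chosen.append(k_star)
        remaining.remove(k_star)
    return chosen                                      # step D4: clients for the next round
```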
Referring to FIG. 3, the CNN model according to the invention is as follows: the model consists of two convolution layers and two fully connected layers. The first convolution layer receives 1 input channel and outputs 32 feature maps, using a 5x5 convolution kernel with a stride of 1 and no padding, followed by a ReLU activation function and a 2x2 max pooling layer; the second convolution layer converts the 32 feature maps into 64 feature maps, again using a 5x5 convolution kernel with a stride of 1 and no padding, followed by a ReLU activation function and a 2x2 max pooling layer. The feature maps processed by the two convolution layers are flattened and fed into a fully connected layer with an input dimension of 1024 and an output dimension of 512, using a ReLU activation function. A further fully connected layer then converts the 512 features into 10 categories.
Compared with conventional federated learning methods, the method focuses on optimizing the node selection process and improving the efficiency of the whole model. In particular, it introduces gradient differences into the selection in order to address the uneven data distribution and uneven node data quality common in federated learning, ensuring that nodes holding more representative and more critical data are preferentially selected at every stage of federated learning. This also helps the model converge more quickly to a better state while maintaining efficient performance. Experimental results show that the method improves both model training efficiency and accuracy.
The invention is further described with reference to the following claims, which are not intended to limit the scope of the invention.

Claims (5)

1. A federated learning node selection method based on gradient differences, characterized in that a lightweight convolutional neural network (CNN) is adopted as the learning model and nodes that contribute positively to the global model are screened out by analyzing the gradient information of the client models, wherein the client node selection specifically comprises the following steps:
step A: system initialization
The central server announces the federated learning task and invites clients to participate; each client decides, according to its own configuration, whether to respond to the request, and a client that chooses to participate receives the initial model parameters from the server;
step B: Client data preprocessing
Each client divides the data held by the client into a training data set and a testing data set, and then performs standardization and batch processing on the data;
step C: local model training
Each client uses a CNN model to train on local data, calculates model gradients, and uploads the gradients to a central server;
step D: node selection based on gradient differences
The server receives gradient information from each client, performs node screening based on gradient difference, and the selected nodes participate in model updating of the next round;
step E: global model update
The server aggregates and updates the global model according to the gradient information of the selected client, and sends updated model parameters back to the selected client;
step F: model performance assessment
After each global update, the server evaluates the performance of the model, and when a predetermined performance criterion or number of iterations is reached, the training process is terminated.
2. The method for selecting a federated learning node based on gradient differences according to claim 1, wherein step A specifically comprises the steps of:
step A1: the central server broadcasts modeling to invite each client to participate in the federal learning process;
step A2: after receiving the invitation, the client decides whether to participate in the modeling task at this time, and the client decided to participate will agree with the server and log in to the central system through the secure connection;
step A3: the server distributes preliminary model parameters for the participating clients, each client downloads these parameters and initializes its local model.
3. The method for selecting a federated learning node based on gradient differences according to claim 1, wherein step B specifically comprises the steps of:
step B1: each client side checks and sorts the local data of the client side, so that the integrity and continuity of the data are ensured;
step B2: each client divides its local data into a training data set and a test data set;
step B3: each client performs standardization processing on local data, namely, the data is adjusted to be within the range of 0-1;
step B4: the client organizes the data into a format suitable for deep learning model processing, i.e., reorganizes the data into a series of batches, each batch including the input data and a corresponding tag.
4. The method for selecting a federated learning node based on gradient differences according to claim 1, wherein step C specifically comprises the steps of:
step C1: adopting a CNN model formed by two convolution layers and two full-connection layers, flattening the feature map processed by the two convolution layers, inputting the feature map into one full-connection layer, wherein the input dimension is 1024, the output dimension is 512, and using a ReLU activation function; another fully connected layer converts 512 features into 10 categories; the input feature number received by the first convolution layer is 1, the output feature number is 32, a convolution kernel of 5x5 is used, the step length is 1, no filling is performed, and a ReLU activation function and a 2x2 maximum pooling layer are matched; the second convolution layer converts the 32 feature maps into 64 feature maps, and a convolution kernel of 5x5 is used, the step length is 1, no filling is performed, and a ReLU activation function and a 2x2 maximum pooling layer are matched;
step C2: each client uploads own data to the CNN model, performs training locally, calculates the gradient of the model after the training is completed, and uploads the gradient to the central server.
5. The method for selecting a federated learning node based on gradient differences according to claim 1, wherein step D specifically comprises the steps of:
step D1: the central server receives the gradient of the local training model from each client;
step D2: the central server calculates the degree of difference based on the gradient information between each pair of clients, compares the gradients of the two clients and calculates the L2 norm between the two clients, thereby obtaining a degree of difference matrix, wherein each element represents the degree of difference between the two clients;
step D3: the method comprises the steps that random greedy node selection is conducted based on the difference degree, in each iteration, a group of clients are randomly selected by a central server, and a client with the smallest difference from a selected client set is selected from the group of clients;
step D4: the central server returns the selected set of clients to the system that will participate in the next round of model updates.
CN202311399520.XA 2023-10-26 2023-10-26 Federal learning node selection method based on gradient difference Pending CN117389734A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311399520.XA CN117389734A (en) 2023-10-26 2023-10-26 Federal learning node selection method based on gradient difference

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311399520.XA CN117389734A (en) 2023-10-26 2023-10-26 Federal learning node selection method based on gradient difference

Publications (1)

Publication Number Publication Date
CN117389734A true CN117389734A (en) 2024-01-12

Family

ID=89464527

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311399520.XA Pending CN117389734A (en) 2023-10-26 2023-10-26 Federal learning node selection method based on gradient difference

Country Status (1)

Country Link
CN (1) CN117389734A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118075914A (en) * 2024-04-18 2024-05-24 雅安数字经济运营有限公司 NVR and IPC automatic wireless code matching connection method



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination