CN114819181A - Multi-target federal learning evolution method based on improved NSGA-III - Google Patents

Multi-target federal learning evolution method based on improved NSGA-III

Info

Publication number
CN114819181A
Authority
CN
China
Prior art keywords
iii
learning
nsga
model
algorithm
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210396629.7A
Other languages
Chinese (zh)
Inventor
马武彬
钟佳淋
王翔汉
谢宇晗
吴亚辉
周浩浩
刘梦祥
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
National University of Defense Technology
Original Assignee
National University of Defense Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by National University of Defense Technology filed Critical National University of Defense Technology
Priority to CN202210396629.7A priority Critical patent/CN114819181A/en
Publication of CN114819181A publication Critical patent/CN114819181A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G06N3/048 Activation functions
    • G06N3/08 Learning methods
    • G06N3/086 Learning methods using evolutionary algorithms, e.g. genetic algorithms or genetic programming

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Mathematical Physics (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Physiology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention belongs to the field of artificial intelligence and discloses a multi-target federated learning evolution method based on improved NSGA-III, which comprises the following steps: acquiring learning data; constructing a federated learning multi-objective optimization model; performing fast greedy initialization and multi-objective evaluation; performing non-dominated sorting; performing iterations, where each iteration applies selection, crossover and mutation operators to produce an offspring population Q_t, carries out federated learning training evaluation on the offspring population, and computes the three objectives of each individual; mixing the parent and offspring populations, R_t = Q_t + P_t; performing non-dominated sorting on R_t and selecting P_{t+1} by reference points; finding the Pareto optimal solutions and outputting the labeling results corresponding to them. Compared with classical algorithms, the invention can obtain better Pareto solutions and, while ensuring the accuracy of the global model, reduces the communication cost and the accuracy distribution variance of the global model.

Description

Multi-target federal learning evolution method based on improved NSGA-III
Technical Field
The invention belongs to the technical field of artificial intelligence, and particularly relates to a multi-target federated learning evolution method based on improved NSGA-III.
Background
The rapid development of artificial intelligence has brought great convenience to society, but it also brings hidden dangers such as data islands and privacy disclosure. Traditional centralized machine learning needs to gather scattered data together for training, yet in practice data in many fields are difficult to pool; for example, data can hardly be shared among hospitals, so a serious data island problem exists. In addition, privacy disclosure has become a prominent problem: public awareness of privacy protection is gradually increasing, and countries around the world have enacted laws and regulations on privacy protection.
Federated learning thus emerged as a viable solution to the problems of data islands and privacy disclosure. Federated learning can train a good global model while keeping the data local to the participants. Each participant downloads the current global model from the server, trains it on local data, and uploads the trained local model to the server; the server aggregates and updates the models, and a global model with good performance is obtained through multiple iterations.
However, traditional federated learning still faces the challenges of high communication cost and structural heterogeneity. Transmitting parameters between the server and the participants consumes a large amount of communication cost; meanwhile, because different participants have different computing and storage capacities and different network environments, participants may drop offline or lose transmitted model parameters during training, which affects the efficiency, accuracy and fairness of federated learning. Much existing literature studies communication cost or structural heterogeneity separately, but few works consider these issues jointly.
Aiming at these problems, the invention jointly considers communication cost and structural heterogeneity and studies the trade-off among model effectiveness, fairness and communication cost. The method first formulates federated learning as a three-objective optimization model that simultaneously maximizes the accuracy of the global model and minimizes both the accuracy distribution variance of the global model and the communication cost. Combining the characteristics of federated learning training, the initialization of the 3rd-generation non-dominated sorting genetic algorithm (NSGA-III) is improved, and an NSGA-III algorithm for multi-objective federated learning based on fast greedy initialization (FNSGA-III) is designed. Experimental results show that the FNSGA-III algorithm can balance the three objectives and can effectively reduce the communication cost and the variance of the accuracy distribution across participants without seriously degrading the overall performance of the FL model, so that the accuracy distribution of the participants is more balanced. The main work of the invention includes:
(1) To the best of the authors' knowledge, this is the first work that jointly considers maximizing the global model accuracy, minimizing the variance of the global model accuracy distribution and minimizing the communication cost, and constructs a federated learning multi-objective optimization model.
(2) The FNSGA-III algorithm is proposed. To make the NSGA-III algorithm converge quickly, obtain high-quality solutions and better fit the federated learning multi-objective optimization model, an initial-solution construction algorithm based on fast greedy initialization is proposed, and a binary and real-valued encoding/decoding strategy is introduced to accelerate the evolution of the NSGA-III algorithm.
(3) Experiments on the MNIST dataset show that the Pareto solutions obtained by the FNSGA-III algorithm are superior to those of the NSGA-III algorithm: the hypervolume (HV) indicator of FNSGA-III reaches up to 127.55% of that of NSGA-III, and in the best case the running time of FNSGA-III is 73.96% of that of NSGA-III. In addition, FNSGA-III is compared with the classical evolutionary algorithms NSGA-II and SPEA2, and the results show that the Pareto solutions obtained by FNSGA-III are of higher quality. Finally, some Pareto solutions are selected for federated learning experiments; under the condition of preserving the global model accuracy, these Pareto solutions effectively reduce the communication cost and the accuracy distribution variance of the global model.
In recent years, federated learning has received wide attention. McMahan first proposed the concept of federated learning and the Federated Averaging algorithm (FedAvg) in 2016; the algorithm is of great practical significance for privacy protection and for machine learning training under data islands. Federated learning is being studied intensively, but it still faces challenges that have not been overcome, such as high communication cost and structural heterogeneity.
For federated learning to be applicable to massive data, its communication overhead must be reduced. The FedAvg algorithm proposed by McMahan et al. reduces the number of global communication rounds by increasing the local training computation in each round, thereby improving communication efficiency. Other researchers reduce the traffic by reducing the size of the parameters uploaded by the participants. Chen et al. propose a layered asynchronous update algorithm: according to the structure of the deep neural network model, the parameters are divided into shallow and deep parameters; in the earlier global communication rounds only the shallow parameters are transmitted between the local participants and the server, and the deep parameters of the global model are transmitted, aggregated and updated only in the last few rounds of communication. This algorithm reduces communication overhead by shrinking the transmitted model parameters and lowering the update frequency of the deep parameters in the neural network; its drawback is that model accuracy can be affected. Zhu et al. introduce the Sparse Evolutionary Training (SET) algorithm into federated learning; the main idea of SET is to control the connection sparsity between the fully connected layers of a neural network through a sparsity parameter, thereby reducing the size of the transmitted model parameters and effectively reducing the communication cost.
Besides communication cost, structural heterogeneity is one of the main issues in federated learning optimization. Because different participants have different computing and storage capacities and different network environments, participants may drop offline during training or lose transmitted model parameters. To enhance the robustness of federated learning, scholars have carried out various studies on the structural-heterogeneity problem. Hao et al. design a secure aggregation protocol that allows participants to exit at any time; as long as the number of remaining participants suffices for the federated learning update, the fault tolerance and robustness of the system are improved. Other researchers study how to allocate heterogeneous device resources reasonably; Kang et al. consider the cost differences of participants in order to encourage more high-quality participants to take part in federated learning training. Li et al. use the variance of the global model accuracy as a fairness metric and design the q-FFL (q-Fair Federated Learning) optimization algorithm, which increases the model aggregation weight of high-loss participants.
Each of the above studies addresses one aspect, communication cost or structural heterogeneity, and optimizes the federated learning algorithm to different degrees and for different objectives. In many applications, however, federated learning places simultaneous requirements on model accuracy, fairness, communication cost and so on. To balance multiple objectives under the federated learning framework, researchers have tried to combine intelligent optimization algorithms with federated learning. Zhu et al. formulate federated learning as a bi-objective optimization problem that minimizes the model test error rate and the communication cost, and use the NSGA-II (non-dominated sorting genetic algorithm II) algorithm to optimize the structural parameters of the federated learning neural network; compared with the standard FedAvg algorithm, the Pareto solutions obtained by their algorithm improve model performance and communication efficiency to a certain extent, but they do not consider situations such as unstable communication and unbalanced accuracy distribution among participants caused by structural heterogeneity, and the NSGA-II algorithm they use scales poorly to multi-objective federated learning models. Basheer et al. use a particle swarm algorithm to optimize the number of hidden layers, the number of neurons and the communication rounds of the federated learning neural network, but the optimization target is a single objective and the other goals of federated learning are not jointly considered.
Disclosure of Invention
Aiming at these problems, the invention jointly considers communication cost and structural heterogeneity, introduces fairness as an optimization objective, and explores the balance among model accuracy, fairness and communication cost in federated learning. The communication environment is set to be unstable in the experiments, which enhances the robustness of the algorithm.
The invention provides an improved-NSGA-III-based multi-target federated learning evolution method, which is applied to a server and a plurality of participants and comprises the following steps:
acquiring learning data, wherein the learning data is used for labeling;
constructing a multi-objective optimization model for federated learning, wherein the multi-objective optimization model comprises three objectives: maximizing the accuracy of the global model, minimizing the accuracy distribution variance of the global model, and minimizing the communication cost;
performing fast greedy initialization of the population P_t, t = 0, and performing multi-objective evaluation using the FedAvg algorithm;
performing non-dominated sorting on P_t;
performing iterations, wherein each iteration performs the following operations: applying selection, crossover and mutation operators to produce the offspring population Q_t; carrying out federated learning training evaluation on the offspring population and computing the three objectives of each individual; mixing the parent and offspring populations, R_t = Q_t + P_t; performing non-dominated sorting on R_t and selecting P_{t+1} by reference points;
Finding out the Pareto optimal solution, and outputting a labeling result corresponding to the Pareto optimal solution.
Further, in the course of the federated learning evolution, the loss function of the k-th participant, which owns dataset D_k, is:

L_k(w) = (1/n_k) Σ_{i∈D_k} l_i(w)        (1)

The global goal of the federated learning evolution method is to minimize the global loss function L(w) as follows:

min_w L(w) = Σ_{k=1}^{K} (n_k/n) L_k(w)        (2)

where k is the participant index, L_k(w) is the loss function of the k-th participant, l_i(w) is the loss on data sample i, n_k is the size of participant k's dataset D_k, n_k = |D_k|, and n is the total size of the data samples of the K participants.
Further, during each round of training in the federated learning process, each participant receives the global model w_t from the server and trains it on local data to obtain an updated local model w_{t+1}^k. The participant then sends the updated local model to the server, and the server aggregates the models according to a given rule to obtain a new global model w_{t+1} for the next round of iterative training; the subscript t denotes the federated learning communication round.
Further, the three-objective optimization model of the federated learning evolution is:

min F(v) = (f_1(v), f_2(v), f_3(v)),   v = (Conv, kc, ks, L, N, ε, η, C)

where F(v) is the objective function of the three-objective optimization model with 3 minimization objectives, namely minimizing the global model test error rate f_1, the global model accuracy distribution variance f_2 and the communication cost f_3; Conv is the number of convolutional layers, kc the number of convolution kernels, ks the convolution kernel size, L the number of fully connected layers, N the number of fully connected layer neurons, η the learning rate and ε the connectivity parameter of the neural network. The number of connections between two fully connected layers is determined by the connectivity parameter ε, and the total number of connections is n = ε(n_k + n_{k-1}), where n_k and n_{k-1} are the numbers of neurons in layer k and layer k-1, respectively.
Further, objective f_1 is the global model test error rate E = 1 - A, where A is the average global model test accuracy

A = (1/K) Σ_{k=1}^{K} a_k

and {a_1, a_2, ..., a_K} are the test accuracies of the individual participants.
Further, objective f_2 is the global model accuracy distribution variance

f_2 = (1/K) Σ_{k=1}^{K} (a_k - A)^2
Further, objective f_3, the average communication cost, can be expressed as

f_3 = (C · K · σ) / K = C · σ

where K is the total number of participants, C is the per-round participation ratio of the participants, and σ is the size of the model parameters.
Further, the federated learning training evaluation of the offspring population is implemented by a FedAvg algorithm based on the static SET, and specifically comprises the following steps:
i is an individual of the population in the FNSGA-III algorithm and P is the population size; after individual i is decoded, the corresponding federated learning neural network hyper-parameters, the connectivity of the neural network and the per-round participation ratio C_i of the participants are obtained;
the connectivity parameter ε_i is used to initialize a static SET topology, which serves as the global model used in the algorithm;
in each round of training, the local data are trained with mini-batch stochastic gradient descent;
after a given number of rounds, the three objectives, namely the global model test error rate, the global model accuracy distribution variance and the communication cost, are computed.
Further, the number of convolutional layers, the number of convolution kernels, the convolution kernel size, the number of fully connected layers, the number of neurons per fully connected layer and the SET parameter ε of the MLP and the CNN are encoded in binary, and the learning rate η and the per-round participation ratio C are encoded as real values.
Further, the learning data is an MNIST data set.
The invention has the following beneficial effects:
the invention provides a FNSGA-III algorithm to solve the problem of a multi-target federal learning model and carry out experimental verification under the condition of unstable communication. Firstly, a three-target template for federal learning is constructed, an optimization target is set to minimize the test error rate of a global model, the communication cost and the accuracy rate distribution variance of the global model, and decision variables are hyper-parameters of a neural network and federal learning parameters. An NSGA-III algorithm is introduced to solve the federal learning multi-target model, the initialization of the NSGA-III is changed, and experimental results show that the improved FNSGA-III algorithm is superior to the original NSGA-III algorithm. And the Pareto optimal solution obtained by the FNSGA-III algorithm optimization is compared with the standard federal average algorithm, so that the accuracy of the global model is effectively improved, and the accuracy distribution variance and the communication cost of the global model are reduced.
Drawings
FIG. 1 is a federated learning training process diagram;
FIG. 2 is a diagram of an example of chromosomal coding of the MLP model of the invention;
FIG. 3 is a diagram of an example of chromosomal coding for a CNN model of the invention;
FIG. 4 is an algorithmic flow chart of the present invention;
FIG. 5 is a comparison of the present invention and the NSGA-III algorithm for MLP on IID data;
FIG. 6 is a comparison of the present invention and the NSGA-III algorithm for MLP on non-IID data;
FIG. 7 is a comparison of the present invention and the NSGA-III algorithm for CNN on IID data;
FIG. 8 is a comparison of the present invention and the NSGA-III algorithm for CNN on non-IID data;
FIG. 9 shows the Pareto optimal solutions of the present invention, NSGA-II and SPEA2.
Detailed Description
The invention is further described with reference to the accompanying drawings, which are not intended to be limiting in any way, and any alterations or substitutions based on the teachings of the invention are intended to fall within the scope of the invention.
To achieve the above purpose, the technical scheme adopted by the invention comprises the steps described below.
in order to make the technical solutions and advantages of the present invention clearer, the present invention is further described below with reference to practical examples. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Examples
Federated learning is a privacy-preserving machine learning technique that allows participants to jointly train a global model without uploading local private data to a server. Assuming K participants whose data are {D_1, D_2, ..., D_K}, traditional centralized learning puts all data together and uses D = D_1 ∪ D_2 ∪ ... ∪ D_K to train the model.
In the federated learning process, the loss function of the k-th participant, which owns dataset D_k, is:

L_k(w) = (1/n_k) Σ_{i∈D_k} l_i(w)        (1)

The federated learning global objective is to minimize the global loss function L(w):

min_w L(w) = Σ_{k=1}^{K} (n_k/n) L_k(w)        (2)

In formulas (1) and (2), k is the participant index, L_k(w) is the loss function of the k-th participant, l_i(w) is the loss on data sample i, n_k is the size of participant k's dataset D_k, n_k = |D_k|, and n is the total size of the data samples of the K participants. The goal of federated learning is to minimize each participant loss function L_k(w) so as to optimize the global loss function L(w). Federated learning is a collaborative process, as shown in FIG. 1.
In each round of training, each participant receives the global model w_t from the server and trains it on local data to obtain an updated local model w_{t+1}^k. The participant then sends the updated local model to the server, and the server aggregates the models according to a given rule to obtain a new global model w_{t+1}, which is used for the next round of iterative training. The subscript t denotes the federated learning communication round.
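As an illustration of one such communication round, the following minimal sketch (in Python with NumPy) shows how a server could form w_{t+1} by weighting each uploaded local model by the participant's sample count n_k, as in formula (2). The helper name local_train is a hypothetical stand-in for the participant-side training step; this is a simplified sketch under those assumptions, not the exact implementation of the invention.

```python
import numpy as np

def fedavg_round(global_w, participant_datasets, local_train):
    """One FedAvg communication round: every selected participant trains the
    current global model on its local data, and the server aggregates the
    returned local models weighted by the local dataset size n_k."""
    local_models, sizes = [], []
    for data_k in participant_datasets:               # each participant's local dataset
        w_k = local_train(np.copy(global_w), data_k)  # local update of the downloaded model
        local_models.append(w_k)
        sizes.append(len(data_k))
    n = float(sum(sizes))
    # weighted aggregation: w_{t+1} = sum_k (n_k / n) * w^k_{t+1}
    return sum((n_k / n) * w_k for w_k, n_k in zip(local_models, sizes))
```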
3rd-generation non-dominated sorting genetic algorithm (NSGA-III)
Many research results exist for multi-objective evolutionary optimization algorithms based on genetic algorithms and Pareto optimal solutions, such as the 2nd-generation non-dominated sorting genetic algorithm (NSGA-II), the decomposition-based multi-objective evolutionary algorithm (MOEA/D), SPEA2 (Strength Pareto Evolutionary Algorithm 2) and PAES (Pareto Archived Evolution Strategy). NSGA-II is a powerful and robust multi-objective evolutionary algorithm suitable for problems with two or three objectives. If the number of objectives is greater than three, a newer evolutionary algorithm such as the reference-point-based 3rd-generation non-dominated sorting genetic algorithm (NSGA-III) can be employed; NSGA-III performs better than NSGA-II on optimization problems with four or more objectives. The invention formulates federated learning as a three-objective optimization problem; to ensure the scalability of the algorithm in the number of objectives (for example, it should remain applicable when the number of federated learning objectives grows to four or more), the invention adopts the NSGA-III algorithm, whose basic steps can be summarized as follows:
step 1 initialize reference points and parent population P t (the population size is N), and carrying out non-dominated sorting, individual normalization and association on the individuals in the population to a reference point.
Step 2 at P t Using selection, crossover and mutation operators to create a parent population P t Same-sized offspring population Q t
Step 3 mixing of P t And Q t To form a new population R t Wherein R is t The population size was 2N. The consolidated population is sorted non-dominantly and divided into different non-dominating solution sets (F) 1 ,F 2 ,...,F s ) And performing individual normalizationAnd transforming and associating to a reference point.
Step 4 from the sorted population R t N solutions are selected to generate a next generation parent population P t+1 . And if the number of the selected non-dominant front edge solutions is greater than N, selecting the solution by adopting a reference point-based selection method in the last layer of non-dominant solution set to be selected.
And 5, turning to the step 2, repeating the whole process until a preset stopping condition is met, and outputting a Pareto optimal solution.
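The non-dominated sorting used in Steps 1 and 3 can be sketched as follows. This is a generic textbook-style fast non-dominated sort for minimization objectives, not code taken from the patent:

```python
def dominates(a, b):
    """a dominates b: no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def fast_non_dominated_sort(objs):
    """objs: list of objective vectors (f1, f2, f3) to minimize.
    Returns a list of fronts, each a list of indices (the first front is the Pareto front)."""
    n = len(objs)
    dominated_by = [[] for _ in range(n)]   # solutions dominated by i
    counts = [0] * n                        # number of solutions that dominate i
    fronts = [[]]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if dominates(objs[i], objs[j]):
                dominated_by[i].append(j)
            elif dominates(objs[j], objs[i]):
                counts[i] += 1
        if counts[i] == 0:
            fronts[0].append(i)
    k = 0
    while fronts[k]:
        nxt = []
        for i in fronts[k]:
            for j in dominated_by[i]:
                counts[j] -= 1
                if counts[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
        k += 1
    return fronts[:-1]                      # drop the trailing empty front
```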
The invention constructs a three-objective optimization model of federated learning on the basis of a typical multi-objective optimization model, and describes its objectives, decision variables and variable encodings. The three-objective optimization model for federated learning is as follows:

min F(v) = (f_1(v), f_2(v), f_3(v)),   v = (Conv, kc, ks, L, N, ε, η, C)
(1) an objective function:
F(v) is the objective function of the model, with 3 minimization objectives: minimizing the global model test error rate f_1, the global model accuracy distribution variance f_2, and the communication cost f_3.
In the invention, the evaluation of the three federated learning objectives proceeds as follows: a certain number of communication rounds are trained using the FedAvg algorithm combined with the SET algorithm, the trained global model w is tested, and the accuracy {a_1, a_2, ..., a_K} of each participant is obtained. The average global model test accuracy is computed as

A = (1/K) Σ_{k=1}^{K} a_k

from which objective f_1, the global model test error rate E = 1 - A, can be calculated.
Objective f_2 is the global model accuracy distribution variance

f_2 = (1/K) Σ_{k=1}^{K} (a_k - A)^2

The variance can be regarded as a measure of fairness. Taking fairness as an optimization objective avoids, as far as possible, the situation in which the average accuracy is high but the accuracy of an individual participant is not guaranteed. By reducing the variance of the global model accuracy distribution among the participants, the accuracy distribution of the aggregated participants becomes more uniform and fair.
Objective f_3 is the average communication cost of the participants. Assuming that the communication cost of each participant depends only on the size of the model parameters it transmits, and since the neural network structure used in the invention does not change during the federated learning training process, the model parameter size σ is the same for every participant and remains constant, so objective f_3 can be expressed as

f_3 = (C · K · σ) / K = C · σ

where K is the total number of participants, C is the per-round participation ratio of the participants, and σ is the size of the model parameters.
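Assuming the per-participant test accuracies a_1, ..., a_K have already been obtained from the trained global model, the three objectives can be computed as in the following sketch. It uses the definitions above, including the reconstructed form f_3 = C · σ; the function and argument names are illustrative assumptions:

```python
import numpy as np

def federated_objectives(accuracies, participation_ratio, model_size):
    """Compute (f1, f2, f3) for one candidate configuration.
    accuracies: per-participant test accuracies {a_1, ..., a_K}
    participation_ratio: per-round participation ratio C
    model_size: number of transmitted model parameters sigma."""
    a = np.asarray(accuracies, dtype=float)
    A = a.mean()                           # average global model test accuracy
    f1 = 1.0 - A                           # global model test error rate E = 1 - A
    f2 = ((a - A) ** 2).mean()             # accuracy distribution variance across participants
    f3 = participation_ratio * model_size  # average communication cost C * sigma
    return f1, f2, f3
```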
(2) Model decision variables and their constraints
Since federated learning is a process of collaboratively training machine learning models, in the invention the decision variables and constraints are the parameters to be optimized and their ranges, denoted by v. The parameters to be optimized comprise three parts: the federated learning neural network hyper-parameters, the neural network connectivity parameter ε, and the per-round participation ratio C.
The neural networks chosen are a multilayer perceptron (MLP) and a convolutional neural network (CNN). The hyper-parameters under the MLP model comprise the number of hidden layers L, the number of neurons per hidden layer N and the learning rate η; the CNN hyper-parameters comprise the number of convolutional layers Conv, the number of convolution kernels kc, the convolution kernel size ks, the number of fully connected layers L, the number of fully connected layer neurons N and the learning rate η. That is, v = (Conv, kc, ks, L, N, ε, η, C); the value ranges of the variables are given in the experimental section.
The connectivity parameter ε of the neural network follows the static SET variant of the SET algorithm proposed by Mocanu: an ER random graph is used to initialize a sparse weight matrix between two fully connected layers, and the topology of the network is then kept fixed. The number of connections between two layers is determined by the parameter ε, and the total number of connections is n = ε(n_k + n_{k-1}), where n_k and n_{k-1} are the numbers of neurons in layer k and layer k-1, respectively. In the invention, the static SET algorithm is applied to the fully connected layers of the MLP and the CNN.
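A minimal sketch of initializing such a static sparse topology between two fully connected layers is given below. It draws roughly n = ε(n_k + n_{k-1}) random connections and then keeps the mask fixed, in the spirit of the static SET scheme described above; the function name and the way the mask is applied are illustrative assumptions:

```python
import numpy as np

def static_set_mask(n_prev, n_curr, epsilon, rng=None):
    """Build a fixed sparse connectivity mask between two fully connected layers.
    The number of connections is epsilon * (n_prev + n_curr), capped at the dense size."""
    rng = np.random.default_rng() if rng is None else rng
    n_connections = min(int(epsilon * (n_prev + n_curr)), n_prev * n_curr)
    mask = np.zeros((n_prev, n_curr), dtype=bool)
    idx = rng.choice(n_prev * n_curr, size=n_connections, replace=False)
    mask.flat[idx] = True   # ER-style random sparse pattern, kept fixed during training
    return mask

# applying the mask keeps only the sparse connections of a dense weight matrix, e.g.
# weights = weights * static_set_mask(784, 200, epsilon=20)
```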
(3) Decision variable coding
The FNSGA-III algorithm is adopted to optimize the neural network hyper-parameters, the connectivity parameter ε and the per-round participation ratio C of federated learning. Chromosomes are the objects of the genetic operations; in the invention both integer-type and real-type decision variables need to be encoded, where all integers are encoded in binary and real numbers are encoded as real values. The number of convolutional layers, the number of convolution kernels, the convolution kernel size, the number of fully connected layers, the number of neurons per fully connected layer and the SET parameter ε of the MLP and CNN are encoded in binary, and the learning rate η and the per-round participation ratio C are encoded as real values. Encoding examples for the MLP and the CNN are shown in FIG. 2 and FIG. 3.
In the decoding process, 1 is automatically added when decoding a binary value; for example, 000000 decodes to 1, and in the MLP case N_1 encoded as 000111 decodes to N_1 = 8. For convenience, the convolution kernel size in the CNN is chosen only between 3 and 5 and the convolution output size is kept constant, so in the CNN neural network structure only one pooling layer is added at the end.
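The decoding rule described above (binary fields decode to value + 1, real-valued genes are taken directly) can be sketched as follows. The field widths match the ranges in Table 1 (2 bits for L, 8 bits for N, 7 bits for ε), but the exact chromosome layout of FIGS. 2 and 3 is an illustrative assumption here:

```python
def decode_binary(bits):
    """Decode a binary gene, adding 1 so that 000000 -> 1 and 000111 -> 8."""
    return int("".join(str(b) for b in bits), 2) + 1

def decode_mlp_chromosome(binary_part, real_part):
    """Illustrative MLP decoding: [2-bit L][8-bit N][7-bit epsilon] plus [eta, C]."""
    num_hidden = decode_binary(binary_part[0:2])    # number of hidden layers L (1-4)
    neurons = decode_binary(binary_part[2:10])      # neurons per hidden layer N (1-256)
    epsilon = decode_binary(binary_part[10:17])     # SET connectivity parameter (1-128)
    eta, C = real_part                              # learning rate and participation ratio
    return {"L": num_hidden, "N": neurons, "epsilon": epsilon, "eta": eta, "C": C}

# example from the text: 000111 decodes to 8
assert decode_binary([0, 0, 0, 1, 1, 1]) == 8
```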
To accelerate the search, the invention replaces the random initialization in NSGA-III with a fast greedy initialization suited to federated learning; the improved NSGA-III algorithm is denoted FNSGA-III. The fast greedy initialization process is briefly described as follows (a sketch is given after the list):
(1) Randomly generate l times the population size of initial solutions;
(2) After all participants are randomly divided into groups of the same size, the federated learning training evaluation of the initial solutions is performed simultaneously within each group. The number of participants per federated learning round, the local training rounds and the global communication rounds are reduced, so the three objectives after federated learning training can be obtained quickly and the evaluation of all initial solutions is completed;
(3) Then, for each of the three objectives, select one population size of the best solutions;
(4) After mixing the selected solutions and removing duplicates, randomly select the specified population size of solutions from them.
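A compact sketch of this initialization is given below. The helpers random_solution and cheap_evaluate are hypothetical stand-ins for, respectively, random chromosome generation and the reduced-round federated evaluation described in step (2); the factor l = 3 is an illustrative default:

```python
import random

def fast_greedy_init(random_solution, cheap_evaluate, pop_size, l=3, seed=None):
    """Fast greedy initialization for FNSGA-III (illustrative sketch).
    1) generate l * pop_size random solutions,
    2) evaluate them with a cheap, reduced-round federated learning run,
    3) keep the pop_size best solutions for each of the three objectives,
    4) deduplicate the union and sample pop_size solutions from it."""
    rng = random.Random(seed)
    candidates = [random_solution(rng) for _ in range(l * pop_size)]
    scored = [(sol, cheap_evaluate(sol)) for sol in candidates]   # (solution, (f1, f2, f3))
    pool = []
    for obj in range(3):
        best = sorted(scored, key=lambda s: s[1][obj])[:pop_size]
        pool.extend(sol for sol, _ in best)
    unique = list({repr(sol): sol for sol in pool}.values())      # remove duplicate solutions
    rng.shuffle(unique)
    return unique[:pop_size]
```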
FNSGA-III algorithm flow
For the three-objective optimization model of federated learning, the FNSGA-III algorithm is used to obtain a set of Pareto optimal solutions; the flow chart of the algorithm is shown in FIG. 4.
FNSGA-III first generates an initial population of size N, i.e. the first-generation parent population, using fast greedy initialization, and encodes the corresponding variables into binary and real-valued chromosomes. In the iteration process, binary tournament selection is used to choose two parent individuals that produce two offspring individuals; the crossover and mutation operators are single-point crossover and bit-flip mutation on the binary chromosome, and simulated binary crossover (SBX) and polynomial mutation on the real-valued chromosome. This process is repeated until N offspring individuals are generated.
Federated learning training evaluation is then performed on the offspring population, and the three objectives of each individual are computed. The parent and offspring populations are mixed, non-dominated sorting is performed on the mixed population, and N individuals are selected from it as the parent population of the next generation. These steps are repeated until the iteration stopping condition is satisfied. Finally a set of Pareto optimal solutions is obtained and analysed in depth.
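The variation step on the binary chromosome can be sketched as follows, using the crossover probability 0.9 and mutation probability 0.1 mentioned in the experiments (whether the mutation rate applies per bit or per chromosome is an assumption here); SBX and polynomial mutation on the real-valued chromosome follow their standard definitions and are omitted for brevity:

```python
import random

def single_point_crossover(p1, p2, pc=0.9, rng=random):
    """Single-point crossover on two equal-length binary parent chromosomes."""
    if rng.random() < pc and len(p1) > 1:
        cut = rng.randint(1, len(p1) - 1)
        return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
    return p1[:], p2[:]

def bit_flip_mutation(bits, pm=0.1, rng=random):
    """Flip each bit of a binary chromosome independently with probability pm."""
    return [1 - b if rng.random() < pm else b for b in bits]

# two offspring from binary-tournament-selected parents (selection itself omitted)
c1, c2 = single_point_crossover([0, 1, 1, 0, 1], [1, 0, 0, 1, 0])
c1, c2 = bit_flip_mutation(c1), bit_flip_mutation(c2)
```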
The specific federated learning evaluation process combines the static SET algorithm with the FedAvg algorithm. The pseudo-code of the federated learning evaluation procedure under the FNSGA-III algorithm is shown as Algorithm 1.
In Algorithm 1, i is an individual of the population in the FNSGA-III algorithm and P is the population size. Decoding individual i yields the corresponding federated learning neural network hyper-parameters, the neural network connectivity and the per-round participation ratio C_i of the participants. The connectivity parameter ε_i is first used to initialize a static SET topology, which serves as the global model used in the algorithm. In each round of training, the local data are trained with mini-batch stochastic gradient descent (mini-batch SGD). After a given number of rounds, the global model test error rate, the global model accuracy distribution variance and the communication cost are computed.
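Since Algorithm 1 itself is reproduced only as a figure in the original publication, the following sketch outlines its structure in Python. All names in the helpers dictionary (decode, build_sparse_model, local_train, aggregate, test_accuracy, model_size) are hypothetical stand-ins for the steps described above, and the reduced round count reflects the evaluation setting; this is an assumed reconstruction, not the patent's code:

```python
def evaluate_individual(individual, participants, helpers, rounds=5):
    """Federated learning evaluation of one FNSGA-III individual (sketch of Algorithm 1).
    helpers: dict of hypothetical functions described in the lead-in."""
    cfg = helpers["decode"](individual)                 # hyper-parameters, epsilon_i, C_i
    model = helpers["build_sparse_model"](cfg)          # static SET topology from epsilon_i
    K = len(participants)
    m = max(1, int(cfg["C"] * K))                       # participants selected per round
    for _ in range(rounds):
        selected = participants[:m]                     # random sampling in practice
        locals_ = [helpers["local_train"](model, d, cfg["eta"]) for d in selected]
        model = helpers["aggregate"](locals_, selected) # FedAvg-style weighted averaging
    accs = [helpers["test_accuracy"](model, d) for d in participants]
    A = sum(accs) / K
    f1 = 1.0 - A                                        # global model test error rate
    f2 = sum((a - A) ** 2 for a in accs) / K            # accuracy distribution variance
    f3 = cfg["C"] * helpers["model_size"](model)        # communication cost C * sigma
    return f1, f2, f3
```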
This section introduces the experimental setup of the invention, which mainly comprises the following parts: (1) the experimental environment and dataset; (2) the neural network parameters and sparse connectivity parameters used in the experiments; (3) the federated learning parameters and the data partitioning scheme; (4) the FNSGA-III parameters.
The experimental environment is an Ubuntu system with an Intel(R) Core(TM) i9-9900KF CPU @ 3.60 GHz × 16. Each experiment is trained and tested on the MNIST dataset, which consists of 28 × 28-pixel handwritten digit images, with 60000 training images and 10000 test images.
MLP and CNN are selected as the neural network models for federated learning training, and the standard MLP and CNN parameters of the invention are set empirically. The MLP has 2 hidden layers with 200 neurons per layer (199210 parameters in total), using the ReLU function as the activation function. The CNN model has two 5 × 5 convolutional layers (the first with 32 channels, the second with 64 channels), followed by a 2 × 2 max-pooling layer, a fully connected layer of 128 neurons using the ReLU activation function, and finally a 10-class softmax output layer (1659146 parameters in total). In the MLP and the CNN, the learning rate of the mini-batch SGD algorithm is η = 0.05 and the batch size is B = 10. The static SET algorithm is used in the fully connected layers of the MLP and the CNN, and the network sparsity parameter is set to ε = 20. The above parameter settings serve as the standard neural network structures in the experiments of the invention.
In federated learning, the total number of participants is set to K = 100 and the participant participation ratio to C = 1, i.e. 100 × 1 participants per communication round. For local model training at each participant, the number of iteration epochs is set to 5. Since the size and distribution of data usually differ between participants, the following two realistic scenarios are studied. In the first, independent identically distributed (IID) case, the MNIST data are shuffled so that each of the 100 participants holds 600 samples. In the second, non-independent identically distributed (non-IID) case, the samples are first sorted by digit label and divided into 200 shards of 300 samples each, and each of the 100 participants is assigned two shards, so that every participant holds at most two digit labels and the same number of samples. Because the invention assumes that the federated learning communication environment is unstable, model parameter transmission may be lost; the loss rate Drop is set to 30%.
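The non-IID partition described above (sort by label, split into 200 shards of 300 samples, give each participant two shards) can be sketched as follows for the MNIST training set; the function and variable names are illustrative:

```python
import numpy as np

def partition_mnist_non_iid(labels, num_clients=100, shards_per_client=2, rng=None):
    """Label-sorted shard partition: 60000 samples -> 200 shards of 300 samples each,
    and each of the 100 participants receives two shards (so at most two digit labels)."""
    rng = np.random.default_rng() if rng is None else rng
    order = np.argsort(labels)                       # sort sample indices by digit label
    num_shards = num_clients * shards_per_client     # 200 shards
    shards = np.array_split(order, num_shards)       # 300 samples per shard
    shard_ids = rng.permutation(num_shards)
    return [np.concatenate([shards[s] for s in
            shard_ids[c * shards_per_client:(c + 1) * shards_per_client]])
            for c in range(num_clients)]
```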
Next, the FNSGA-III parameters are set: the population size is 20 and 20 iterations are performed. The selection operator is a binary tournament; single-point crossover with probability 0.9 and bit-flip mutation with probability 0.1 are applied to the binary chromosome, and simulated binary crossover with probability 0.9 and n_c = 2 together with polynomial mutation with probability 0.1 and n_m = 20 are applied to the real-valued chromosome.
The FNSGA-III algorithm is used to optimize the relevant parameters of federated learning so as to achieve a balance among the global model test error rate, the global model accuracy distribution variance and the communication cost. First, a comparison experiment between the FNSGA-III algorithm and the NSGA-III algorithm is carried out to explore the effectiveness of the proposed method. The parameter settings of the multi-objective evolutionary algorithm for the experimental analysis are shown in Table 1.
TABLE 1. Parameter settings of the multi-objective federated learning evolutionary algorithm
Parameter | MLP | CNN
Population size | 20 | 20
Maximum number of iterations | 20 | 20
Federated learning participation ratio | 0.1-1 | 0.1-1
Learning rate | 0.01-0.2 | 0.01-0.2
SET parameter | 1-128 | 1-128
Number of MLP hidden layers | 1-4 | /
Number of neurons per MLP hidden layer | 1-256 | /
Number of CNN convolutional layers | / | 1-3
Number of CNN convolution kernel channels | / | 1-64
CNN convolution kernel size | / | 3 or 5
Number of CNN fully connected layers | / | 1-3
Number of CNN fully connected layer neurons | / | 1-256
The population size is set to 20, the number of population iterations to 20, and the number of communication rounds in the federated learning evaluation of each individual to 5. The value range of the participation ratio parameter C is set to 0.1 to 1 so that some participants always take part in training, and the learning rate ranges from 0.01 to 0.2, since too high a learning rate can harm convergence.
In the neural network parameter settings, the maximum number of MLP hidden layers is 4 and the maximum number of neurons per layer is 256. For the CNN, the maximum number of convolutional layers is 3, the maximum number of kernel channels is 64, the maximum number of fully connected layers is 3, the maximum number of fully-connected-layer neurons is 256, and the convolution kernel size is 3 or 5. The maximum value of the network sparsity parameter is set to 128.
The final Pareto solutions of MLP and CNN evolved with the FNSGA-III algorithm and the NSGA-III algorithm on IID and non-IID data are shown in FIGS. 5-8, where each point represents a solution corresponding to a specific structural parameter setting of federated learning. Light-coloured points denote the Pareto optimal solutions obtained with the FNSGA-III algorithm of the invention, and dark-coloured points denote the Pareto optimal solutions obtained with the randomly initialized NSGA-III algorithm.
The number of Pareto solutions obtained by FNSGA-III is more stable than under NSGA-III; NSGA-III yields fewer Pareto solutions under CNN, for example only 3 under CNN IID. The light-coloured solutions in FIGS. 5-8 dominate the dark-coloured solutions, i.e. the Pareto solutions obtained by the invention are superior to those of the NSGA-III algorithm, except that one dark-coloured solution dominates a light-coloured solution under CNN IID, where the number of solutions is very small. The dominance of the FNSGA-III algorithm is more pronounced under IID, where the distance between the FNSGA-III and NSGA-III solutions is larger, while under non-IID the distance between the solutions of the two algorithms is smaller. Meanwhile, FNSGA-III is found to converge more towards the knee points, where every objective value is small and the solution quality is higher; because the Pareto solutions of FNSGA-III concentrate at the knee points, the uniformity of the FNSGA-III Pareto solutions under MLP is inferior to that of NSGA-III.
In addition, Table 2 shows the evaluation indicators of the Pareto solutions obtained by FNSGA-III and NSGA-III, analysing the Pareto solutions of both algorithms from multiple dimensions.
The single-objective minimum reflects the extreme value of each objective function and thus the optimization capability of the algorithm. As shown in Table 2, the minimum global model test error rate, minimum variance and minimum communication cost obtained by FNSGA-III for single objectives are substantially smaller than those of NSGA-III; for example, the minimum global model test error rate of FNSGA-III is lower than that of NSGA-III, with a significant gap of 4.89% under CNN non-IID.
The number of Pareto non-dominated solutions obtained by the FNSGA-III algorithm is stable and, except under MLP IID, is larger than that of NSGA-III. On average, both algorithms obtain more solutions under MLP than under CNN, and the number of NSGA-III solutions under MLP is markedly larger than under CNN. In terms of the number of solutions, the FNSGA-III algorithm is more robust than the NSGA-III algorithm.
The comprehensive hypervolume indicator (HV) computes the total hypervolume of the hypercubes formed by all non-dominated solutions and a reference point; it is a comprehensive indicator for evaluating Pareto solutions, and in general a larger HV value indicates a better-quality Pareto solution set. As shown in Table 2, the HV value of the FNSGA-III algorithm is always better than that of NSGA-III, showing better quality.
TABLE 2. Indicators of the FNSGA-III algorithm and the NSGA-III algorithm
(The contents of Table 2 are provided as an image in the original publication.)
The coverage C(A, B) is the proportion of solutions in solution set B that are dominated by at least one solution in A; it measures the relation between two solution sets, and a larger C indicates that solution set A is of better quality than solution set B. In Table 2, F in C(F, N) denotes the FNSGA-III solutions and N the NSGA-III solutions. The C(F, N) metric is generally greater than C(N, F); under CNN IID, C(F, N) = 50% is smaller than C(N, F) = 53%, but the difference is small. Overall, from the coverage C point of view, the solutions of FNSGA-III are superior to those of the NSGA-III algorithm.
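The coverage indicator can be computed directly from its definition, as in the following sketch (objective vectors are to be minimized; variable names are illustrative):

```python
def dominates(a, b):
    """a dominates b: no worse in all objectives and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def coverage(A, B):
    """C(A, B): fraction of solutions in set B dominated by at least one solution in A."""
    covered = sum(1 for b in B if any(dominates(a, b) for a in A))
    return covered / len(B)

# e.g. coverage(fnsga3_front, nsga3_front) corresponds to C(F, N) in Table 2
```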
In terms of running time, CNN takes longer than MLP, non-IID takes longer than IID, and NSGA-III takes longer than FNSGA-III; statistically, the time performance of FNSGA-III is the best. Based on the analyses of the single-objective minima, the number of non-dominated solutions, the HV indicator, the C indicator and the time performance, the Pareto optimal solutions obtained by the proposed FNSGA-III algorithm are of better quality than the solutions of the NSGA-III algorithm.
In addition, the proposed FNSGA-III algorithm is compared with the NSGA-II and SPEA2 evolutionary multi-objective methods, and the quality of the Pareto optimal solutions it obtains is evaluated from multiple aspects, including the comprehensive HV indicator, the number of Pareto solutions, the coverage C, the running time and the single-objective optimal solutions. For this comparison of multiple evolutionary algorithms, the MLP non-IID setting, where the Pareto fronts of FNSGA-III and NSGA-III are most similar in the experiments above, is selected. The experimental results are shown in Table 3 and FIG. 9.
TABLE 3. Indicators of the FNSGA-III algorithm and other evolutionary algorithms in the MLP non-IID experiment
(The contents of Table 3 are provided as an image in the original publication.)
Briefly analysing Table 3, the Pareto solutions of FNSGA-III concentrate more around the knee points and dominate the Pareto solutions of NSGA-II and SPEA2, i.e. all three objectives are better than under the NSGA-II and SPEA2 algorithms. The HV value, the number of Pareto solutions, the coverage and the minima under the three objectives of FNSGA-III all outperform the corresponding indicators of NSGA-II and SPEA2. In terms of time, SPEA2 has the shortest running time, but the running time of FNSGA-III is comparable to that of SPEA2. In conclusion, the FNSGA-III algorithm, which replaces random initialization with fast greedy initialization, can improve running efficiency, is essentially superior to the NSGA-III, NSGA-II and SPEA2 evolutionary algorithms, and obtains Pareto solutions of higher quality.
Because very few communication rounds are set during the federated learning evaluation process of FNSGA-III, its federated learning performance has not yet been fully explored. Owing to limited computing resources, only MLP non-IID, which has the worst accuracy, is selected for the extended experiments. From the Pareto optimal solutions of MLP non-IID obtained by the FNSGA-III algorithm, 4 solutions are selected: 2 solutions with small global test error rates and 2 knee-point solutions. The 4 solutions are trained with federated learning and compared with the standard FedAvg algorithm, with the number of communication rounds set to 150. In addition to increasing the communication rounds, each solution is validated under both IID and non-IID, to investigate whether a non-dominated solution obtained on the IID dataset is still effective on the non-IID dataset, and vice versa. All validation results are listed in Table 4.
TABLE 4. Experimental data of the MLP non-IID solutions obtained by the FNSGA-III algorithm
Parameter | Solution 1 | Solution 2 | Solution 3 | Solution 4 | Standard FedAvg
Participation ratio C | 0.62 | 0.4733 | 0.2812 | 0.7534 | 1
Learning rate η | 0.1125 | 0.1144 | 0.085 | 0.0949 | 0.05
SET parameter ε | 95 | 31 | 31 | 3 | /
Number of neurons per MLP hidden layer | 173 | 136 | 197 | 208 | [200, 200]
Communication cost | 57553 | 14330 | 9163 | 3972 | 199210
Test accuracy under non-IID (%) | 96.70 | 96.08 | 94.81 | 91.60 | 95.69
Test accuracy distribution variance under non-IID | 4.71 | 5.79 | 12.78 | 22.50 | 9.86
Test accuracy under IID (%) | 97.98 | 97.58 | 97.45 | 95.39 | 97.35
Test accuracy distribution variance under IID | 1.67 | 2.12 | 2.18 | 3.94 | 2.24
From the results in Table 4, the following can be observed for the four selected Pareto solutions of MLP non-IID. Among the results of the 4 solutions under non-IID data distribution, solution 4 has a sparsity parameter of 3 and its accuracy curve iterates stably but is clearly lower than in the other cases, which may mean that an overly sparse neural network damages model accuracy. Solutions 1 and 2 achieve better communication cost, accuracy and variance than standard federated learning, and their iteration curves are stable, so the Pareto solutions obtained by the proposed FNSGA-III algorithm contain high-quality solutions. In the experimental results under IID data distribution, only the iteration curve of solution 4 is inferior to standard federated learning. Therefore, the solutions obtained under MLP non-IID are effective under non-IID and still perform well when extended to IID.
The invention has the following beneficial effects:
the invention provides an FNSGA-III algorithm to solve the problem of a multi-target federal learning model and carry out experimental verification under unstable communication. Firstly, a three-target template for federal learning is constructed, optimization targets are set to minimize the test error rate of a global model, communication cost and the accuracy rate distribution variance of the global model, and decision variables are hyper-parameters of a neural network and federal learning parameters. An NSGA-III algorithm is introduced to solve the federal learning multi-target model, the initialization of the NSGA-III is changed, and experimental results show that the improved FNSGA-III algorithm is superior to the original NSGA-III algorithm. And the Pareto optimal solution obtained by using the FNSGA-III algorithm for optimization is compared with a benchmark federal average algorithm, so that the accuracy of the global model is effectively improved, and the accuracy distribution variance and the communication cost of the global model are reduced.
The word "preferred" is used herein to mean serving as an example, instance, or illustration. Any aspect or design described herein as "preferred" is not necessarily to be construed as advantageous over other aspects or designs. Rather, use of the word "preferred" is intended to present concepts in a concrete fashion. The term "or" as used in this application is intended to mean an inclusive "or" rather than an exclusive "or". That is, unless specified otherwise or clear from context, "X employs A or B" is intended to include either of the permutations as a matter of course. That is, if X employs A; b is used as X; or X employs both A and B, then "X employs A or B" is satisfied under any of the foregoing instances.
Also, although the disclosure has been shown and described with respect to one or more implementations, equivalent alterations and modifications will occur to others skilled in the art based upon a reading and understanding of this specification and the annexed drawings. The present disclosure includes all such modifications and alterations, and is limited only by the scope of the appended claims. In particular regard to the various functions performed by the above described components (e.g., elements, etc.), the terms used to describe such components are intended to correspond, unless otherwise indicated, to any component which performs the specified function of the described component (i.e., that is functionally equivalent), even though not structurally equivalent to the disclosed structure which performs the function in the herein illustrated exemplary implementations of the disclosure. In addition, while a particular feature of the disclosure may have been disclosed with respect to only one of several implementations, such feature may be combined with one or more other features of the other implementations as may be desired and advantageous for a given or particular application. Furthermore, to the extent that the terms "includes", "has", "contains", or variants thereof are used in either the detailed description or the claims, such terms are intended to be inclusive in a manner similar to the term "comprising".
Each functional unit in the embodiments of the present invention may be integrated into one processing module, or each unit may exist alone physically, or two or more units may be integrated into one module. The integrated module can be realized in the form of hardware or in the form of a software functional module. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium. The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disk, or the like. Each apparatus or system described above may execute the storage method in the corresponding method embodiment.
In summary, the above-described embodiment is only one implementation of the present invention, but the implementation of the present invention is not limited thereto; any other changes, modifications, substitutions, combinations, and simplifications that do not depart from the spirit and principle of the present invention shall be regarded as equivalent replacements and fall within the protection scope of the present invention.

Claims (10)

1. The multi-target federated learning evolution method based on the improved NSGA-III is applied to a server and a plurality of participants, and is characterized by comprising the following steps:
acquiring learning data, wherein the learning data is the data to be labeled;
constructing a multi-objective optimization model for federated learning, wherein the multi-objective optimization model comprises three objectives: maximizing the accuracy of the global model, minimizing the distribution variance of the global model accuracy, and minimizing the communication cost;
performing fast greedy initialization of the initial population P_0, and performing multi-objective evaluation using the FedAvg algorithm;
performing non-dominated sorting on P_t;
performing iterations, wherein each iteration performs the following operations: applying selection, crossover and mutation operators to produce the offspring population Q_t; carrying out federated learning training evaluation on the offspring population and calculating the three objectives of each individual; mixing the parent and offspring populations into R_t = Q_t + P_t; performing non-dominated sorting on R_t and selecting P_{t+1} by reference points;
Finding out the Pareto optimal solution, and outputting a labeling result corresponding to the Pareto optimal solution.
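The loop recited in claim 1 can be read as the following rough sketch (Python; the callables are hypothetical stand-ins for the operators named in the claim, namely fast greedy initialization, FedAvg-based multi-objective evaluation, non-dominated sorting, selection/crossover/mutation, and reference-point selection):

```python
def fnsga3(pop_size, generations, init, evaluate, sort, breed, select):
    """Sketch of the claimed evolution loop; not the filed implementation."""
    P = init(pop_size)                        # fast greedy initialization of P0
    F = {id(i): evaluate(i) for i in P}       # three objectives per individual (FedAvg runs)
    fronts = sort(P, F)                       # non-dominated sorting of the initial population
    for _ in range(generations):
        Q = breed(P, fronts)                  # selection, crossover, mutation -> offspring Qt
        F.update({id(i): evaluate(i) for i in Q})
        R = P + Q                             # Rt = Qt + Pt
        fronts = sort(R, F)                   # non-dominated sorting of Rt
        P = select(R, fronts, F, pop_size)    # reference-point selection of P(t+1)
    return sort(P, F)[0]                      # first front approximates the Pareto-optimal set
```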
2. The multi-objective federated learning evolution method based on the improved NSGA-III as claimed in claim 1, characterized in that, in the federated learning evolution process, the loss function of the kth participant owning data set D_k is:
$$L_k(w) = \frac{1}{n_k}\sum_{i \in D_k} l_i(w)$$
the global goal of the federated learning evolution method is to minimize the global loss function L(w) as follows:
$$L(w) = \sum_{k=1}^{K} \frac{n_k}{n}\, L_k(w)$$
where k is the participant index, L_k(w) is the loss function of the kth participant, l_i(w) is the loss function on data sample i, n_k is the size of participant k's data set D_k, i.e. n_k = |D_k|, and n is the total size of the data samples of the K participants.
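As a minimal numerical illustration of this weighted formulation (Python; the per-participant losses are assumed to have been computed already):

```python
def global_loss(client_losses, client_sizes):
    """L(w) = sum_k (n_k / n) * L_k(w): data-size-weighted average of local losses."""
    n = sum(client_sizes)
    return sum(nk / n * lk for lk, nk in zip(client_losses, client_sizes))

# Two participants with 600 and 400 samples and local losses 0.30 and 0.50:
# global_loss([0.30, 0.50], [600, 400]) == 0.38
```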
3. The multi-objective NSGA-III based federated learning evolution method of claim 1, wherein each participant receives the global model w_t from the server during each round of the federated learning process and trains the global model on its local data to obtain an updated local model $w_{t+1}^{k}$; the participant then sends the updated local model to the server, and the server aggregates the models according to a certain rule to obtain a new global model w_{t+1} for the next round of iterative training, where the subscript t denotes the federated learning communication round.
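One such communication round can be sketched as follows (Python/NumPy; `local_train` is a hypothetical stand-in for the participant-side training step, and model parameters are assumed to be NumPy arrays):

```python
import numpy as np

def fedavg_round(w_t, client_datasets, local_train):
    """One round: each participant trains the received global model w_t on its
    local data; the server aggregates the returned local models weighted by
    local data-set size to obtain the new global model w_{t+1}."""
    sizes, local_models = [], []
    for data in client_datasets:
        local_models.append(local_train(w_t.copy(), data))  # updated local model w_{t+1}^k
        sizes.append(len(data))
    n = float(sum(sizes))
    return sum((nk / n) * wk for nk, wk in zip(sizes, local_models))
```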
4. The multi-objective federated learning evolution method based on modified NSGA-III as claimed in claim 1, wherein the three-objective optimization model of federated learning evolution is:
$$\min F(v) = \big(f_1(v),\, f_2(v),\, f_3(v)\big)$$
wherein F(v) is the objective function of the three-objective optimization model with 3 minimization objectives: the global model test error rate f_1, the global model accuracy distribution variance f_2, and the communication cost f_3; conv is the number of convolutional layers, kc is the number of convolution kernels, ks is the size of the convolution kernels, L is the number of fully-connected layers, N is the number of fully-connected layer neurons, η is the learning rate, and ε is the connectivity parameter of the neural network; the number of connections between two fully-connected layers is determined by the connectivity parameter ε, the total number of connections being $\varepsilon\,(n_k + n_{k-1})$, where n_k and n_{k-1} are the numbers of neurons in layer k and layer k−1, respectively.
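For the sparse fully-connected layers, the effect of the connectivity parameter can be checked with a small helper (an assumed reading of the connection-count expression above):

```python
def set_connections(epsilon, n_k, n_k_minus_1):
    """Connections between two adjacent fully-connected layers: epsilon * (n_k + n_{k-1})."""
    return epsilon * (n_k + n_k_minus_1)

# For example, epsilon = 10 between layers of 128 and 64 neurons gives 1920
# connections, versus 128 * 64 = 8192 for the corresponding dense layer.
```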
5. The multi-objective NSGA-III based federated learning evolution method of claim 1, wherein objective f_1 is the global model test error rate E = 1 − A, where A is the average test accuracy of the global model:
$$A = \frac{1}{K}\sum_{k=1}^{K} a_k$$
and {a_1, a_2, ..., a_K} are the test accuracies of the K participants.
6. The multi-objective NSGA-III based federated learning evolution method of claim 5, wherein objective f_2 is the global model accuracy distribution variance:
$$f_2 = \frac{1}{K}\sum_{k=1}^{K}\left(a_k - A\right)^2$$
7. The multi-objective NSGA-III based federated learning evolution method of claim 5, wherein objective f_3 can be expressed as
$$f_3 = K \cdot C \cdot \sigma$$
where K is the total number of participants, C is the per-round participation proportion of participants, and σ is the size of the model parameters.
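Claims 5 to 7 together define the three objectives from the per-participant accuracies and the communication settings; a minimal sketch (Python) is given below, where the expression f3 = K * C * sigma is an assumed reading of the image formula in the original filing:

```python
def objectives(accuracies, C, sigma):
    """f1: global model test error rate, f2: accuracy distribution variance,
    f3: communication cost (assumed here to be K * C * sigma)."""
    K = len(accuracies)
    A = sum(accuracies) / K                            # average test accuracy
    f1 = 1.0 - A                                       # test error rate E = 1 - A
    f2 = sum((a - A) ** 2 for a in accuracies) / K     # accuracy distribution variance
    f3 = K * C * sigma                                 # participants per round x model size
    return f1, f2, f3

# objectives([0.90, 0.80, 0.85, 0.95], C=0.5, sigma=1.2e5)
# -> approximately (0.125, 0.003125, 240000.0)
```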
8. The multi-target federated learning evolution method based on the improved NSGA-III as claimed in claim 1, wherein the process of performing federated learning training evaluation on the offspring population is implemented by a static SET-based FedAvg algorithm, which specifically includes the following steps:
i is an individual of the population in the FNSGA-III algorithm and P is the population size; after individual i is decoded, the related federated learning neural network hyper-parameters, the connectivity of the neural network, and the per-round participant participation ratio C_i are obtained;
initializing a static SET topology using the connectivity parameter ε_i, and taking the static SET topology as the global model used in the algorithm;
in each training round, training on the local data using mini-batch stochastic gradient descent;
after a given number of rounds, calculating the three objectives: the global model test error rate, the global model accuracy distribution variance, and the communication cost.
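The "static SET topology" of claim 8 can be pictured as a fixed sparse binary mask over a dense weight matrix; the sketch below (Python/NumPy) is one assumed realization in which the number of connections controlled by ε_i is placed uniformly at random once and then shared with all participants:

```python
import numpy as np

def static_set_mask(n_in, n_out, epsilon, seed=0):
    """Binary mask with roughly epsilon * (n_in + n_out) active connections,
    sampled once (hence 'static') and reused for the whole federated run."""
    rng = np.random.default_rng(seed)
    n_connections = min(int(epsilon * (n_in + n_out)), n_in * n_out)
    mask = np.zeros(n_in * n_out, dtype=bool)
    mask[rng.choice(n_in * n_out, size=n_connections, replace=False)] = True
    return mask.reshape(n_in, n_out)

# A layer's effective weights are then `weights * mask`, so every participant
# trains the same sparse structure in every communication round.
```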
9. The multi-target federated learning evolution method based on the improved NSGA-III as claimed in claim 1, wherein the number of convolutional layers, the number of convolution kernels, the size of the convolution kernels, the number of fully-connected layers, the number of neurons in each fully-connected layer, and the SET parameter ε of the MLP and CNN are encoded in binary, and the learning rate η and the per-round participation ratio C are encoded as real values.
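The mixed encoding of claim 9 can be illustrated with a toy chromosome; the bit widths and value ranges below are illustrative assumptions only (and only a subset of the encoded genes is shown), not the encoding of the filing:

```python
def decode(bits, eta, C):
    """Decode a hypothetical 12-bit architecture chromosome plus two real-valued
    genes.  Layout (illustrative): 2 bits conv layers, 3 bits kernel count,
    2 bits kernel size, 2 bits fully-connected layers, 3 bits SET epsilon."""
    as_int = lambda b: int("".join(map(str, b)), 2)
    return {
        "conv":    1 + as_int(bits[0:2]),          # 1..4 convolutional layers
        "kernels": 8 * (1 + as_int(bits[2:5])),    # 8..64 convolution kernels
        "ksize":   3 + 2 * as_int(bits[5:7]),      # kernel size 3, 5, 7 or 9
        "fc":      1 + as_int(bits[7:9]),          # 1..4 fully-connected layers
        "epsilon": 1 + as_int(bits[9:12]),         # SET connectivity 1..8
        "eta":     eta,                            # real-valued learning rate
        "C":       C,                              # real-valued participation ratio
    }

# decode([0,1, 0,1,0, 1,0, 0,0, 1,1,1], eta=0.01, C=0.3)
# -> conv=2, kernels=24, ksize=7, fc=1, epsilon=8, eta=0.01, C=0.3
```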
10. The multi-objective NSGA-III based federated learning evolution method of claim 1, wherein the learning data is MNIST data set.
CN202210396629.7A 2022-04-15 2022-04-15 Multi-target federal learning evolution method based on improved NSGA-III Pending CN114819181A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210396629.7A CN114819181A (en) 2022-04-15 2022-04-15 Multi-target federal learning evolution method based on improved NSGA-III

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210396629.7A CN114819181A (en) 2022-04-15 2022-04-15 Multi-target federal learning evolution method based on improved NSGA-III

Publications (1)

Publication Number Publication Date
CN114819181A true CN114819181A (en) 2022-07-29

Family

ID=82537275

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210396629.7A Pending CN114819181A (en) 2022-04-15 2022-04-15 Multi-target federal learning evolution method based on improved NSGA-III

Country Status (1)

Country Link
CN (1) CN114819181A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117689001A (en) * 2024-02-02 2024-03-12 中科方寸知微(南京)科技有限公司 Neural network multi-granularity pruning compression method and system based on zero data search
CN117689001B (en) * 2024-02-02 2024-05-07 中科方寸知微(南京)科技有限公司 Neural network multi-granularity pruning compression method and system based on zero data search

Similar Documents

Publication Publication Date Title
CN101271572B (en) Image segmentation method based on immunity clone selection clustering
CN110544011B (en) Intelligent system combat effectiveness evaluation and optimization method
KR20210040248A (en) Generative structure-property inverse computational co-design of materials
CN113191484A (en) Federal learning client intelligent selection method and system based on deep reinforcement learning
CN112465120A (en) Fast attention neural network architecture searching method based on evolution method
CN110533221A (en) Multipurpose Optimal Method based on production confrontation network
CN113657678A (en) Power grid power data prediction method based on information freshness
Król et al. Investigation of evolutionary optimization methods of TSK fuzzy model for real estate appraisal
CN114819181A (en) Multi-target federal learning evolution method based on improved NSGA-III
CN111832817A (en) Small world echo state network time sequence prediction method based on MCP penalty function
CN115481727A (en) Intention recognition neural network generation and optimization method based on evolutionary computation
CN111709519B (en) Deep learning parallel computing architecture method and super-parameter automatic configuration optimization thereof
CN114004153A (en) Penetration depth prediction method based on multi-source data fusion
CN109697531A (en) A kind of logistics park-hinterland Forecast of Logistics Demand method
CN117093885A (en) Federal learning multi-objective optimization method integrating hierarchical clustering and particle swarm
Chai et al. Correlation Analysis-Based Neural Network Self-Organizing Genetic Evolutionary Algorithm
CN111639797A (en) Gumbel-softmax technology-based combined optimization method
CN114462583A (en) MSP-GEP-Elman algorithm-based real-time short video user portrait prediction method and system
CN113705098A (en) Air duct heater modeling method based on PCA and GA-BP network
Zhan et al. A population prescreening strategy for kriging-assisted evolutionary computation
Dang et al. Hybrid IoT Device Selection With Knowledge Transfer for Federated Learning
Khotimah et al. Adaptive SOMMI (Self Organizing Map Multiple Imputation) base on Variation Weight for Incomplete Data
Zhang et al. Link Value Estimation Based Graph Attention Network for Link Prediction in Complex Networks
Zhang et al. MMPL: Multi-Objective Multi-Party Learning via Diverse Steps
CN118070152A (en) Model-level interpretation graph generation method for graph deep learning model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination