CN117787440A - Internet of Vehicles multi-stage federated learning method for non-IID data - Google Patents


Info

Publication number
CN117787440A
CN117787440A
Authority
CN
China
Prior art keywords
stage
model
local
vehicle
global
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311792112.0A
Other languages
Chinese (zh)
Inventor
唐晓岚
梁煜婷
孙晓琦
郑昊男
陈文龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Capital Normal University
Original Assignee
Capital Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Capital Normal University filed Critical Capital Normal University
Priority to CN202311792112.0A
Publication of CN117787440A
Legal status: Pending

Classifications

  • Traffic Control Systems (AREA)

Abstract

The invention provides a multi-stage federated learning method for the Internet of Vehicles oriented to non-IID (not independent and identically distributed) data. The method comprises: acquiring, for a vehicle participating in federated learning, a global model from a roadside unit as an initial local model; training the initial local model on the vehicle's local data to obtain a one-stage local model, and uploading it to the roadside unit so that the roadside unit aggregates the one-stage local models with the FedAvg algorithm into a one-stage global model; training the one-stage global model on the local data to obtain a two-stage local model, and uploading it to the roadside unit so that the roadside unit aggregates the two-stage local models with a federated weighting algorithm into a two-stage global model; and iteratively training the two-stage global model on the local data to obtain a three-stage local model. For non-IID data, the method effectively improves the model performance of federated learning while protecting vehicle privacy.

Description

Internet of Vehicles multi-stage federated learning method for non-IID data
Technical Field
The invention belongs to the technical field of information processing.
Background
With the development of intelligent connected vehicles, many vehicles are equipped with capable communication and computing devices. At the same time, as demand for edge computing grows rapidly, federated learning has attracted considerable attention in industry. Unlike traditional centralized learning, federated learning does not upload large amounts of raw data to the cloud; instead, each mobile edge device trains a machine learning model locally and uploads only the model to the cloud for global aggregation. Federated learning thus effectively addresses the data-silo problem: it decouples each participant's machine learning capability from the cloud server's need to collect all data, enabling joint modeling across data providers without sharing the data itself, thereby protecting the privacy of user data.
In practice, the data collected by different vehicles for training may vary, among other reasons, because of the devices with which they are collected; such data is non-IID (not independent and identically distributed). Faced with non-IID data, the federated learning aggregation process may need more communication rounds and iteration steps to converge, and data imbalance can degrade model performance to a certain extent: for example, participating vehicles with little data or poor-quality data may yield unsatisfactory trained models. How to improve model convergence efficiency within a limited number of communication rounds and a short time, while guaranteeing user privacy in the presence of non-IID data, is therefore a problem to be solved.
Disclosure of Invention
The present invention aims to solve at least one of the technical problems in the related art to some extent.
Therefore, the invention aims to provide an Internet of Vehicles multi-stage federated learning method for non-IID data, which improves the model performance of federated learning while protecting vehicle privacy.
To achieve the above objective, an embodiment of a first aspect of the present invention provides an Internet of Vehicles multi-stage federated learning method for non-IID data, including:
S101: acquiring, for a vehicle participating in federated learning, a global model from a roadside unit as an initial local model;
S102: training the initial local model according to the local data of the vehicle to obtain a one-stage local model, and uploading the one-stage local model to the roadside unit so that the roadside unit aggregates the one-stage local models according to the FedAvg algorithm to obtain a one-stage global model;
S103: training the one-stage global model according to the local data to obtain a two-stage local model, and uploading the two-stage local model to the roadside unit so that the roadside unit aggregates the two-stage local models according to a federated weighting algorithm to obtain a two-stage global model;
S104: iteratively training the two-stage global model according to the local data to obtain a three-stage local model.
In addition, the Internet of Vehicles multi-stage federated learning method for non-IID data according to the above embodiment of the present invention may further have the following additional technical features:
further, in one embodiment of the present invention, before acquiring a global model of a vehicle participating in federal learning from a roadside unit, the method includes:
selecting a particular vehicle to participate in federal learning, i.e. vehicle v when the vehicle has a residence time in the communication range of the roadside unit at present greater than the total time to complete model download, local training and model upload k Is selected to participate in federal learning, otherwise, does not participate in the federal learning of the present round, and is expressed as follows:
wherein, for vehicles v k Stay time in roadside unit communication range, +.>T-th wheel vehicle v k Transmission time of download model, +.>T-th wheel vehicle v k Time of local training, ++>For the t-th wheel vehicle v k And uploading the transmission time of the model.
Further, in an embodiment of the present invention, the roadside unit aggregating the one-stage local models according to the FedAvg algorithm is expressed as:
ω_t = ω_{t−1} − (η/N) · Σ_{k=1..N} ∇F_k(ω_{t−1})
wherein ω_t is the global model parameter of round t, ω_{t−1} is the global model parameter of round t−1, N is the number of vehicles participating in federated learning, η is the local learning rate, and ∇F_k(ω_{t−1}) is the gradient of the local model loss function of vehicle v_k in round t.
Further, in one embodiment of the present invention, after obtaining the one-stage global model, the method further includes:
and (3) taking the one-stage global model as a new one-stage local model, and iterating S102 until the one-stage global model converges.
Further, in an embodiment of the present invention, the roadside unit aggregating the two-stage local models according to a federated weighting algorithm is expressed as:
ω_t = Σ_{k=1..N} p_{k,t} · ω_{k,t} / Σ_{k=1..N} p_{k,t}, with p_{k,t} = α · (acc_{k,t} / acc_{max,t}) + β · (ds_k / DS) + γ · (dq_k / DQ)
wherein acc_{k,t} is the local model accuracy of vehicle v_k in round t, acc_{max,t} is the maximum local model accuracy among all participating vehicles, ds_k is the data richness of vehicle v_k's local training, DS is the combined data richness of all vehicles, dq_k is the data quantity of vehicle v_k's local training, DQ is the total data quantity of all vehicles, and the weights satisfy 0 ≤ α, β, γ ≤ 1 with α + β + γ = 1.
Further, in an embodiment of the present invention, the roadside unit aggregating the two-stage local models according to a federated weighting algorithm further includes reducing vehicle communication overhead by introducing an upload and download transmission mechanism:
if the local model parameters of vehicle v_k in round t differ substantially from the global aggregation model of round t−1, the local model parameters of the current round are uploaded; otherwise they are not. The difference between the two models is measured with the L2 norm, i.e.
‖ω_{k,t} − ω_{t−1}‖_2 > δ
wherein ω_{k,t} is the local model parameter of vehicle v_k in round t and ω_{t−1} is the global model parameter of round t−1; when ‖ω_{k,t} − ω_{t−1}‖_2 > δ, with δ a hyperparameter, the models differ substantially and the local model must be uploaded, otherwise it need not be uploaded;
if vehicle v_k uploaded its local model in round t and received a high weight in the global model aggregation, i.e. p_{k,t} > ε, with ε a hyperparameter, then the roadside unit does not distribute the global aggregation model of round t to that vehicle.
Further, in one embodiment of the present invention, after obtaining the two-stage global model, the method further includes:
and taking the two-stage global model as a new two-stage local model, and iterating S103 until the two-stage global model converges.
Further, in one embodiment of the present invention, after obtaining the three-stage local model, the method further includes:
when the performance of the three-stage local model is reduced, local data is used for fine tuning and optimization, and the local model with the best performance is obtained.
In order to achieve the above objective, an embodiment of a second aspect of the present invention provides an Internet of Vehicles multi-stage federated learning device for non-IID data, which includes the following modules:
the acquisition module is used for acquiring a global model of a vehicle participating in federal learning from a roadside unit as an initial local model;
the federal average multiparty calculation module is used for training the initial local model according to the local data of the vehicle to obtain a one-stage local model, and uploading the one-stage local model to the roadside unit so that the roadside unit aggregates the one-stage local model according to a FedAvg algorithm to obtain a one-stage global model;
the federal weighting multiparty calculation module is used for training the one-stage global model according to the local data to obtain a two-stage local model, and uploading the two-stage local model to the roadside unit so that the roadside unit aggregates the two-stage local model according to a federal weighting algorithm to obtain the two-stage global model;
and the personalized calculation module is used for training the two-stage global model according to the local data to obtain a three-stage local model.
To achieve the above object, an embodiment of the present invention provides a computer device, which is characterized by comprising a memory, a processor, and a computer program stored in the memory and capable of running on the processor, wherein the processor implements a multi-stage federal learning method for internet of vehicles facing to data with non-independent and same distribution as described above when executing the computer program.
To achieve the above object, an embodiment of a fourth aspect of the present invention provides a computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the Internet of Vehicles multi-stage federated learning method for non-IID data as described above.
The Internet of Vehicles multi-stage federated learning method for non-IID data provided by the embodiments of the invention overcomes the difficulty that federated learning is hard to converge on non-IID data. Through a three-stage federated learning mechanism consisting of a federated average multiparty computation stage, a federated weighted multiparty computation stage and a personalized computation stage, the method achieves fast convergence and high accuracy of the vehicles' local models. Combined with a transmission control strategy that selects the vehicles participating in federated learning, communication resources are fully utilized and computation and communication costs are reduced.
Drawings
The foregoing and/or additional aspects and advantages of the invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings, in which:
fig. 1 is a schematic flow diagram of an Internet of Vehicles multi-stage federated learning method for non-IID data according to an embodiment of the present invention;
fig. 2 is a schematic diagram of a federated learning scenario in the Internet of Vehicles according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a multi-stage federated learning system according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of an Internet of Vehicles multi-stage federated learning device for non-IID data according to an embodiment of the present invention.
Detailed Description
Embodiments of the present invention are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are illustrative and intended to explain the present invention and should not be construed as limiting the invention.
The Internet of Vehicles multi-stage federated learning method for non-IID data is described below with reference to the accompanying drawings.
Fig. 1 is a schematic flow chart of an Internet of Vehicles multi-stage federated learning method for non-IID data according to an embodiment of the present invention.
As shown in fig. 1, the Internet of Vehicles multi-stage federated learning method for non-IID data includes the following steps:
s101: acquiring a global model of a vehicle participating in federal learning from a roadside unit as an initial local model;
s102: training the initial local model according to the local data of the vehicle to obtain a one-stage local model, and uploading the one-stage local model to a roadside unit so that the roadside unit aggregates the one-stage local model according to a FedAvg algorithm to obtain a one-stage global model;
s103: training a one-stage global model according to the local data to obtain a two-stage local model, and uploading the two-stage local model to a roadside unit so that the roadside unit aggregates the two-stage local model according to a federal weighting algorithm to obtain the two-stage global model;
s104: and (3) training a two-stage global model according to the local data iteration to obtain a three-stage local model.
As shown in fig. 2, vehicles perform federated learning in communication with a roadside unit (RSU): each vehicle trains the model locally and uploads its local model parameters at the end of each round; the roadside unit performs global aggregation and distributes the newly aggregated model to the participating vehicles. Because vehicles move quickly, a vehicle may leave the roadside unit's communication range before a round of federated learning completes, wasting resources. To avoid this, the roadside unit selects specific vehicles to participate: when a vehicle's dwell time within the communication range of its current roadside unit exceeds the total time needed to complete model download, local training and model upload, vehicle v_k is selected to participate in federated learning; otherwise it does not participate in the current round.
Further, in one embodiment of the present invention, before acquiring a global model of a vehicle participating in federated learning from a roadside unit, the method includes:
selecting specific vehicles to participate in federated learning: when a vehicle's dwell time within the communication range of its current roadside unit exceeds the total time needed to complete model download, local training and model upload, vehicle v_k is selected to participate in federated learning; otherwise it does not participate in the current round. This is expressed as:
T_k^stay > T_{k,t}^down + T_{k,t}^train + T_{k,t}^up
wherein T_k^stay is the dwell time of vehicle v_k within the roadside unit's communication range, T_{k,t}^down is the model-download transmission time of vehicle v_k in round t, T_{k,t}^train is the local training time of vehicle v_k in round t, and T_{k,t}^up is the model-upload transmission time of vehicle v_k in round t.
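As an illustration, the dwell-time selection rule above can be sketched as follows (a minimal sketch; the function and field names are ours, and in practice the timing values would be estimated from vehicle speed, RSU coverage, model size and link rate):

```python
def select_participants(vehicles):
    """Select vehicles whose dwell time within the roadside unit's
    communication range covers a full federated-learning round:
    model download + local training + model upload."""
    selected = []
    for vid, t in vehicles.items():
        round_time = t["download"] + t["train"] + t["upload"]
        if t["stay"] > round_time:  # dwell time must exceed the full round
            selected.append(vid)
    return selected

# Hypothetical timing estimates (seconds) for two vehicles.
fleet = {
    "v1": {"stay": 30.0, "download": 4.0, "train": 12.0, "upload": 5.0},
    "v2": {"stay": 15.0, "download": 4.0, "train": 12.0, "upload": 5.0},
}
print(select_participants(fleet))  # only v1 stays the required 21 s
```

Vehicle v2 would leave the RSU's range before the 21-second round finishes, so it skips this round rather than waste transmission and training effort.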
On this basis, a multi-stage federated learning mechanism, FedWO, is proposed. It consists of three stages: a stage-1 federated average multiparty computation stage, a stage-2 federated weighted multiparty computation stage, and a stage-3 personalized computation stage. How each stage works is explained in detail next; a schematic diagram is shown in fig. 3.
At the initial stage of data collection, before the roadside unit has obtained a relatively stable global model, federated learning needs a large number of participants to contribute data. To this end, the first stage aggregates the global model with the FedAvg algorithm, giving all vehicles' local models the same weight.
Further, in one embodiment of the invention, the roadside unit aggregating the one-stage local models according to the FedAvg algorithm is expressed as:
ω_t = ω_{t−1} − (η/N) · Σ_{k=1..N} ∇F_k(ω_{t−1})
wherein ω_t is the global model parameter of round t, ω_{t−1} is the global model parameter of round t−1, N is the number of vehicles participating in federated learning, η is the local learning rate, and ∇F_k(ω_{t−1}) is the gradient of the local model loss function of vehicle v_k in round t.
Further, in one embodiment of the present invention, after obtaining the one-stage global model, the method further includes:
and (3) taking the one-stage global model as a new one-stage local model, and iterating S102 until the one-stage global model converges.
Server-side model performance is the key indicator for entering the second stage: once the server's model accuracy stabilizes, training proceeds to stage 2.
Considering the non-IID characteristics of vehicle data, continuing with the FedAvg algorithm makes it difficult for the global model to reach optimal training accuracy. To solve this problem, we propose assigning different vehicles different weights during global model aggregation. The weight is determined by three variables: (1) model accuracy: vehicles with higher local model accuracy receive higher weights in the global aggregation; (2) dataset richness: vehicles with richer datasets receive higher weights; (3) dataset size: the more data a participating vehicle holds, the higher its weight. Combining these three variables and assigning different weights to different vehicles' local models during global aggregation improves the generalization ability of the global model.
Further, in one embodiment of the invention, the roadside unit aggregating the two-stage local models according to the federated weighting algorithm is expressed as:
ω_t = Σ_{k=1..N} p_{k,t} · ω_{k,t} / Σ_{k=1..N} p_{k,t}, with p_{k,t} = α · (acc_{k,t} / acc_{max,t}) + β · (ds_k / DS) + γ · (dq_k / DQ)
wherein acc_{k,t} is the local model accuracy of vehicle v_k in round t, acc_{max,t} is the maximum local model accuracy among all participating vehicles, ds_k is the data richness of vehicle v_k's local training, DS is the combined data richness of all vehicles, dq_k is the data quantity of vehicle v_k's local training, DQ is the total data quantity of all vehicles, and the weights satisfy 0 ≤ α, β, γ ≤ 1 with α + β + γ = 1.
In the federated weighted multiparty computation stage, to make better use of computation and communication resources, a transmission control strategy is proposed that reduces transmission overhead by selecting the vehicles that participate in federated learning. Selecting participants requires evaluation along two dimensions: first, the uploading of vehicles' local models; second, the distribution of the roadside unit's global model.
Further, in one embodiment of the present invention, the roadside unit aggregating the two-stage local models according to the federated weighting algorithm further includes reducing vehicle communication overhead by introducing an upload and download transmission mechanism:
if the local model parameters of vehicle v_k in round t differ substantially from the global aggregation model of round t−1, the local model parameters of the current round are uploaded; otherwise they are not. The difference between the two models is measured with the L2 norm, i.e.
‖ω_{k,t} − ω_{t−1}‖_2 > δ
wherein ω_{k,t} is the local model parameter of vehicle v_k in round t and ω_{t−1} is the global model parameter of round t−1; when ‖ω_{k,t} − ω_{t−1}‖_2 > δ, with δ a hyperparameter, the models differ substantially and the local model must be uploaded, otherwise it need not be uploaded;
if vehicle v_k uploaded its local model in round t and received a high weight in the global model aggregation, i.e. p_{k,t} > ε, with ε a hyperparameter, then the roadside unit does not distribute the global aggregation model of round t to that vehicle.
Further, in one embodiment of the present invention, after obtaining the two-stage global model, the method further includes:
and taking the two-stage global model as a new two-stage local model, and iterating S103 until the two-stage global model converges.
The participant selection mechanism guides the behavior of vehicles and roadside units during federated weighted multiparty computation, reducing unnecessary data transmission and improving resource utilization. Meanwhile, to keep the local and global models consistent, two rules are set: 1) in each round, every vehicle participates in at least one of uploading or downloading model parameters; a vehicle is not allowed to have no interaction with the server in a round. 2) If in round t the roadside unit does not distribute the global model to vehicle v_k, then in round t+1 that vehicle must actively upload its local model parameters. These rules keep the federated weighted multiparty computation running stably and improve transmission efficiency between vehicles and roadside units.
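The two transmission rules can be sketched as follows (a minimal sketch; delta, epsilon and the aggregation weight correspond to the hyperparameters δ, ε and p_{k,t} discussed above, while the function names are ours):

```python
import math

def should_upload(local_w, global_w_prev, delta):
    """Upload rule: send the round-t local parameters only when their L2
    distance from the round-(t-1) global model exceeds delta."""
    dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(local_w, global_w_prev)))
    return dist > delta

def should_download(uploaded, agg_weight, epsilon):
    """Download rule: the roadside unit skips distributing the new global
    model to a vehicle that uploaded this round with aggregation weight
    above epsilon, since its local model already tracks the global one."""
    return not (uploaded and agg_weight > epsilon)

# A vehicle whose model barely moved keeps quiet; a drifted one uploads.
print(should_upload([1.0, 1.0], [1.0, 1.05], delta=0.5))  # False
print(should_upload([2.0, 1.0], [1.0, 1.0], delta=0.5))   # True
```

Together the rules suppress transmissions that carry little new information in either direction, which is where the claimed communication savings come from.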
Traditional federated computation methods, such as federated averaging and federated weighted computation, face challenges on non-IID data. In particular, the generic model obtained by global aggregation after these stages may fail to capture data characteristics unique to a particular data source (e.g. a vehicle). This loss can prevent further optimization of a particular local model and can even degrade its performance. To solve this problem, we propose fine-tuning the generic model, once obtained, with local data characteristics so that it better fits the local data distribution. Such tuning based on local characteristics effectively improves model performance, especially when the local data differs from the overall distribution. We call this fine-tuning stage the "personalization" stage; its goal is to ensure the model accurately captures and exploits each data source's unique information, achieving optimal performance on local data.
Further, in one embodiment of the present invention, after obtaining the three-stage local model, the method further includes:
when the performance of the three-stage local model is reduced, local data is used for fine tuning and optimization, and the local model with the best performance is obtained.
Such a strategy may ensure that the model is better adapted to the specific data distribution of the vehicle, thereby achieving a higher model accuracy. In this "personalisation" phase, the main objective is to maximize the accuracy of the local model. This requires that we focus on the characteristics of the local data, ensuring that the model can exploit these characteristics to make efficient predictions or classifications, thus achieving optimal performance in a particular application scenario.
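The stage-3 personalization step can be sketched as follows (a toy scalar linear model stands in for the real network; the data, learning rate and epoch count are illustrative assumptions):

```python
def personalize(w_global, b_global, xs, ys, lr=0.05, epochs=50):
    """Stage-3 fine-tuning: start from the converged global parameters,
    run gradient descent on the vehicle's own data (model y = w*x + b,
    mean-squared-error loss), and keep the best-performing parameters."""
    def mse(w, b):
        return sum((w * x + b - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

    w, b = w_global, b_global
    best_w, best_b, best_loss = w, b, mse(w, b)
    for _ in range(epochs):
        gw = sum(2 * x * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
        gb = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
        w, b = w - lr * gw, b - lr * gb
        loss = mse(w, b)
        if loss < best_loss:  # retain the best local model seen so far
            best_w, best_b, best_loss = w, b, loss
    return best_w, best_b, best_loss

# The global model w=1 underfits this vehicle's local relation y = 2x.
xs, ys = [0.0, 1.0, 2.0, 3.0], [0.0, 2.0, 4.0, 6.0]
w_f, b_f, loss_f = personalize(1.0, 0.0, xs, ys)
```

Keeping the best parameters seen guards against the performance degradation mentioned above: if further fine-tuning ever hurts, the vehicle falls back to its best local model.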
The Internet of Vehicles multi-stage federated learning method for non-IID data provided by the embodiments of the invention overcomes the difficulty that federated learning is hard to converge on non-IID data. Through a three-stage federated learning mechanism consisting of a federated average multiparty computation stage, a federated weighted multiparty computation stage and a personalized computation stage, the method achieves fast convergence and high accuracy of the vehicles' local models. Combined with a transmission control strategy that selects the vehicles participating in federated learning, communication resources are fully utilized and computation and communication costs are reduced.
To realize the above embodiments, the invention further provides an Internet of Vehicles multi-stage federated learning device for non-IID data.
Fig. 4 is a schematic structural diagram of an Internet of Vehicles multi-stage federated learning device for non-IID data according to an embodiment of the present invention.
As shown in fig. 4, the Internet of Vehicles multi-stage federated learning device for non-IID data includes an acquisition module 100, a federated average multiparty computation module 200, a federated weighted multiparty computation module 300 and a personalized computation module 400, wherein:
the acquisition module is used for acquiring a global model of a vehicle participating in federal learning from a roadside unit as an initial local model;
the federal average multiparty calculation module is used for training the initial local model according to the local data of the vehicle to obtain a one-stage local model, and uploading the one-stage local model to the roadside unit so that the roadside unit aggregates the one-stage local model according to the FedAvg algorithm to obtain a one-stage global model;
the federal weighting multiparty calculation module is used for training a one-stage global model according to the local data to obtain a two-stage local model, and uploading the two-stage local model to the roadside unit so that the roadside unit aggregates the two-stage local model according to the federal weighting algorithm to obtain the two-stage global model;
and the personalized calculation module is used for training the two-stage global model according to the local data to obtain a three-stage local model.
To achieve the above object, an embodiment of a third aspect of the present invention provides a computer device, comprising a memory, a processor, and a computer program stored in the memory and runnable on the processor, wherein the processor, when executing the computer program, implements the Internet of Vehicles multi-stage federated learning method for non-IID data as described above.
To achieve the above object, an embodiment of a fourth aspect of the present invention provides a computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the Internet of Vehicles multi-stage federated learning method for non-IID data as described above.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic representations of the above terms are not necessarily directed to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, the different embodiments or examples described in this specification and the features of the different embodiments or examples may be combined and combined by those skilled in the art without contradiction.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In the description of the present invention, the meaning of "plurality" means at least two, for example, two, three, etc., unless specifically defined otherwise.
While embodiments of the present invention have been shown and described above, it will be understood that the above embodiments are illustrative and not to be construed as limiting the invention, and that variations, modifications, alternatives and variations may be made to the above embodiments by one of ordinary skill in the art within the scope of the invention.

Claims (10)

1. An Internet of Vehicles multi-stage federated learning method for non-IID (not independent and identically distributed) data, characterized by comprising the following steps:
S101, acquiring, for a vehicle participating in federated learning, a global model from a roadside unit as an initial local model;
S102, training the initial local model according to the local data of the vehicle to obtain a one-stage local model, and uploading the one-stage local model to the roadside unit so that the roadside unit aggregates the one-stage local models according to a FedAvg algorithm to obtain a one-stage global model;
S103, training the one-stage global model according to the local data to obtain a two-stage local model, and uploading the two-stage local model to the roadside unit so that the roadside unit aggregates the two-stage local models according to a federated weighting algorithm to obtain a two-stage global model;
S104, iteratively training the two-stage global model according to the local data to obtain a three-stage local model.
2. The method of claim 1, comprising, prior to obtaining a global model of a vehicle participating in federal learning from a roadside unit:
selecting a particular vehicle to participate in federal learning: vehicle v_k is selected to participate in federal learning when its current residence time within the communication range of the roadside unit is greater than the total time required to complete model download, local training and model upload; otherwise, it does not participate in the current round of federal learning, which is expressed as follows:

T_k^stay > T_{k,t}^down + T_{k,t}^train + T_{k,t}^up

wherein T_k^stay is the residence time of vehicle v_k within the roadside unit communication range, T_{k,t}^down is the transmission time for vehicle v_k to download the model in the t-th round, T_{k,t}^train is the local training time of vehicle v_k in the t-th round, and T_{k,t}^up is the transmission time for vehicle v_k to upload the model in the t-th round.
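The dwell-time selection rule in claim 2 can be expressed as a minimal Python sketch; the dictionary field names (`t_stay`, `t_down`, `t_train`, `t_up`) are illustrative assumptions, not identifiers from the patent:

```python
def select_vehicles(vehicles):
    """Select vehicles whose remaining dwell time inside the roadside
    unit's communication range covers one full round: model download,
    local training, and model upload (the claim-2 condition)."""
    selected = []
    for v in vehicles:
        total_round_time = v["t_down"] + v["t_train"] + v["t_up"]
        if v["t_stay"] > total_round_time:
            selected.append(v["id"])
    return selected

fleet = [
    {"id": "v1", "t_stay": 12.0, "t_down": 2.0, "t_train": 6.0, "t_up": 2.0},
    {"id": "v2", "t_stay": 8.0,  "t_down": 2.0, "t_train": 6.0, "t_up": 2.0},
]
print(select_vehicles(fleet))  # v1 qualifies (12 > 10); v2 does not (8 < 10)
```

A vehicle that would leave the communication range mid-round is skipped, which avoids wasted downloads and stale partial uploads.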
3. The method of claim 1, wherein the roadside unit aggregating the one-stage local model according to the FedAvg algorithm is expressed as:

ω_t = ω_{t-1} − η · (1/N) · Σ_{k=1}^{N} ∇F_k(ω_{t-1})

wherein ω_t is the global model parameter of the t-th round, ω_{t-1} is the global model parameter of the (t−1)-th round, N is the number of vehicles participating in federal learning, η is the local model learning rate, and ∇F_k(ω_{t-1}) is the gradient of the local model loss function of vehicle v_k in the t-th round.
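A minimal sketch of the gradient-averaging form of the claim-3 aggregation step, assuming each vehicle reports the gradient of its local loss at the previous global parameters (the parameter-averaging variant of FedAvg is equivalent for one local step):

```python
import numpy as np

def fedavg_step(w_prev, local_grads, lr):
    """One FedAvg-style aggregation round:
    w_t = w_{t-1} - lr * (mean of the N local gradients)."""
    avg_grad = np.mean(local_grads, axis=0)
    return w_prev - lr * avg_grad

w = np.array([1.0, -2.0])
grads = [np.array([0.2, 0.4]), np.array([0.0, -0.4])]
w_next = fedavg_step(w, grads, lr=0.5)
print(w_next)  # [0.95, -2.0]: mean gradient is [0.1, 0.0]
```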
4. The method of claim 1, further comprising, after obtaining the one-phase global model:
taking the one-stage global model as a new one-stage local model, and iterating S102 until the one-stage global model converges.
5. The method of claim 1, wherein the roadside unit aggregating the two-stage local model according to the federal weighting algorithm is expressed as:

p_k^t = α · (A_k^t / A_max^t) + β · (DS_k^t / DS) + γ · (DQ_k^t / DQ)

wherein A_k^t is the local model accuracy of vehicle v_k in the t-th round and A_max^t is the maximum local model accuracy among all participating vehicles, DS_k^t is the data richness of the local training of vehicle v_k in the t-th round and DS is the data richness of the total vehicle combination, DQ_k^t is the data quantity of the local training of vehicle v_k in the t-th round and DQ is the data quantity of the total vehicle combination, and the weights satisfy 0 ≤ α, β, γ ≤ 1 and α + β + γ = 1.
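A minimal sketch of the claim-5 weighting, assuming each per-vehicle weight mixes normalized accuracy, data richness, and data quantity, and that the weights are renormalized to sum to 1 before aggregation; field names and the renormalization step are illustrative assumptions, not details from the patent:

```python
def weighted_aggregate(models, alpha, beta, gamma):
    """Aggregate one-dimensional model parameters with per-vehicle
    weights combining accuracy, data richness, and data quantity."""
    assert abs(alpha + beta + gamma - 1.0) < 1e-9
    acc_max = max(m["acc"] for m in models)
    ds_total = sum(m["ds"] for m in models)
    dq_total = sum(m["dq"] for m in models)
    raw = [alpha * m["acc"] / acc_max
           + beta * m["ds"] / ds_total
           + gamma * m["dq"] / dq_total for m in models]
    z = sum(raw)
    weights = [r / z for r in raw]  # renormalize so weights sum to 1
    agg = sum(w * m["params"] for w, m in zip(weights, models))
    return agg, weights

models = [
    {"acc": 0.9, "ds": 3.0, "dq": 100.0, "params": 1.0},
    {"acc": 0.6, "ds": 1.0, "dq": 50.0,  "params": 2.0},
]
agg, weights = weighted_aggregate(models, alpha=0.5, beta=0.3, gamma=0.2)
```

The vehicle with higher accuracy, richer data, and more samples receives the larger weight, so on non-independent co-distributed data the aggregation is pulled toward better-calibrated participants rather than a uniform average.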
6. The method of claim 1 or 4, wherein causing the roadside unit to aggregate the two-stage local model according to the federal weighting algorithm further comprises reducing vehicle communication overhead by introducing an upload and download transmission mechanism:

the vehicle v_k uploads its local model parameters of the t-th round only if they differ sufficiently from the global aggregation model of the (t−1)-th round; otherwise, the local model parameters of the current round are not uploaded; wherein the difference between the two models is calculated using the L2 norm, i.e.

d_k^t = ||ω_k^t − ω_{t-1}||_2

wherein ω_k^t is the local model parameter of vehicle v_k in the t-th round and ω_{t-1} is the global model parameter of the (t−1)-th round; when d_k^t > δ, δ being a hyperparameter, the difference between the models is considered large and the local model needs to be uploaded; otherwise, the local model does not need to be uploaded;

if the vehicle v_k uploads its local model in the t-th round and has a higher weight in the global model aggregation, i.e. its aggregation weight exceeds a hyperparameter threshold, then the roadside unit does not distribute the global aggregation model of the t-th round to that vehicle.
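The L2-norm upload filter in claim 6 can be sketched as follows; the threshold value and parameter shapes are illustrative:

```python
import numpy as np

def should_upload(w_local, w_global_prev, delta):
    """Upload the local parameters only when their L2 distance to the
    previous global model exceeds the hyperparameter delta; near-identical
    models are withheld to save uplink bandwidth."""
    return float(np.linalg.norm(w_local - w_global_prev)) > delta

w_g = np.zeros(3)
print(should_upload(np.array([0.1, 0.1, 0.1]), w_g, delta=0.5))  # False (d ≈ 0.17)
print(should_upload(np.array([1.0, 1.0, 1.0]), w_g, delta=0.5))  # True  (d ≈ 1.73)
```

The matching download rule is symmetric: a vehicle whose upload dominated the aggregation already holds a model close to the new global one, so redistributing it would waste downlink bandwidth.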
7. The method of claim 1, further comprising, after obtaining the two-stage global model:
and taking the two-stage global model as a new two-stage local model, and iterating S103 until the two-stage global model converges.
8. The method of claim 1, further comprising, after obtaining the three-phase local model:
when the performance of the three-stage local model is reduced, local data is used for fine tuning and optimization, and the local model with the best performance is obtained.
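The claim-8 fallback can be sketched as a keep-best checkpoint pattern; the accuracy numbers are illustrative stand-ins for models evaluated on local validation data:

```python
def personalize(global_acc, finetune_round_accs):
    """Track the best local accuracy across fine-tuning rounds; if
    fine-tuning degrades performance, fall back to the best earlier
    checkpoint (round index -1 means: keep the global model)."""
    best_round, best_acc = -1, global_acc
    for i, acc in enumerate(finetune_round_accs):
        if acc > best_acc:
            best_round, best_acc = i, acc
    return best_round, best_acc

print(personalize(0.80, [0.83, 0.86, 0.84]))  # (1, 0.86): round 1 is best
print(personalize(0.90, [0.85, 0.88]))        # (-1, 0.9): keep the global model
```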
9. The Internet of vehicles multi-stage federation learning device for the non-independent co-distributed data is characterized by comprising the following modules:
the acquisition module is used for acquiring a global model of a vehicle participating in federal learning from a roadside unit as an initial local model;
the federal average multiparty calculation module is used for training the initial local model according to the local data of the vehicle to obtain a one-stage local model, and uploading the one-stage local model to the roadside unit so that the roadside unit aggregates the one-stage local model according to a FedAvg algorithm to obtain a one-stage global model;
the federal weighting multiparty calculation module is used for training the one-stage global model according to the local data to obtain a two-stage local model, and uploading the two-stage local model to the roadside unit so that the roadside unit aggregates the two-stage local model according to a federal weighting algorithm to obtain the two-stage global model;
and the personalized calculation module is used for training the two-stage global model according to the local data to obtain a three-stage local model.
10. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements the Internet of vehicles multi-stage federation learning method for non-independent co-distributed data according to any one of claims 1-8.
CN202311792112.0A 2023-12-22 2023-12-22 Internet of vehicles multi-stage federation learning method for non-independent co-distributed data Pending CN117787440A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311792112.0A CN117787440A (en) 2023-12-22 2023-12-22 Internet of vehicles multi-stage federation learning method for non-independent co-distributed data


Publications (1)

Publication Number Publication Date
CN117787440A 2024-03-29

Family

ID=90386418



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination