CN115623445A - Efficient communication method based on federated learning in an Internet of Vehicles environment

Efficient communication method based on federated learning in an Internet of Vehicles environment

Info

Publication number
CN115623445A
Authority
CN
China
Prior art keywords
vehicle
parameters
base station
training
representing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211401451.7A
Other languages
Chinese (zh)
Inventor
孙恩昌
张卉
何若兰
李梦思
张冬英
司鹏搏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Technology
Original Assignee
Beijing University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Technology filed Critical Beijing University of Technology
Priority to CN202211401451.7A priority Critical patent/CN115623445A/en
Publication of CN115623445A publication Critical patent/CN115623445A/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W 4/30 Services specially adapted for particular environments, situations or purposes
    • H04W 4/40 Services specially adapted for particular environments, situations or purposes for vehicles, e.g. vehicle-to-pedestrians [V2P]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 10/00 Administration; Management
    • G06Q 10/04 Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • G06Q 10/047 Optimisation of routes or paths, e.g. travelling salesman problem
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 10/00 Administration; Management
    • G06Q 10/06 Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q 10/063 Operations research, analysis or management
    • G06Q 10/0631 Resource planning, allocation, distributing or scheduling for enterprises or organisations
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q 50/40 Business processes related to the transportation industry
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W 4/30 Services specially adapted for particular environments, situations or purposes
    • H04W 4/40 Services specially adapted for particular environments, situations or purposes for vehicles, e.g. vehicle-to-pedestrians [V2P]
    • H04W 4/44 Services specially adapted for particular environments, situations or purposes for vehicles, e.g. vehicle-to-pedestrians [V2P] for communication between vehicles and infrastructures, e.g. vehicle-to-cloud [V2C] or vehicle-to-home [V2H]
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 30/00 Reducing energy consumption in communication networks
    • Y02D 30/70 Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Business, Economics & Management (AREA)
  • Engineering & Computer Science (AREA)
  • Human Resources & Organizations (AREA)
  • Economics (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Marketing (AREA)
  • Tourism & Hospitality (AREA)
  • Quality & Reliability (AREA)
  • Operations Research (AREA)
  • Development Economics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Game Theory and Decision Science (AREA)
  • Educational Administration (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Primary Health Care (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

The invention discloses an efficient communication method based on federated learning in an Internet of Vehicles environment, which comprises the following steps: S1, the vehicle user equipment trains a model using its local data; S2, it is judged whether the delay for training the local model parameters on the vehicle user information collected in step S1, transmitting the local model training parameters to the base station and receiving the global model parameters, and the energy consumption for training and for transmitting the parameters to the base station, meet a preset threshold; if so, step S3 is executed, otherwise the method returns to step S1; S3, wireless resource blocks are allocated to the vehicle user equipment using a shortest-path optimization algorithm; S4, the vehicle devices allocated resource blocks quantize and encode their parameters; S5, the base station decodes the received model parameters, performs weighted aggregation, and broadcasts the aggregated, updated global model parameters to each vehicle user equipment; and S6, the vehicle user equipment adjusts its learning rate. The invention uses a stochastic gradient quantization algorithm for optimization and is effective for uplink communication.

Description

Efficient communication method based on federated learning in an Internet of Vehicles environment
Technical Field
The invention relates to the technical field of communication, and in particular to an efficient communication method based on federated learning (FL) in an Internet of Vehicles environment.
Background
In recent years, with the rapid development of technologies such as mobile communication and the growth of diversified service demands from vehicle users, the Internet of Vehicles (IoV) and intelligent connected vehicles play an increasingly important role in intelligent transportation systems. Constructing an efficient, safe and sustainable traffic environment has therefore received widespread attention.
Many machine learning (ML) algorithms have been developed for processing and analyzing data. Conventional centralized ML data processing methods typically require vehicle devices to transmit data over a channel to a base station (BS) for centralized model training. While centralized model training has many advantages and applications, the data generated by IoV grows exponentially, and the BS may not have sufficient storage and computing resources to store and process such a large amount of data, which limits the performance of the trained model. More importantly, the raw data generated or collected by vehicle devices usually contains personal information of the vehicle user, and the security and privacy of the data during the interaction between the vehicle devices and the base station are difficult to guarantee, so there is a risk of privacy leakage. It is therefore urgent to find a solution that can both preserve user privacy and process data efficiently. The invention introduces FL into IoV, so that the privacy of vehicle users is protected while data is processed efficiently.
FL originates from distributed ML and is a distributed training framework that can achieve privacy protection. It allows multiple vehicle devices to jointly train a global ML model. During learning, the vehicle devices and the BS exchange only local model parameters while the data remain stored locally, which greatly reduces the risk of leaking the private data of the on-board devices. The FL paradigm is therefore introduced into IoV.
Furthermore, when the amount of data transmitted by a vehicle device is large, the uplink communication delay can increase substantially. To address this problem, the invention quantizes the local model parameters with an error-feedback-based quantized stochastic gradient descent algorithm, reducing the bit size of the transmitted parameters and thus the uplink communication delay.
Disclosure of Invention
To address the shortcomings of the prior art, the invention provides an FL-based efficient communication method in an Internet of Vehicles environment.
An efficient communication method based on federated learning in an Internet of Vehicles environment comprises the following steps:
S1, the base station performs an initialization process, and each vehicle user equipment trains a model using its local data;
S2, the base station judges whether the delay for training the local model parameters on the vehicle user information collected in step S1, transmitting the local model training parameters to the base station and receiving the global model parameters, and the energy consumption for training and for transmitting the parameters to the base station, meet a preset threshold; if so, step S3 is executed, otherwise the method returns to step S1;
S3, for the vehicle users satisfying the threshold conditions of step S2, the base station allocates wireless resource blocks to the currently eligible vehicle user equipment using a shortest-path optimization algorithm;
S4, the vehicle devices allocated resource blocks quantize and encode their parameters and then transmit them to the base station;
S5, the base station decodes the received model parameters, aggregates them with weights determined by the training effect of each model, and broadcasts the aggregated, updated global model parameters to each vehicle user equipment;
and S6, the vehicle user equipment adjusts its learning rate and the method returns to step S1 (an illustrative sketch of this loop follows).
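For readability, the following is a minimal, illustrative Python sketch of one pass through this S1-S6 loop on a toy linear-regression task. It is not part of the patent: the eligibility check of step S2 and the resource-block matching of step S3 are simplified so that every vehicle participates, the learning-rate adjustment of step S6 is omitted, and all function names and numbers are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: N vehicles, each holding a local linear-regression data set (X, y).
N, d, K = 15, 5, 40
true_w = rng.normal(size=d)
Xs = [rng.normal(size=(K, d)) for _ in range(N)]
data = [(X, X @ true_w + 0.1 * rng.normal(size=K)) for X in Xs]

def local_train(w, X, y, lr=0.05, epochs=5):
    """S1: local gradient descent on the vehicle's own data (squared loss)."""
    w = w.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def quantize(v, s=256):
    """S4: unbiased stochastic quantization of the update to s levels."""
    norm = np.linalg.norm(v)
    if norm == 0.0:
        return v
    level = np.floor(np.abs(v) / norm * s + rng.random(v.shape))
    return np.sign(v) * level * norm / s

w_global = np.zeros(d)
for t in range(20):                       # the S1-S6 loop, run for 20 rounds
    selected = range(N)                   # S2/S3 simplified: every vehicle is eligible
    updates = [quantize(local_train(w_global, *data[n])) for n in selected]
    w_global = np.mean(updates, axis=0)   # S5: equal-weight aggregation (equal K_n);
                                          # S6 (learning-rate adjustment) is omitted
print("distance to the true model:", np.linalg.norm(w_global - true_w))
```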
The FL-based communication model for vehicle users in an IoV scenario is shown in FIG. 1. Assume a road segment of radius r = 500 m with a BS located at the center of the area and N vehicle users randomly distributed within it, represented by the set U = {1, 2, …, N}. During operation, the BS and the vehicle user equipment cooperate: each vehicle user equipment trains a model on its local data and, after training, uploads the local model training parameters to the BS, which aggregates and updates them into global model parameters; the BS then broadcasts the global model parameters to each participating vehicle user equipment, and this process repeats until a given number of iterations is reached.
The model considered by the invention has two communication phases: the local devices upload their local model training parameters to the BS (uplink communication), and the BS broadcasts the global model parameters to the individual vehicle users (downlink communication). For the uplink, a time-division multiple access protocol is used for parameter transmission.
The BS collects the basic operating information of each vehicle user, namely the basic operating parameters of the vehicle user equipment, including the effective capacitance coefficient c_n of vehicle user equipment n, the CPU frequency f_n of user equipment n, the number of CPU cycles σ_n required by user equipment n to process a parameter, and the distance between vehicle device n and the base station. The mobility of each device is modeled by a random walk. At each time interval t, a vehicle device either stays at its current position or moves in one of four directions: 1) forward; 2) backward; 3) to the left; 4) to the right. The movement probabilities of each vehicle device n at time slot t are p_{nt} = [p_{nt,0}, p_{nt,1}, p_{nt,2}, p_{nt,3}, p_{nt,4}], where p_{nt,0} is the probability that vehicle device n is stationary at time slot t, p_{nt,1} the probability that it moves forward, p_{nt,2} the probability that it moves backward, p_{nt,3} the probability that it moves to the left, and p_{nt,4} the probability that it moves to the right. The position of each vehicle device at time slot t is expressed as φ_{nt} = [φ_{nt,1}, φ_{nt,2}], where φ_{nt,1} is the abscissa and φ_{nt,2} the ordinate of the position of vehicle device n relative to the base station at time slot t. Assuming each device moves at speed v_n and each time slot has duration Δt, the position of vehicle device n at time t + 1 can be expressed as:
Figure BDA0003935211810000021
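A minimal sketch of this random-walk mobility model follows. The mapping of the four directions to coordinate offsets and the update rule (move by v_n·Δt in the sampled direction) are assumptions, since the exact update equation appears only as an image in the source.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed direction mapping: stay, forward (+y), backward (-y), left (-x), right (+x).
DIRS = np.array([[0, 0], [0, 1], [0, -1], [-1, 0], [1, 0]], dtype=float)

def step(phi, p_nt, v_n, dt):
    """One slot of the random walk: phi is the position of vehicle n relative to the BS,
    p_nt = [p0, ..., p4] the movement probabilities, v_n the speed, dt the slot length."""
    k = rng.choice(5, p=p_nt)
    return phi + v_n * dt * DIRS[k]

def distance_to_bs(phi):
    """Distance between the vehicle and the base station located at the origin."""
    return float(np.linalg.norm(phi))

phi = np.array([120.0, -80.0])        # initial position relative to the BS (metres)
p_nt = [0.2, 0.2, 0.2, 0.2, 0.2]      # example movement probabilities
for t in range(10):
    phi = step(phi, p_nt, v_n=15.0, dt=1.0)
    print(t, phi.round(1), round(distance_to_bs(phi), 1))
```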
the distance between the vehicular apparatus n and the base station can be expressed as:
d_{n,t} = √(φ_{nt,1}² + φ_{nt,2}²)
the parametric transmission rates for the vehicle user equipments n and BS can be derived as:
Figure BDA0003935211810000023
Figure BDA0003935211810000024
where R denotes the total number of resource blocks in the uplink, B_u and B_d denote the transmission bandwidths of the uplink and the downlink respectively, N_0 denotes the noise power spectral density, p_n and p_B denote the transmit powers of vehicle user equipment n and the BS respectively, h_n^t denotes the channel gain between vehicle user n and the BS, and Σ_{n'∈N'} p_{n'} h_{n'} and Σ_{B'≠B} p_{B'} h_{B'} denote the interference caused to the vehicle users and BS participating in the FL algorithm by the vehicle users and base stations that do not participate;
the delay for a vehicle user to train the local model, transmit the local model training parameters to the base station, and receive the global model parameters is represented as:
Figure BDA0003935211810000026
where the terms respectively denote the local model training delay, the uplink parameter transmission delay and the downlink parameter reception delay of the vehicle user in each iteration, so the total delay is expressed as the sum of these three components.
Here ω denotes the local model training parameters of the vehicle user, H(ω) denotes the size of the local model training parameters transmitted by the vehicle user to the base station, g denotes the global model parameters, H(g) denotes the size of the global model parameters broadcast by the base station to each vehicle user, and K_n denotes the size of the local training data set of vehicle user n;
the energy consumption of the vehicle user for training the local model and transmitting the local model training parameters to the base station in each iteration process is respectively represented as:
Figure BDA0003935211810000031
since the base station is continuously powered, the present invention does not consider the energy consumption of the base station, so the total energy consumption of each iteration process is expressed as
Figure BDA0003935211810000032
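A sketch of the step-S2 eligibility check built on this delay and energy model follows. The decomposition into training, upload and download components follows the text; the inner formulas (computation delay σ_n·K_n/f_n, computation energy c_n·σ_n·K_n·f_n², upload delay equal to H(ω) divided by the uplink rate, upload energy equal to transmit power times upload delay) are common modelling choices assumed here, since the patent gives these expressions only as images.

```python
from dataclasses import dataclass

@dataclass
class Vehicle:
    f: float      # CPU frequency f_n (cycles/s)
    c: float      # effective capacitance coefficient c_n
    sigma: float  # CPU cycles needed per data sample, sigma_n
    K: int        # size of the local training data set, K_n
    p: float      # uplink transmit power p_n (W)

def iteration_cost(v: Vehicle, up_rate: float, down_rate: float,
                   h_omega: float, h_g: float):
    """Per-iteration delay (s) and energy (J) under the assumed formulas."""
    t_cmp = v.sigma * v.K / v.f              # local model training delay
    t_up = h_omega / up_rate                 # upload H(omega) bits at the uplink rate
    t_down = h_g / down_rate                 # receive H(g) bits at the downlink rate
    e_cmp = v.c * v.sigma * v.K * v.f ** 2   # local training energy
    e_up = v.p * t_up                        # parameter transmission energy
    return t_cmp + t_up + t_down, e_cmp + e_up

def eligible(v, up_rate, down_rate, h_omega, h_g, t_max, e_max):
    """S2: the vehicle participates only if both thresholds are met."""
    t, e = iteration_cost(v, up_rate, down_rate, h_omega, h_g)
    return t <= t_max and e <= e_max

veh = Vehicle(f=1e9, c=1e-28, sigma=2e4, K=500, p=0.2)
print(eligible(veh, up_rate=2e6, down_rate=1e7, h_omega=1e5, h_g=1e5,
               t_max=0.5, e_max=0.5))
```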
The invention uses a shortest-path optimization algorithm to find the optimal matching between vehicle users and wireless resource blocks according to the parameter transmission rates of the vehicle user equipment. The uplink transmission rate of the vehicle devices is taken as the cost matrix; when the parameter transmission rate is maximized, the communication delay in the interaction between the vehicle devices and the base station is minimized, which is expressed as:
Figure BDA0003935211810000033
where r_{nm} is a binary matrix: r_{nm} = 1 means that vehicle user n is assigned radio resource block vector m, and r_{nm} = 0 otherwise;
the shortest path optimization algorithm selected by the invention is essentially used for processing the problem of minimum cost, and the matching problem related in the invention is the problem of maximum cost, so that the problem of maximum cost is firstly converted into the problem of minimum cost for solving when the problem is solved, as shown in the following:
First let M = max C and transform the cost matrix as c′_{nm} = M - c_{nm}; the problem then turns into:

min Σ_{n∈N} Σ_{m} c′_{nm} r_{nm}
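This maximum-rate matching can be solved with a standard assignment solver. The sketch below builds a rate matrix from an assumed Shannon-type uplink rate, applies the M - c_nm conversion described above, and uses scipy.optimize.linear_sum_assignment as one possible realization of the shortest-path optimization; the patent does not prescribe this library or the exact rate expression.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(2)

def uplink_rate(bandwidth, power, gain, noise_psd, interference):
    """Assumed Shannon-type uplink rate (bits/s) for one vehicle on one resource block."""
    return bandwidth * np.log2(1 + power * gain / (noise_psd * bandwidth + interference))

N, R = 6, 8                                     # vehicle users, uplink resource blocks
B_u, N0, I = 180e3, 4e-21, 1e-13                # bandwidth, noise PSD, interference
gains = rng.uniform(1e-10, 1e-8, size=(N, R))   # channel gain per (vehicle, block) pair
C = uplink_rate(B_u, 0.2, gains, N0, I)         # cost matrix: achievable rates

# Convert the maximum-rate matching into a minimum-cost problem: c' = M - c.
M = C.max()
rows, cols = linear_sum_assignment(M - C)       # Hungarian-style assignment

for n, m in zip(rows, cols):
    print(f"vehicle {n} -> resource block {m}, {C[n, m] / 1e6:.2f} Mbit/s")
print(f"total rate: {C[rows, cols].sum() / 1e6:.2f} Mbit/s")
```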
based on the above analysis, the FL loss function in the Internet of vehicles environment is represented as:
Figure BDA0003935211810000036
Figure BDA0003935211810000037
Figure BDA0003935211810000038
Figure BDA0003935211810000039
Figure BDA00039352118100000310
Σ_{n∈N} r_{n,t} ≤ R    (5)
where constraint (4) indicates that each resource block vector can be allocated to only one user, and constraint (5) indicates that the number of vehicle user equipments is less than or equal to the number of wireless resource block vectors.
The invention reduces the bit size of the parameters transmitted by a vehicle user to the BS by means of an error-feedback-based historical quantized stochastic gradient descent algorithm, further reducing the communication delay of the vehicle devices during transmission. Quantization, an important branch of model compression, is effective in reducing the bit size of parameter transmission. The specific workflow is as follows:
First, when quantizing the parameters, the invention considers not only the training parameters of the current model but also the error accumulated in the previous learning process. The quantization error in each iteration is expressed as:
Figure BDA00039352118100000311
the error accumulation is expressed as:
Figure BDA00039352118100000312
where β (0 ≤ β ≤ 1) is a time decay factor that prevents excessive accumulation of the error.
As the iterative process progresses, the error is updated in real time as:
Figure BDA00039352118100000313
with the addition of the error feedback mechanism, the parametric quantization function can be expressed as:
Figure BDA00039352118100000314
Figure BDA0003935211810000041
where
Figure BDA0003935211810000042
is an independent random variable, expressed as:
Figure BDA0003935211810000043
where s corresponds to the number of quantization levels and l ∈ [0, s) is an integer such that
Figure BDA0003935211810000044
holds;
the quantized parameters are expressed as:
Figure BDA0003935211810000045
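A sketch of this step-S4 quantization with error feedback follows, assuming QSGD-style unbiased stochastic quantization to s levels and an error accumulator decayed by β in [0, 1]; the exact quantization and error-update equations appear only as images in the source, so this follows the usual error-feedback formulation rather than the patent's precise definition.

```python
import numpy as np

rng = np.random.default_rng(3)

def stochastic_quantize(v, s=16):
    """Unbiased stochastic quantization: each entry is rounded to one of s levels
    of |v_i| / ||v||, keeping its sign (QSGD-style)."""
    norm = np.linalg.norm(v)
    if norm == 0.0:
        return v.copy()
    ratio = np.abs(v) / norm * s
    level = np.floor(ratio + rng.random(v.shape))   # randomized rounding
    return np.sign(v) * level * norm / s

class ErrorFeedbackQuantizer:
    """Keeps the accumulated quantization error and feeds it back before quantizing."""
    def __init__(self, beta=0.9, s=16):
        self.beta, self.s, self.err = beta, s, None

    def __call__(self, grad):
        if self.err is None:
            self.err = np.zeros_like(grad)
        corrected = grad + self.err                  # add the decayed residual error
        q = stochastic_quantize(corrected, self.s)
        self.err = self.beta * (corrected - q)       # beta in [0, 1] limits accumulation
        return q

quant = ErrorFeedbackQuantizer(beta=0.9, s=16)
g = rng.normal(size=10)
print("original :", np.round(g, 3))
print("quantized:", np.round(quant(g), 3))
```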
the vehicle user equipment participating in the iterative process sends the local model parameters to the base station, and the base station decodes the received gradient parameters of the vehicle user n
Figure BDA0003935211810000046
And calculate
Figure BDA0003935211810000047
And
Figure BDA0003935211810000048
the invention adopts
Figure BDA0003935211810000049
And measuring whether the vehicle equipment adopts the historical information to carry out a parameter aggregation process of the vehicle equipment, wherein the parameter aggregation process is represented as follows:
Figure BDA00039352118100000410
When this condition is satisfied, the base station aggregates the training parameters of the vehicle devices in the t-th iteration; otherwise, the aggregated historical parameters are used, expressed as:
Figure BDA00039352118100000412
where N′ denotes the set of vehicle devices transmitting parameters to the base station over the wireless channel;
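A sketch of this step-S5 aggregation follows: decoded updates are weighted by the size of each vehicle's data set, and vehicles that did not upload in round t are represented by parameters cached from earlier rounds. The test deciding when historical information may be used is shown only in the patent figures, so the simple "fresh upload available" check below is an illustrative stand-in.

```python
import numpy as np

def aggregate(received, history, K):
    """Weighted aggregation of decoded updates.

    received: {vehicle_id: decoded parameter vector} for vehicles in N' (uploaded in round t)
    history:  {vehicle_id: parameter vector cached from an earlier round}
    K:        {vehicle_id: local data-set size K_n}
    """
    used = {}
    for n in K:
        if n in received:
            used[n] = received[n]          # fresh parameters from round t
        elif n in history:
            used[n] = history[n]           # fall back to aggregated historical parameters
    total = sum(K[n] for n in used)
    global_params = sum(K[n] * used[n] for n in used) / total
    history.update(received)               # refresh the cache for the next round
    return global_params

rng = np.random.default_rng(4)
K = {0: 100, 1: 200, 2: 50}
history = {2: rng.normal(size=4)}                 # vehicle 2 only has an old update
received = {0: rng.normal(size=4), 1: rng.normal(size=4)}
print(aggregate(received, history, K))
```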
The device then uses a quantization optimizer to eliminate instability in the training process and improve training efficiency. The learning rate of vehicle device n is adjusted according to:
Figure BDA00039352118100000413
Figure BDA00039352118100000414
where p is a hyper-parameter used to adapt the learning rate.
The base station updates the global model parameters with the weighted, aggregated parameters and then broadcasts the updated global model parameters to the vehicle users for the next iteration round, which is expressed as:
Figure BDA00039352118100000416
where η denotes the learning rate; x_{nj} denotes the j-th data sample of vehicle device n and y_{nj} denotes the output of x_{nj}; the gradient is taken with respect to the local model parameters; t denotes the current iteration, i.e. the t-th interaction between the base station and the vehicle devices; i denotes the index of the local training round on the vehicle device; ω_{n,t+1}^{i} denotes the model parameters obtained by vehicle device n after the i-th local training round of the (t+1)-th iteration, and ω_{n,t+1}^{i-1} denotes the model parameters obtained after the (i-1)-th local training round of the (t+1)-th iteration; η_{t+1} denotes the learning rate used for local training during the (t+1)-th iteration; the total number of samples used by vehicle device n for local training during the (t+1)-th iteration is also defined; j indexes the samples in the data set collected by the vehicle device; and K_n denotes the number of data samples on which vehicle device n is trained.
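A sketch of the local stochastic-gradient update described above follows: starting from the broadcast global parameters, vehicle n performs a number of local gradient steps on its K_n samples with the round's learning rate η_{t+1}; the linear model and squared loss are assumed purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

def local_sgd(w_global, X, y, eta, local_steps):
    """omega^{i}_{n,t+1} = omega^{i-1}_{n,t+1} - eta_{t+1} * grad F_n(omega^{i-1}_{n,t+1})."""
    w = w_global.copy()                   # omega^{0}_{n,t+1}: the broadcast global model
    for i in range(local_steps):
        residual = X @ w - y              # squared-loss residual over the K_n samples
        grad = X.T @ residual / len(y)    # gradient of the local loss F_n
        w = w - eta * grad
    return w

K_n, d = 64, 3
X = rng.normal(size=(K_n, d))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.05 * rng.normal(size=K_n)
w_new = local_sgd(np.zeros(d), X, y, eta=0.1, local_steps=20)
print(np.round(w_new, 2))
```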
The flow of the improved FL-based efficient communication algorithm for the IoV scenario of the invention is shown in Table 1.
TABLE 1. Efficient communication method based on FL in an Internet of Vehicles environment
Figure BDA0003935211810000051
Drawings
FIG. 1 is the communication model between FL-based vehicle users and the base station in an Internet of Vehicles environment: (a) the federated-learning-based vehicle network scenario; (b) the communication process.
FIG. 2 shows the global convergence accuracy of FL under different parameter quantization algorithms.
FIG. 3 shows the global loss function of FL under different parameter quantization algorithms.
FIG. 4 shows the global convergence accuracy of FL under different quantization bit widths.
FIG. 5 shows the global loss function of FL under different quantization bit widths.
Detailed Description
The invention was simulated in Matlab 2018a. Assume a road segment of radius r = 500 m with a BS at the center of the segment and 15 randomly distributed vehicle users.
The simulation parameter settings are shown in Table 2.
Table 2. Simulation parameter settings
Figure BDA0003935211810000052
Figure BDA0003935211810000061
To verify the method, the training accuracy with unquantized parameters and with parameters quantized to 2 bits, 4 bits and 8 bits is selected as the comparison baseline, demonstrating the effectiveness of using the stochastic gradient quantization algorithm to optimize uplink communication.
FIG. 2 shows the global loss function of FL when the local parameters are quantized to 8 bits using EF-HQSGD, EF-QSGD, QSGD and RRS. It can be seen that the global loss function of FL decreases as the number of iterations increases, and that it increases slightly after local parameter quantization. Nevertheless, the EF-HQSGD proposed by the invention remains superior to EF-QSGD, QSGD and RRS, because EF-HQSGD adds error feedback, quantizer optimization and a historical-information strategy. These strategies effectively improve training efficiency and reduce the negative impact of quantization on FL performance.
FIG. 3 shows the global convergence accuracy of FL when the local parameters are quantized to 8 bits using EF-HQSGD, EF-QSGD, QSGD and RRS. As the number of iterations increases, the convergence accuracy of FL first increases and then remains stable (converges). Moreover, EF-HQSGD enables FL to converge faster and to a higher accuracy than EF-QSGD, QSGD and RRS.
FIG. 4 and FIG. 5 show the global loss function and the convergence accuracy of FL when EF-HQSGD quantizes the local parameters to 8 bits, 4 bits and 2 bits, respectively. Compared with unquantized local parameters, the global loss and convergence accuracy of FL degrade after local parameter quantization; however, when the local parameters are quantized to 8 bits, the performance loss in terms of global loss and convergence accuracy is small.

Claims (7)

1. An efficient communication method based on federated learning in an Internet of Vehicles environment, characterized by comprising the following steps:
S1, the base station performs an initialization process, and each vehicle user equipment trains a model using its local data;
S2, the base station judges whether the delay for training the local model parameters on the vehicle user equipment information collected in step S1, transmitting the local model training parameters to the base station and receiving the global model parameters, and the energy consumption for training and for transmitting the parameters to the base station, meet a preset threshold; if so, step S3 is executed, otherwise the method returns to step S1;
S3, for the vehicle users satisfying the threshold conditions of step S2, the base station allocates wireless resource blocks to the currently eligible vehicle user equipment using a shortest-path optimization algorithm;
S4, the vehicle devices allocated resource blocks quantize and encode their parameters and then transmit them to the base station;
S5, the base station decodes the received model parameters, aggregates them with weights determined by the training effect of each model, and broadcasts the aggregated, updated global model parameters to each vehicle user equipment;
and S6, the vehicle user equipment adjusts its learning rate and the method returns to step S1.
2. The efficient communication method based on federated learning in an Internet of Vehicles environment according to claim 1, wherein step S1 employs a stochastic gradient descent algorithm for local training:
Figure FDA0003935211800000011
where η denotes the learning rate of the vehicle device; x_{nj} denotes the j-th data sample of vehicle device n and y_{nj} denotes the output of x_{nj}; the gradient is taken with respect to the local model parameters; t denotes the current iteration, namely the t-th interaction between the base station and the vehicle devices; i denotes the index of the local training round on the vehicle device; ω_{n,t+1}^{i} denotes the model parameters obtained by vehicle device n after the i-th local training round of the (t+1)-th iteration; ω_{n,t+1}^{i-1} denotes the model parameters obtained after the (i-1)-th local training round of the (t+1)-th iteration; η_{t+1} denotes the learning rate used by the vehicle device for local training during the (t+1)-th iteration; the total number of samples used by vehicle device n for local training during the (t+1)-th iteration is also defined; j indexes the j-th sample in the data set collected by the vehicle device; and K_n denotes the number of data samples on which vehicle device n is trained.
3. The efficient communication method based on federated learning in an Internet of Vehicles environment according to claim 2, wherein in step S2 the base station, when interacting with the devices, obtains the effective capacitance coefficient c_n of user equipment n, the CPU frequency f_n of user equipment n, the number of CPU cycles σ_n required by user equipment n to process a parameter, and the distance between user equipment n and the base station; the mobility of each device is modeled as a random walk; at each time interval t, a vehicle device either stays at its current position or moves in one of four directions: 1) forward; 2) backward; 3) to the left; 4) to the right; the movement probabilities of each vehicle device n at time slot t are p_{nt} = [p_{nt,0}, p_{nt,1}, p_{nt,2}, p_{nt,3}, p_{nt,4}], where p_{nt,0} denotes the probability that vehicle device n is stationary at time slot t, p_{nt,1} the probability that it moves forward, p_{nt,2} the probability that it moves backward, p_{nt,3} the probability that it moves to the left, and p_{nt,4} the probability that it moves to the right; the position of each vehicle device at time slot t is expressed as φ_{nt} = [φ_{nt,1}, φ_{nt,2}], where φ_{nt,1} denotes the abscissa and φ_{nt,2} the ordinate of the position of vehicle device n relative to the base station at time slot t; assuming each device moves at speed v_n and each time slot has duration Δt, the position of the vehicle device at time t + 1 is expressed as:
Figure FDA0003935211800000021
the distance between the vehicular apparatus n and the base station is expressed as:
d_{n,t} = √(φ_{nt,1}² + φ_{nt,2}²)
obtaining the parameter transmission rates of the vehicle user equipment n and the base station, which are respectively expressed as:
Figure FDA0003935211800000023
Figure FDA0003935211800000024
where B_u and B_d respectively denote the transmission bandwidths of the uplink over which the vehicle users transmit the local model training parameters to the base station and of the downlink over which the global model parameters are sent to the vehicle users, N_0 denotes the noise power spectral density, p_n and p_B respectively denote the transmit powers of vehicle user equipment n and the base station, h_n^t denotes the channel gain between vehicle user equipment n and the base station, and Σ_{n'∈N'} p_{n'} h_{n'} and Σ_{B'≠B} p_{B'} h_{B'} denote the interference caused to the vehicle users and base stations participating in the FL algorithm by those that do not participate;
the delay for a vehicle user n to train local model parameters, transmit local model training parameters to the base station, and receive global model parameters is represented as:
Figure FDA0003935211800000032
where the terms respectively denote the local model training delay, the uplink parameter transmission delay and the downlink parameter reception delay of a vehicle user in each iteration, and the total delay is expressed as the sum of these components;
ω denotes the local model training parameters of the vehicle user, H(ω) denotes the size of the local model training parameters transmitted by the vehicle user to the base station, g denotes the global model parameters, H(g) denotes the size of the global model parameters broadcast by the base station to each vehicle user, and K_n denotes the size of the local training data set of vehicle user n;
the energy consumption of the vehicle user for training the local model and transmitting the local model training parameters to the base station in each iteration process is respectively expressed as follows:
Figure FDA0003935211800000038
the total energy consumption of each iteration process is expressed as
Figure FDA0003935211800000039
The effect of training the model in each iteration process of the vehicle equipment is represented as follows:
Figure FDA00039352118000000310
the gradient ℓ2 norm is used to represent the importance of the parameters, as follows:
Figure FDA00039352118000000311
4. The efficient communication method based on federated learning in an Internet of Vehicles environment according to claim 3, wherein step S3 comprises the following substeps:
S31, the uplink communication rate of the vehicle user equipment is expressed as:
Figure FDA0003935211800000041
where R denotes the total number of uplink resource blocks; r_{nm,t} indicates that vehicle device n uses resource block m to transmit parameters during the t-th iteration; B_u denotes the uplink bandwidth; p_{n,t} denotes the transmit power of vehicle device n; h_{n,t} denotes the channel gain between vehicle user n and the BS; N_0 denotes the noise power spectral density; and Σ_{n'∈N'} p_{n'} h_{n'} denotes the interference caused to the vehicle users and BS participating in the FL algorithm by the vehicle users and BS that do not participate;
S32, a shortest-path optimization algorithm is used to find the optimal matching between users and wireless resource blocks according to the uplink parameter transmission rates of the vehicle user equipment so as to maximize the total cost; the uplink communication rate of a vehicle device, used as the cost matrix when matching vehicle devices to resource blocks, is expressed as:
Figure FDA0003935211800000043
where r_{nm} is a binary matrix: r_{nm} = 1 means that vehicle user n is assigned radio resource block vector m, and r_{nm} = 0 otherwise;
S33, the maximum-cost problem is first converted into a minimum-cost problem and then solved with the shortest-path optimization algorithm, as follows:
First let M = max C and transform the cost matrix as c′_{nm} = M - c_{nm}; the problem then turns into:

min Σ_{n∈N} Σ_{m} c′_{nm} r_{nm}
based on the above analysis, the FL loss function in an Internet of vehicles environment is expressed as:
Figure FDA0003935211800000046
Figure FDA0003935211800000047
Figure FDA0003935211800000048
Figure FDA00039352118000000410
Figure FDA0003935211800000049
Σ_{n∈N} r_{n,t} ≤ R    (5)
where constraint (3) indicates that each resource block vector can be allocated to only one user, and constraint (4) indicates that the number of vehicle user equipments is less than or equal to the number of radio resource block vectors.
5. The efficient communication method based on federated learning in an Internet of Vehicles environment according to claim 4, wherein in step S4, when quantizing the parameters, not only the training parameters of the current model but also the error accumulated in the previous learning process are considered, and the quantization error in each iteration is expressed as:
Figure FDA0003935211800000051
the error accumulation is expressed as:
Figure FDA0003935211800000052
where β is a time decay factor used to prevent excessive accumulation of the error;
as the iterative process progresses, the error is updated in real time as follows:
Figure FDA0003935211800000053
with the addition of the error feedback mechanism, the parametric quantization function is represented as:
Figure FDA0003935211800000054
where
Figure FDA0003935211800000055
are independent random variables, expressed as:
Figure FDA0003935211800000056
where s corresponds to the number of quantization levels and l ∈ [0, s) is an integer such that
Figure FDA0003935211800000057
holds;
the quantized parameters are expressed as:
Figure FDA0003935211800000058
6. The efficient communication method based on federated learning in an Internet of Vehicles environment according to claim 5, wherein in step S5 the vehicle user equipment participating in the iterative process sends the quantized parameters to the base station, the base station decodes the received gradient parameters of each vehicle user n and computes, from the decoded parameters, a metric that determines whether the historical information of a vehicle device is used in its parameter aggregation, and the parameter aggregation process is expressed as:
Figure FDA0003935211800000061
When this condition is satisfied, the base station aggregates the training parameters of the vehicle devices in the t-th iteration; otherwise, the aggregated historical parameters are used, expressed as:
Figure FDA0003935211800000063
where N′ denotes the set of vehicle devices transmitting parameters to the base station over the wireless channel.
7. The efficient communication method based on federated learning in an Internet of Vehicles environment according to claim 6, wherein step S6 comprises the following steps:
S61, a quantization optimizer is used to eliminate instability in the training process and improve training efficiency, and the learning rate of vehicle device n is adjusted according to:
Figure FDA0003935211800000064
Figure FDA0003935211800000065
where p is a hyper-parameter used to adapt the learning rate;
and S62, the base station updates the global model parameters by using the weighted and aggregated parameters, broadcasts the updated global model parameters to the vehicle users, and returns to the step S1.
CN202211401451.7A 2022-11-09 2022-11-09 Efficient communication method based on federal learning in Internet of vehicles environment Pending CN115623445A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211401451.7A CN115623445A (en) 2022-11-09 2022-11-09 Efficient communication method based on federal learning in Internet of vehicles environment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211401451.7A CN115623445A (en) 2022-11-09 2022-11-09 Efficient communication method based on federal learning in Internet of vehicles environment

Publications (1)

Publication Number Publication Date
CN115623445A true CN115623445A (en) 2023-01-17

Family

ID=84879172

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211401451.7A Pending CN115623445A (en) 2022-11-09 2022-11-09 Efficient communication method based on federal learning in Internet of vehicles environment

Country Status (1)

Country Link
CN (1) CN115623445A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117812564A (en) * 2024-02-29 2024-04-02 湘江实验室 Federal learning method, device, equipment and medium applied to Internet of vehicles
CN117812564B (en) * 2024-02-29 2024-05-31 湘江实验室 Federal learning method, device, equipment and medium applied to Internet of vehicles

Similar Documents

Publication Publication Date Title
US20230019669A1 (en) Systems and methods for enhanced feedback for cascaded federated machine learning
Alkanhel et al. Metaheuristic Optimization of Time Series Models for Predicting Networks Traffic
CN111800828B (en) Mobile edge computing resource allocation method for ultra-dense network
CN113222179B (en) Federal learning model compression method based on model sparsification and weight quantification
CN108123828B (en) Ultra-dense network resource allocation method based on access user mobility prediction
CN113504999A (en) Scheduling and resource allocation method for high-performance hierarchical federated edge learning
CN110167176B (en) Wireless network resource allocation method based on distributed machine learning
CN114051222A (en) Wireless resource allocation and communication optimization method based on federal learning in Internet of vehicles environment
CN114885426B (en) 5G Internet of vehicles resource allocation method based on federal learning and deep Q network
Xu et al. Resource allocation algorithm based on hybrid particle swarm optimization for multiuser cognitive OFDM network
CN113395723B (en) 5G NR downlink scheduling delay optimization system based on reinforcement learning
CN113590279B (en) Task scheduling and resource allocation method for multi-core edge computing server
CN113038612B (en) Cognitive radio power control method based on deep learning
CN114745383A (en) Mobile edge calculation assisted multilayer federal learning method
CN115623445A (en) Efficient communication method based on federal learning in Internet of vehicles environment
CN110445825B (en) Super-dense network small station code cooperation caching method based on reinforcement learning
CN115796271A (en) Federal learning method based on client selection and gradient compression
Hua et al. GAN-based deep distributional reinforcement learning for resource management in network slicing
US20230199720A1 (en) Priority-based joint resource allocation method and apparatus with deep q-learning
CN114650228A (en) Federal learning scheduling method based on computation unloading in heterogeneous network
CN114219354A (en) Resource allocation optimization method and system based on federal learning
CN116321255A (en) Compression and user scheduling method for high-timeliness model in wireless federal learning
Guo et al. Predictive resource allocation with deep learning
CN108667498A (en) The available capacity optimization method of the limited lower multi-antenna transmission of feedback
Yu et al. Task delay minimization in wireless powered mobile edge computing networks: A deep reinforcement learning approach

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination