CN115941790A - Edge collaborative content caching method, device, equipment and storage medium - Google Patents

Edge collaborative content caching method, device, equipment and storage medium

Info

Publication number
CN115941790A
Authority
CN
China
Prior art keywords
service
edge
vehicle
training
component
Prior art date
Legal status
Pending
Application number
CN202211329613.0A
Other languages
Chinese (zh)
Inventor
徐思雅
迟靖烨
孟慧平
温鑫岩
郭少勇
邵苏杰
高峰
张博洋
谢波
Current Assignee
Beijing University of Posts and Telecommunications
State Grid Henan Electric Power Co Ltd
Information and Telecommunication Branch of State Grid Henan Electric Power Co Ltd
Original Assignee
Beijing University of Posts and Telecommunications
State Grid Henan Electric Power Co Ltd
Information and Telecommunication Branch of State Grid Henan Electric Power Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing University of Posts and Telecommunications, State Grid Henan Electric Power Co Ltd, Information and Telecommunication Branch of State Grid Henan Electric Power Co Ltd filed Critical Beijing University of Posts and Telecommunications
Priority to CN202211329613.0A
Publication of CN115941790A

Landscapes

  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention provides an edge collaborative content caching method, apparatus, device, and storage medium. The method includes: acquiring service preference data of each service vehicle in a collaborative cache domain; inputting the service preference data into a federal prediction model to obtain a component demand prediction result, the federal prediction model being obtained by federated learning training on historical preference data of the service vehicles within the edge nodes of the collaborative cache domain; and calculating a globally optimal pre-caching strategy based on the component demand prediction result, so that the pre-caching components of the strategy are cached at the edge nodes. By predicting vehicle service component demand with the federal prediction model, leakage of private vehicle data is effectively avoided; by computing the optimal pre-caching strategy from the demand prediction result and caching the service components at the edge nodes in advance, the cached components can be delivered to a vehicle quickly when a vehicle service request is received.

Description

Edge collaborative content caching method, device, equipment and storage medium
Technical Field
The present invention relates to the field of communications technologies, and in particular, to a method, an apparatus, a device, and a storage medium for caching edge collaborative content.
Background
With the development of technologies such as artificial intelligence and intelligent transportation, high-speed mobile services such as driving assistance place gradually increasing demands on the computational efficiency and reliability of the vehicular network. Latency-sensitive, computation-intensive services such as driving assistance need to judge and predict road conditions accurately in real time over a low-latency, safe, and reliable communication link, while also protecting user privacy.
With the rapid development of the Internet of Vehicles, vehicles generate a large number of computing services that must be processed locally to assist driving. At present, traditional centralized content caching stores all service components required by computing tasks on a cloud server. Because vehicles in the Internet of Vehicles are numerous, move along complex trajectories, require diverse services, and are latency-sensitive, high end-to-end transmission latency arises when obtaining service components, making it difficult to satisfy a large volume of mobile services. Moreover, when predicting vehicles' potential future service demands, vehicle characteristics vary across regions, and data transmission carries a risk of privacy leakage.
disclosure of Invention
The invention provides a method, a device, equipment and a storage medium for caching edge collaborative content, and aims to improve the security of data transmission while meeting the service requirements of a large number of mobile services.
The invention provides an edge collaborative content caching method, which comprises the following steps:
acquiring service preference data of each service vehicle in a collaborative cache domain;
inputting each piece of service preference data into a federal prediction model to obtain a component demand prediction result output by the federal prediction model; the federal prediction model is obtained by carrying out federated learning training based on historical preference data of each service vehicle in the edge nodes in the collaborative cache domain;
and performing reinforcement learning calculation on the component demand prediction result to obtain a global optimal pre-caching strategy so as to cache the pre-caching component corresponding to the global optimal pre-caching strategy into the edge node.
Optionally, according to the edge collaborative content caching method provided by the present invention, the federal prediction model is obtained by training based on the following steps:
selecting a plurality of training vehicles from each service vehicle in the cooperative cache domain for any edge node in the cooperative cache domain;
issuing the training models in the edge nodes and the global model parameters of the training models to the training vehicles so that each training vehicle can carry out iterative training on the training models based on the corresponding historical preference data and the global model parameters to obtain local models, and uploading the local model parameters of the local models to the edge nodes;
aggregating the local model parameters of the training vehicles to obtain aggregated model parameters;
updating the global model parameters of the training model based on the aggregated model parameters, and issuing the updated global model parameters to each target vehicle, wherein the target vehicles are the training vehicles in the edge node that uploaded their local model parameters;
and performing a new round of model training on each target vehicle based on the updated global model parameters until the training model of the edge node converges, to obtain the federal prediction model.
Optionally, according to the edge collaborative content caching method provided by the present invention, the aggregating the local model parameters of each of the training vehicles to obtain aggregated model parameters includes:
calculating a weight proportion corresponding to each training vehicle based on historical preference data corresponding to each training vehicle;
and aggregating based on the weight proportion corresponding to each training vehicle and the local model parameters to obtain the aggregated model parameters.
Optionally, according to the edge collaborative content caching method provided by the present invention, the performing reinforcement learning calculation on the component demand prediction result to obtain a global optimal pre-caching strategy includes:
setting a state space, an action space and a cache reward function of the cooperative cache domain;
counting the current vehicle density state of each edge node in the cooperative cache domain;
and calculating to obtain the global optimal pre-caching strategy by combining the state space, the action space and the caching reward function based on the component demand prediction result and the current vehicle density state.
Optionally, before the obtaining of the service preference data of each service vehicle in the collaborative cache domain, the method for caching edge collaborative content according to the present invention further includes:
constructing an edge cooperation cache domain model;
acquiring the coordinate position of each edge node in the edge cooperative cache domain model;
clustering the edge nodes based on the coordinate positions of the edge nodes to obtain a plurality of cooperative cache domains; wherein the edge node is communicatively coupled to each service vehicle within the edge node coverage area.
Optionally, according to the edge collaborative content caching method provided by the present invention, after performing reinforcement learning calculation on the component demand prediction result to obtain a global optimal pre-caching policy, so as to cache a pre-caching component corresponding to the global optimal pre-caching policy in an edge node, the method further includes:
acquiring a service request sent by a target request vehicle in the cooperative cache domain;
and delivering the request service component corresponding to the service request to the target request vehicle according to a preset delivery strategy based on each pre-cache service component stored by each edge node.
Optionally, according to the edge collaborative content caching method provided by the present invention, if the request service component is stored in the target edge node that receives the service request, the request service component in the target edge node is delivered to the target request vehicle;
if the target edge node does not store the request service component, inquiring whether other edge nodes in the cooperative cache domain except the target edge node store the request service component or not;
if so, sending a component auxiliary request to the edge node corresponding to the request service component to obtain the request service component, and delivering the request service component to the target request vehicle;
and if not, sending a service component request to the rest cooperative cache domains in the edge cooperative cache domain model to obtain the request service component, and delivering the request service component to the target request vehicle.
The invention also provides an edge collaborative content caching device, which comprises:
the acquisition module is used for acquiring the service preference data of each service vehicle in the collaborative cache domain;
the demand prediction module is used for inputting the service preference data into a federal prediction model to obtain a component demand prediction result output by the federal prediction model; the federal prediction model is obtained by carrying out federated learning training based on historical preference data of each service vehicle in the edge nodes in the collaborative cache domain;
and the computing module is used for carrying out reinforcement learning computation on the component demand prediction result to obtain a global optimal pre-caching strategy so as to cache the pre-caching component corresponding to the global optimal pre-caching strategy into the edge node.
The present invention further provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements any one of the above edge collaborative content caching methods when executing the program.
The present invention also provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements an edge collaborative content caching method as described in any of the above.
The present invention also provides a computer program product comprising a computer program which, when executed by a processor, implements the edge collaborative content caching method as described in any of the above.
According to the edge collaborative content caching method, apparatus, device, and storage medium of the invention, future vehicle service component demand is predicted by a federal prediction model obtained through federated learning training, which effectively avoids leakage of private vehicle data. Reinforcement learning computation is performed on the component demand prediction result to obtain a globally optimal pre-caching strategy, and service components are cached at each edge node in advance accordingly, so that when a vehicle service request is received, the pre-cached service components can be delivered to the vehicle quickly, comprehensively improving the service reliability and service efficiency of the Internet of Vehicles.
Drawings
In order to more clearly illustrate the present invention or the technical solutions in the prior art, the drawings used in the embodiments or the description of the prior art are briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
Fig. 1 is a schematic flow diagram of an edge collaborative content caching method according to the present invention;
fig. 2 is a schematic structural diagram of an edge collaborative content caching apparatus provided in the present invention;
fig. 3 is a schematic structural diagram of an electronic device provided in the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is obvious that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terminology used in the one or more embodiments of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the one or more embodiments of the invention. As used in one or more embodiments of the present invention, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used in one or more embodiments of the present invention refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that, although the terms first, second, etc. may be used herein to describe various information in one or more embodiments of the present invention, such information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, a first aspect may be termed a second aspect, and, similarly, a second aspect may be termed a first aspect, without departing from the scope of one or more embodiments of the present invention. The word "if" as used herein may be interpreted as "at the time of ..." or "when ...", depending on the context.
An exemplary embodiment of the present invention is described in detail below with reference to fig. 1.
Fig. 1 is a schematic flow chart of an edge collaborative content caching method provided by the present invention. As shown in fig. 1, the edge collaborative content caching method includes:
step 11, acquiring service preference data of each service vehicle in the collaborative cache domain;
it should be noted that the cooperative cache domain includes a plurality of edge nodes; the edge nodes are provided with corresponding coverage ranges, and the edge nodes are in communication connection with all the service vehicles within the coverage ranges of the edge nodes. The service preference data includes data such as frequency of requests by the service vehicle for different services. Specifically, the service preference data of each service vehicle in each edge node is acquired by using each edge node in the cooperative cache domain.
Step 12, inputting the service preference data into a federal prediction model to obtain a component demand prediction result output by the federal prediction model; the federated prediction model is obtained by carrying out federated learning training based on historical preference data of each service vehicle in the edge nodes in the collaborative cache domain;
specifically, each service preference data in the edge node is input into a federal prediction model corresponding to the edge node, so as to determine a future component demand prediction result in the cooperation cache domain according to an output result of the federal prediction model, wherein the federal prediction model is obtained by performing federal learning training based on historical preference data of each service vehicle in the edge node in the cooperation cache domain, and the trained federal prediction model can predict future service component demands in the cooperation cache domain based on the service preference data of the current service vehicle, so that the leakage of vehicle privacy data can be effectively avoided, and the security of the privacy data is improved.
And step 13, performing reinforcement learning calculation on the component demand prediction result to obtain a global optimal pre-caching strategy, so as to cache the pre-caching component corresponding to the global optimal pre-caching strategy into the edge node.
It should be noted that a pre-caching component is a service component that needs to be cached at an edge node, and the globally optimal pre-caching strategy specifies each pre-caching component and the edge node it corresponds to. The reinforcement learning calculation computes the optimal pre-caching strategy based on the PPO (Proximal Policy Optimization) algorithm.
Specifically, the state space, action space, and cache reward function of the collaborative cache domain are defined. The action space comprises caching actions (which service components to cache) and placement actions (which edge nodes in the collaborative cache domain to place them in). The current vehicle density state of each edge node in the domain is then counted, and, based on the component demand prediction result and the current vehicle density state, the benefit of caching different service components is calculated through the cache reward function in combination with the state space and action space, from which the globally optimal pre-caching strategy is finally derived.
In this embodiment, future vehicle service component demand is predicted by the federal prediction model obtained through federated learning training, effectively avoiding leakage of private vehicle data; reinforcement learning computation on the component demand prediction result yields the globally optimal pre-caching strategy, according to which service components are cached at each edge node in advance, so that pre-cached service components can be delivered to a vehicle quickly when its service request is received, comprehensively improving the service reliability and efficiency of the Internet of Vehicles.
In one embodiment of the invention, the federal prediction model is obtained by training based on the following steps:
For each edge node in the collaborative cache domain: selecting a plurality of training vehicles from the service vehicles in the domain; issuing the training model in the edge node and its global model parameters to the training vehicles, so that each training vehicle iteratively trains the model on its own historical preference data and the global model parameters to obtain a local model, and uploads the local model parameters to the edge node; aggregating the local model parameters of the training vehicles to obtain aggregated model parameters; updating the global model parameters based on the aggregated model parameters and issuing the updated parameters to each target vehicle, the target vehicles being the training vehicles in the edge node that uploaded their local model parameters; and performing a new round of training on each target vehicle with the updated global model parameters until the training model of the edge node converges, yielding the federal prediction model.
Specifically, the following steps are executed for any edge node in the cooperative cache domain:
First, a plurality of training vehicles are selected from the service vehicles in the collaborative cache domain; preferably, service vehicles with stronger computing capability and more stable channels are chosen to participate in the federated learning training, and the number of training vehicles can be set according to actual conditions. The training model in the edge node and its global model parameters are then issued to the training vehicles, and each training vehicle performs local iterative training on the model with its own historical preference data (such as the frequency of its requests for different services) and the issued global parameters. The local iterative training proceeds as follows: the historical preference data is input into the training model to obtain a prediction result; a model loss value for the current iteration is computed from the prediction result with a preset loss function, such as an L1 loss or a dice loss; the global model parameters in the training model are then updated by error back-propagation, ending the current round of training before the next begins.
During training, it is judged whether the updated training model satisfies a preset training-end condition, such as loss convergence or reaching a maximum iteration threshold. If so, the updated training model is taken as the local model of the corresponding training vehicle; if not, training continues.
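The patent does not specify the model architecture or loss; as a much-simplified stand-in, the sketch below fits a linear demand predictor by gradient descent on a squared loss (instead of the L1/dice losses mentioned), stopping on loss convergence or an iteration cap. All names and hyperparameters are illustrative.

```python
import random

def local_train(weights, data, lr=0.5, max_iters=500, tol=1e-12):
    # Simplified local iterative training on one vehicle: a linear predictor
    # of component demand, updated by gradient descent on a squared loss.
    # Stops on loss convergence or a maximum-iteration threshold, i.e. the
    # "preset training-end conditions" of the patent.
    prev_loss = float("inf")
    loss = 0.0
    for _ in range(max_iters):
        grad = [0.0] * len(weights)
        loss = 0.0
        for features, demand in data:
            pred = sum(w * x for w, x in zip(weights, features))
            err = pred - demand
            loss += err * err / len(data)
            for j, x in enumerate(features):
                grad[j] += 2 * err * x / len(data)
        if abs(prev_loss - loss) < tol:  # loss convergence
            break
        prev_loss = loss
        weights = [w - lr * g for w, g in zip(weights, grad)]
    return weights, loss

# Synthetic historical preference data: feature vector -> component demand.
random.seed(0)
data = []
for _ in range(50):
    x = [random.random(), random.random()]
    data.append((x, 0.7 * x[0] - 0.2 * x[1]))

local_w, final_loss = local_train([0.0, 0.0], data)
```

On this noiseless synthetic data the fitted weights recover the generating coefficients, and `local_w` would be the local model parameters uploaded to the edge node.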
Further, the target vehicles, i.e., those able to finish training within a predetermined time, are screened out (a training vehicle that cannot finish on time does not participate in the next round), and the local model parameters of each target vehicle's local model are uploaded to the edge node. The local model parameters of the target vehicles are then aggregated into aggregated model parameters. As one implementable embodiment, the aggregated parameters are the average of the target vehicles' local model parameters. As another implementable embodiment, aggregation uses the FedAvg algorithm: determine the amount of historical preference data each target vehicle used to train its local model, count the total data volume used by all target vehicles, compute each vehicle's proportion of the total, take that proportion as the weight of the vehicle's local model parameters, and aggregate the weighted local parameters into the aggregated model parameters. Because this accounts for the differing data contributions of the target vehicles in the collaborative cache domain, it effectively improves the accuracy of model prediction.
Furthermore, based on the aggregated model parameters, global model parameters of the training model are updated, and the updated global model parameters are issued to each target vehicle, so that each target training vehicle performs a new round of model training based on the updated global model parameters until the training model of the edge node converges, and the federal prediction model is obtained.
In this embodiment, vehicles with high computing power and stable channels in the collaborative cache domain are selected to participate in federated learning; vehicles that finish training on time are screened out during training to continue; and weight proportions are computed from the data volume each vehicle used in training, so that model parameter aggregation reflects the differing data contributions of the target vehicles in the domain, improving the accuracy with which the model predicts vehicle service component demand.
In one embodiment of the present invention, step 13, performing reinforcement learning calculation on the component demand prediction result to obtain a globally optimal pre-caching strategy, includes:
setting a state space, an action space and a cache reward function of the cooperative cache domain; counting the current vehicle density state of each edge node in the cooperative cache domain; and calculating to obtain the global optimal pre-caching strategy based on the component demand prediction result and the current vehicle density state by combining the state space, the action space and the cache reward function.
It should be noted that the action space is expressed as A(t) = {a_cache(t), a_put(t)}, where a_cache(t) denotes the caching action at time t and a_put(t) denotes the placement action at time t.
The state space is expressed as S(t) = {m(t), p(t), v(t)}, where m(t) represents the current storage space state of the edge nodes in the collaborative cache domain at time t, p(t) represents the component demand prediction result of the edge nodes at time t, and v(t) represents the current vehicle density state of the edge nodes at time t; under normal conditions, the vehicle density state of an edge node follows a Poisson distribution.
The cache reward function R(t) is determined by d_cache(t), the latency cost of caching service components, and p_put(t), the penalty of placing service components in edge nodes.
Specifically, the state space, action space, and cache reward function of the collaborative cache domain are defined, and the current vehicle density state of each edge node in the domain is counted. Based on the component demand prediction result and the current vehicle density state, the expected benefit of caching different service components is calculated in combination with the state space, action space, and cache reward function; pre-caching strategies are thereby evaluated, and the globally optimal pre-caching strategy is obtained.
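The patent specifies a PPO-based computation. As a much-simplified greedy stand-in (not PPO), the sketch below scores candidate (component, edge node) placements with an illustrative reward — predicted demand weighted by vehicle density, minus a placement penalty — and fills each node's storage with the best-scoring components. The reward form, `place_cost`, and all names are assumptions for illustration only.

```python
def plan_precache(demand, density, capacity, place_cost=0.2):
    # Greedy stand-in for the patent's PPO computation: rank placements by
    # an assumed cache reward (predicted demand x vehicle density minus a
    # placement penalty) and cache the best components at each edge node.
    #   demand[n][c]: predicted demand for component c at edge node n
    #   density[n]:   current vehicle density state at edge node n
    #   capacity:     number of components each node can store
    actions = []
    for n, node_demand in enumerate(demand):
        scored = sorted(
            ((d * density[n] - place_cost, c) for c, d in enumerate(node_demand)),
            reverse=True,
        )
        # Keep only placements whose reward is positive (worth caching).
        cached = [c for r, c in scored[:capacity] if r > 0]
        actions.append(cached)
    return actions

demand = [[0.9, 0.1, 0.5], [0.2, 0.8, 0.05]]
density = [1.0, 2.0]
plan = plan_precache(demand, density, capacity=2)
# plan == [[0, 2], [1, 0]]
```

A PPO agent would instead learn a policy over these caching and placement actions by optimizing a clipped surrogate objective over many interactions with the domain state.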
In this embodiment, the expected benefit of caching different service components is calculated from the component demand prediction result and the current vehicle density state; pre-caching strategies are thereby evaluated and the globally optimal one obtained, and service components are cached into the edge nodes accordingly, comprehensively improving the resource utilization and service reliability of the Internet of Vehicles.
In one embodiment of the invention, before step 11, acquiring the service preference data of each service vehicle in the collaborative cache domain, the method further includes:
constructing an edge cooperation cache domain model; acquiring the coordinate position of each edge node in the edge cooperative cache domain model; clustering the edge nodes based on the coordinate positions of the edge nodes to obtain a plurality of cooperative cache domains; wherein the edge node is communicatively coupled to each service vehicle within the edge node coverage area.
It should be noted that the edge collaborative cache domain model is formed from the edge nodes. The coordinate position may be an edge node's longitude and latitude, or a coordinate system may be constructed with a preset origin, with the x-axis and y-axis directions set according to actual conditions, for example due east as the positive x direction and due north as the positive y direction, and each edge node's coordinate position computed in that system.
Specifically, the edge collaborative cache domain model is first constructed and the coordinate position of each edge node in it is obtained; the edge nodes are then clustered by coordinate position into a plurality of collaborative cache domains. As one implementable embodiment, the distance between edge nodes is computed from their coordinate positions, and edge nodes whose pairwise distance does not exceed a first preset distance threshold are aggregated into one class, forming a collaborative cache domain.
As another possible implementation, a plurality of initial nodes may be selected from the edge nodes, the number of initial nodes determining the number of collaborative cache domains. The distance between each edge node and each initial node is computed; the edge nodes whose distance to an initial node does not exceed a second preset distance threshold are determined; the node distances among those edge nodes are computed; and the edge nodes whose node distance does not exceed a third preset distance threshold are clustered with the initial node to form one collaborative cache domain. In addition, to avoid an excessive number of edge nodes in one domain, an upper-limit threshold on the number of edge nodes per domain can be preset, so that nearby edge nodes are aggregated into a domain only up to that threshold.
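The first distance-threshold embodiment, combined with the per-domain size cap, can be sketched roughly as follows; the greedy seed choice, threshold value, and cap are illustrative assumptions, not the patent's exact procedure.

```python
import math

def cluster_domains(coords, threshold, max_size):
    # Threshold clustering of edge nodes into cooperative cache domains:
    # grow a domain from each still-unassigned node, adding nodes within
    # `threshold` distance of it, up to `max_size` nodes per domain.
    unassigned = set(range(len(coords)))
    domains = []
    while unassigned:
        seed = min(unassigned)  # deterministic seed choice (assumption)
        domain = [seed]
        unassigned.remove(seed)
        for n in sorted(unassigned):
            if len(domain) >= max_size:
                break
            if math.dist(coords[seed], coords[n]) <= threshold:
                domain.append(n)
        for n in domain[1:]:
            unassigned.remove(n)
        domains.append(domain)
    return domains

# Five edge nodes: three close together, two far away.
coords = [(0, 0), (1, 0), (0, 1), (10, 10), (11, 10)]
domains = cluster_domains(coords, threshold=2.0, max_size=3)
# domains == [[0, 1, 2], [3, 4]]
```

A domain head node could then be picked from each resulting cluster, randomly or by bandwidth, storage, and vehicle density as the text describes next.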
In addition, after the plurality of cooperative cache domains are formed, one node is selected from the edge nodes in each domain as the domain head node. The selection may be random or may be based on parameter information such as the bandwidth, storage space, and vehicle density of the edge nodes.
According to the scheme, the edge nodes are dynamically clustered into the cooperative cache domains, so that services are provided for vehicles in different cooperative cache domains, and the service capacity and stability of the cooperative cache domains are balanced.
In an embodiment of the present invention, after the reinforcement learning calculation is performed on the component demand prediction result to obtain the global optimal pre-caching policy, so that the pre-caching components corresponding to the global optimal pre-caching policy are cached in the edge nodes, the method further includes:
acquiring a service request sent by a target requesting vehicle in the cooperative cache domain; and delivering the requested service component corresponding to the service request to the target requesting vehicle according to a preset delivery policy, based on the pre-cached service components stored by each edge node.
The delivering, based on each pre-cached service component stored by each edge node, a requested service component corresponding to the service request to the target requesting vehicle according to a preset delivery policy includes:
if the target edge node that receives the service request stores the requested service component, delivering the requested service component in the target edge node to the target requesting vehicle; if the target edge node does not store the requested service component, querying whether any other edge node in the cooperative cache domain stores the requested service component; if so, sending a component auxiliary request to the edge node storing the requested service component to obtain it, and delivering the requested service component to the target requesting vehicle; and if not, sending a service component request to the remaining cooperative cache domains in the edge collaborative cache domain model to obtain the requested service component, and delivering it to the target requesting vehicle.
It should be noted that the service request specifies the service component being requested. Specifically, when a target edge node receives a service request sent by a target requesting vehicle within its coverage, the requested service component must be delivered to the target requesting vehicle, and the delivery process falls into one of three cases:
in the first case: and each pre-caching service component in the target edge node comprises the request service component, and the request service component is delivered to the target request vehicle directly through the target edge node.
In the second case: the pre-cached service components in the target edge node do not include the requested service component, but another edge node in the cooperative cache domain stores it. The target edge node sends a component auxiliary request to the domain head node of the cooperative cache domain, which forwards the request to the edge node storing the requested service component; the requested service component is thereby obtained and delivered to the target requesting vehicle.
In the third case: neither the target edge node nor any other edge node in the cooperative cache domain stores the requested service component. The target edge node sends a service component request to the domain head node of the cooperative cache domain, which forwards it to the remaining cooperative cache domains in the edge collaborative cache domain model or to a preset central server (the central server is communicatively connected to the domain head node of each cooperative cache domain). The requested service component is thereby obtained and delivered to the target requesting vehicle.
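The three delivery cases above can be sketched as a lookup cascade. This is an illustrative abstraction: the dictionary-based node representation and function name are hypothetical, and the domain-head message forwarding is collapsed into direct membership checks.

```python
def deliver_component(component, target_node, domain_nodes, other_domains, central_server):
    """Resolve a requested service component via the three-tier lookup:
    the local cache, intra-domain peers, then other domains or the
    central server. Each node is a dict with a "cache" set."""
    # Case 1: the target edge node already caches the requested component.
    if component in target_node["cache"]:
        return ("local", component)
    # Case 2: another node in the same cooperative cache domain stores it
    # (in the scheme, reached via a component auxiliary request through
    # the domain head node).
    for node in domain_nodes:
        if component in node["cache"]:
            return ("intra-domain", component)
    # Case 3: fall back to the remaining cooperative cache domains, and
    # finally to the preset central server.
    for domain in other_domains:
        for node in domain:
            if component in node["cache"]:
                return ("inter-domain", component)
    if component in central_server:
        return ("central-server", component)
    return (None, None)
```

A request served locally, from a domain peer, or from the central server returns a different tier label, which makes the cascade easy to trace.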
According to the embodiment of the present invention, based on the service request sent by the target requesting vehicle, the requested service component corresponding to the service request can be searched for among the edge nodes within a cooperative cache domain and across cooperative cache domains, which improves the efficiency of the edge caching service and comprehensively improves the resource utilization and service reliability of the Internet of Vehicles.
The following describes the edge collaborative content caching apparatus provided by the present invention; the edge collaborative content caching apparatus described below and the edge collaborative content caching method described above correspond to each other and may be cross-referenced.
Fig. 2 is a schematic structural diagram of an edge collaborative content caching apparatus provided by the present invention, and as shown in fig. 2, an edge collaborative content caching apparatus according to an embodiment of the present invention includes:
the acquiring module 21 is configured to acquire service preference data of each service vehicle in the collaborative cache domain;
the demand forecasting module 22 is used for inputting the business preference data into a federal forecasting model to obtain a component demand forecasting result output by the federal forecasting model; the federated prediction model is obtained by carrying out federated learning training based on historical preference data of each service vehicle in the edge nodes in the collaborative cache domain;
the calculating module 23 is configured to perform reinforcement learning calculation on the component demand prediction result to obtain a global optimal pre-caching strategy, so as to cache the pre-caching component corresponding to the global optimal pre-caching strategy into the edge node.
The edge collaborative content caching apparatus is further configured to:
selecting a plurality of training vehicles from each service vehicle in the cooperative cache domain for any edge node in the cooperative cache domain;
issuing the training model in the edge node and the global model parameters of the training model to the training vehicles, so that each training vehicle iteratively trains the training model based on its corresponding historical preference data and the global model parameters to obtain a local model, and uploads the local model parameters of the local model to the edge node;
aggregating the local model parameters of the training vehicles to obtain aggregated model parameters;
updating the global model parameters of the training model based on the aggregated model parameters, and issuing the updated global model parameters to each target training vehicle, wherein the target training vehicles are the training vehicles that uploaded their local model parameters to the edge node;
and performing a new round of model training by each target training vehicle based on the updated global model parameters, until the training model of the edge node converges, so as to obtain the federal prediction model.
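Under the simplifying assumption of a scalar least-squares model, one federated round of the steps above (issue global parameters, train locally on private preference data, upload local parameters, aggregate) might look like the following sketch; the learning rate, epoch count, selection size, and unweighted averaging are illustrative choices, not values fixed by the disclosure.

```python
import random

def local_train(global_params, data, lr=0.1, epochs=3):
    """One training vehicle's local update: a few gradient steps of a
    scalar least-squares model on its private preference data."""
    w = global_params
    for _ in range(epochs):
        for x, y in data:
            w -= lr * (w * x - y) * x  # gradient of 0.5 * (w*x - y)^2
    return w

def federated_round(global_params, vehicle_data, num_selected=2):
    """One federated round at an edge node: select training vehicles,
    train locally, then average the uploaded local parameters to
    obtain the updated global parameters."""
    selected = random.sample(list(vehicle_data), num_selected)
    local_params = [local_train(global_params, vehicle_data[v]) for v in selected]
    return sum(local_params) / len(local_params)
```

Repeating `federated_round` until the global parameter stops changing mirrors training until the edge node's model converges.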
The edge collaborative content caching apparatus is further configured to:
calculating the weight proportion corresponding to each training vehicle based on the historical preference data corresponding to each training vehicle;
and aggregating based on the weight proportion corresponding to each training vehicle and the local model parameters to obtain the aggregated model parameters.
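The data-size weighting described above (each training vehicle's contribution scaled by its share of the historical preference data, in the style of FedAvg) can be sketched as follows; representing the model parameters as scalars is a simplification for illustration.

```python
def weighted_aggregate(local_params, data_sizes):
    """FedAvg-style aggregation: weight each training vehicle's local
    model parameters by its share of the historical preference data.
    `local_params` and `data_sizes` are keyed by vehicle id."""
    total = sum(data_sizes.values())
    return sum((n / total) * local_params[v] for v, n in data_sizes.items())
```

For instance, a vehicle holding three times as much historical data contributes three times the weight to the aggregated model parameters.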
The calculation module 23 is further configured to:
defining a state space, an action space and a cache reward function of the cooperative cache domain;
counting the current vehicle density state of each edge node in the cooperative cache domain;
and calculating to obtain the global optimal pre-caching strategy based on the component demand prediction result and the current vehicle density state by combining the state space, the action space and the cache reward function.
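As one hedged sketch of the reinforcement learning step, tabular Q-learning over a state (combining the component demand prediction and the vehicle density) and caching actions could look as follows; the state encoding, reward values, and hyperparameters are illustrative assumptions, since the disclosure defines the state space, action space, and cache reward function abstractly.

```python
import random

def choose_precache_action(q_table, state, actions, epsilon=0.1):
    """Epsilon-greedy selection over candidate pre-caching actions."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: q_table.get((state, a), 0.0))

def q_update(q_table, state, action, reward, next_state, actions,
             alpha=0.5, gamma=0.9):
    """Standard Q-learning update driven by the cache reward function."""
    best_next = max(q_table.get((next_state, a), 0.0) for a in actions)
    old = q_table.get((state, action), 0.0)
    q_table[(state, action)] = old + alpha * (reward + gamma * best_next - old)
```

Iterating these two steps over the observed demand and density states drives the Q-table toward the action (component placement) with the highest expected caching reward, i.e. the global optimal pre-caching strategy in this toy setting.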
The edge collaborative content caching apparatus is further configured to:
constructing an edge cooperation cache domain model;
acquiring the coordinate position of each edge node in the edge cooperation cache domain model;
clustering the edge nodes based on the coordinate positions of the edge nodes to obtain a plurality of cooperative cache domains; wherein the edge node is communicatively coupled to each service vehicle within the edge node coverage area.
The edge collaborative content caching apparatus is further configured to:
acquiring a service request sent by a target request vehicle in the cooperative cache domain;
and delivering the requested service component corresponding to the service request to the target requesting vehicle according to a preset delivery policy, based on the pre-cached service components stored by each edge node.
The edge collaborative content caching apparatus is further configured to:
if the target edge node that receives the service request stores the requested service component, delivering the requested service component in the target edge node to the target requesting vehicle;
if the target edge node does not store the requested service component, querying whether any other edge node in the cooperative cache domain stores the requested service component;
if so, sending a component auxiliary request to the edge node storing the requested service component to obtain it, and delivering the requested service component to the target requesting vehicle;
and if not, sending a service component request to the remaining cooperative cache domains in the edge collaborative cache domain model to obtain the requested service component, and delivering it to the target requesting vehicle.
It should be noted that, the apparatus provided in the embodiment of the present invention can implement all the method steps implemented by the method embodiment and achieve the same technical effect, and detailed descriptions of the same parts and beneficial effects as the method embodiment in this embodiment are omitted here.
Fig. 3 is a schematic structural diagram of an electronic device provided in the present invention, and as shown in fig. 3, the electronic device may include: a processor (processor) 310, a memory (memory) 320, a communication Interface (Communications Interface) 330 and a communication bus 340, wherein the processor 310, the memory 320 and the communication Interface 330 communicate with each other via the communication bus 340. The processor 310 may call logic instructions in the memory 320 to perform an edge collaborative content caching method comprising: acquiring service preference data of each service vehicle in a collaborative cache domain; inputting each business preference data into a federal prediction model to obtain a component demand prediction result output by the federal prediction model; the federated prediction model is obtained by carrying out federated learning training based on historical preference data of each service vehicle in the edge nodes in the collaborative cache domain; and performing reinforcement learning calculation on the component demand prediction result to obtain a global optimal pre-caching strategy so as to cache the pre-caching component corresponding to the global optimal pre-caching strategy into the edge node.
In addition, the logic instructions in the memory 320 may be implemented in the form of software functional units and stored in a computer-readable storage medium when sold or used as a stand-alone product. Based on this understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a U-disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
In yet another aspect, the present invention also provides a non-transitory computer-readable storage medium having stored thereon a computer program, which when executed by a processor is implemented to perform the edge collaborative content caching method provided by the above methods, the method including: acquiring service preference data of each service vehicle in a collaborative cache domain; inputting each service preference data into a federal prediction model to obtain a component demand prediction result output by the federal prediction model; the federated prediction model is obtained by performing federated learning training based on historical preference data of each service vehicle in the edge node in the collaborative cache domain; and performing reinforcement learning calculation on the component demand prediction result to obtain a global optimal pre-caching strategy so as to cache the pre-caching component corresponding to the global optimal pre-caching strategy into the edge node.
In another aspect, the present invention further provides a computer program product, the computer program product including a computer program, the computer program being stored on a non-transitory computer-readable storage medium, wherein when the computer program is executed by a processor, the computer is capable of executing the edge collaborative content caching method provided by the above methods, and the method includes: acquiring service preference data of each service vehicle in a collaborative cache domain; inputting each service preference data into a federal prediction model to obtain a component demand prediction result output by the federal prediction model; the federated prediction model is obtained by carrying out federated learning training based on historical preference data of each service vehicle in the edge nodes in the collaborative cache domain; and performing reinforcement learning calculation on the component demand prediction result to obtain a global optimal pre-caching strategy so as to cache the pre-caching component corresponding to the global optimal pre-caching strategy into the edge node.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. An edge collaborative content caching method, comprising:
acquiring service preference data of each service vehicle in a collaborative cache domain;
inputting each service preference data into a federal prediction model to obtain a component demand prediction result output by the federal prediction model; the federated prediction model is obtained by performing federated learning training based on historical preference data of each service vehicle in the edge node in the collaborative cache domain;
and performing reinforcement learning calculation on the component demand prediction result to obtain a global optimal pre-caching strategy so as to cache the pre-caching component corresponding to the global optimal pre-caching strategy into the edge node.
2. The edge collaborative content caching method according to claim 1, wherein the federal prediction model is obtained by training based on the following steps:
selecting a plurality of training vehicles from each service vehicle in the cooperative cache domain for any edge node in the cooperative cache domain;
issuing the training model in the edge node and the global model parameters of the training model to the training vehicles, so that each training vehicle iteratively trains the training model based on its corresponding historical preference data and the global model parameters to obtain a local model, and uploads the local model parameters of the local model to the edge node;
aggregating the local model parameters of the training vehicles to obtain aggregated model parameters;
updating the global model parameters of the training model based on the aggregated model parameters, and issuing the updated global model parameters to each target training vehicle, wherein the target training vehicles are the training vehicles that uploaded their local model parameters to the edge node;
and performing a new round of model training by each target training vehicle based on the updated global model parameters, until the training model of the edge node converges, so as to obtain the federal prediction model.
3. The edge collaborative content caching method according to claim 2, wherein the aggregating the local model parameters of each of the trained vehicles to obtain an aggregated model parameter includes:
calculating a weight proportion corresponding to each training vehicle based on historical preference data corresponding to each training vehicle;
and aggregating based on the weight proportion corresponding to each training vehicle and the local model parameters to obtain the aggregated model parameters.
4. The edge collaborative content caching method according to claim 1, wherein the performing reinforcement learning calculation on the component demand prediction result to obtain a global optimal pre-caching strategy comprises:
defining a state space, an action space and a cache reward function of the cooperative cache domain;
counting the current vehicle density state of each edge node in the cooperative cache domain;
and calculating to obtain the global optimal pre-caching strategy based on the component demand prediction result and the current vehicle density state by combining the state space, the action space and the cache reward function.
5. The edge collaborative content caching method according to claim 1, further comprising, before the obtaining of the business preference data of each service vehicle in the collaborative caching domain:
constructing an edge cooperation cache domain model;
acquiring the coordinate position of each edge node in the edge cooperative cache domain model;
clustering the edge nodes based on the coordinate positions of the edge nodes to obtain a plurality of cooperative cache domains; wherein the edge node is communicatively coupled to each service vehicle within the edge node coverage area.
6. The edge collaborative content caching method according to claim 5, wherein after performing reinforcement learning calculation on the component demand prediction result to obtain a global optimal pre-caching strategy so as to cache a pre-caching component corresponding to the global optimal pre-caching strategy into an edge node, the method further comprises:
acquiring a service request sent by a target request vehicle in the cooperative cache domain;
and delivering the requested service component corresponding to the service request to the target requesting vehicle according to a preset delivery policy, based on the pre-cached service components stored by each edge node.
7. The edge collaborative content caching method according to claim 6, wherein the delivering a requested service component corresponding to the service request to the target requesting vehicle according to a preset delivery policy based on each pre-cached service component stored by each edge node comprises:
if the target edge node that receives the service request stores the requested service component, delivering the requested service component in the target edge node to the target requesting vehicle;
if the target edge node does not store the requested service component, querying whether any other edge node in the cooperative cache domain stores the requested service component;
if so, sending a component auxiliary request to the edge node storing the requested service component to obtain it, and delivering the requested service component to the target requesting vehicle;
and if not, sending a service component request to the remaining cooperative cache domains in the edge collaborative cache domain model to obtain the requested service component, and delivering it to the target requesting vehicle.
8. An edge collaborative content caching apparatus, comprising:
the acquisition module is used for acquiring the service preference data of each service vehicle in the collaborative cache domain;
the demand prediction module is used for inputting the business preference data into a federal prediction model to obtain a component demand prediction result output by the federal prediction model; the federated prediction model is obtained by performing federated learning training based on historical preference data of each service vehicle in the edge node in the collaborative cache domain;
and the computing module is used for carrying out reinforcement learning computation on the component demand prediction result to obtain a global optimal pre-caching strategy so as to cache the pre-caching component corresponding to the global optimal pre-caching strategy into the edge node.
9. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the edge collaborative content caching method according to any one of claims 1 to 7 when executing the program.
10. A non-transitory computer readable storage medium having stored thereon a computer program, wherein the computer program, when executed by a processor, implements the edge collaborative content caching method according to any one of claims 1 to 7.
CN202211329613.0A 2022-10-27 2022-10-27 Edge collaborative content caching method, device, equipment and storage medium Pending CN115941790A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211329613.0A CN115941790A (en) 2022-10-27 2022-10-27 Edge collaborative content caching method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211329613.0A CN115941790A (en) 2022-10-27 2022-10-27 Edge collaborative content caching method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115941790A true CN115941790A (en) 2023-04-07

Family

ID=86698272

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211329613.0A Pending CN115941790A (en) 2022-10-27 2022-10-27 Edge collaborative content caching method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115941790A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116915781A (en) * 2023-09-14 2023-10-20 南京邮电大学 Edge collaborative caching system and method based on blockchain
CN117156008A (en) * 2023-09-14 2023-12-01 北京宝联之星科技股份有限公司 Data caching method and system of edge node and readable storage medium
CN116915781B (en) * 2023-09-14 2023-12-12 南京邮电大学 Edge collaborative caching system and method based on blockchain
CN117156008B (en) * 2023-09-14 2024-03-22 北京宝联之星科技股份有限公司 Data caching method and system of edge node and readable storage medium
CN117713382A (en) * 2024-02-04 2024-03-15 北京中电飞华通信有限公司 Distributed power service providing method and distributed power service system
CN117713382B (en) * 2024-02-04 2024-05-07 北京中电飞华通信有限公司 Distributed power service providing method and distributed power service system

Similar Documents

Publication Publication Date Title
CN115941790A (en) Edge collaborative content caching method, device, equipment and storage medium
CN109391681B (en) MEC-based V2X mobility prediction and content caching offloading scheme
CN114116198A (en) Asynchronous federal learning method, system, equipment and terminal for mobile vehicle
CN113435472A (en) Vehicle-mounted computing power network user demand prediction method, system, device and medium
CN114205782B (en) Optimal time delay caching and routing method, device and system based on cloud edge cooperation
CN113315978A (en) Collaborative online video edge caching method based on federal learning
CN114546608A (en) Task scheduling method based on edge calculation
CN114978908A (en) Computing power network node evaluation and operation method and device
CN111770152B (en) Edge data management method, medium, edge server and system
CN109948803A (en) Algorithm model optimization method, device and equipment
CN111510473B (en) Access request processing method and device, electronic equipment and computer readable medium
CN116112981A (en) Unmanned aerial vehicle task unloading method based on edge calculation
CN114979145B (en) Content distribution method integrating sensing, communication and caching in Internet of vehicles
CN116228206A (en) Data center operation and maintenance management method and device, electronic equipment and operation and maintenance management system
CN116405493A (en) Edge cloud collaborative task unloading method based on MOGWO strategy
CN113596138B (en) Heterogeneous information center network cache allocation method based on deep reinforcement learning
CN116260821A (en) Distributed parallel computing unloading method based on deep reinforcement learning and blockchain
CN108770014A (en) Calculating appraisal procedure, system, device and the readable storage medium storing program for executing of network server
CN114401192A (en) Multi-SDN controller collaborative training method
Khanal et al. Proactive content caching at self-driving car using federated learning with edge cloud
CN117713382B (en) Distributed power service providing method and distributed power service system
Bai et al. A new cooperative cache optimization algorithm for Internet of vehicles based on edge cloud network
CN115696296B (en) Active edge caching method based on community discovery and weighted federation learning
Chauhan et al. D2PG: Deep Deterministic Policy Gradient-Based Vehicular Edge Caching Scheme for Digital Twin-Based Vehicular Networks
CN115051999B (en) Energy consumption optimal task unloading method, device and system based on cloud edge cooperation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination