CN116566838A - Internet of vehicles task unloading and content caching method with cooperative blockchain and edge calculation


Info

Publication number: CN116566838A
Application number: CN202310700813.0A
Authority: CN (China)
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 李云, 陈振涵, 鲜永菊, 左琳立
Applicant and current assignee: Chongqing University of Posts and Telecommunications

Classifications

    • H04L 41/145 — Network analysis or design involving simulating, designing, planning or modelling of a network
    • H04L 41/142 — Network analysis or design using statistical or mathematical methods
    • H04L 67/10 — Protocols in which an application is distributed across nodes in the network
    • H04L 67/12 — Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks


Abstract

The invention belongs to the technical field of mobile communication, and particularly relates to an Internet of vehicles task unloading and content caching method with cooperative blockchain and edge calculation. The method comprises the following steps: constructing a vehicle networking system model; constructing a task unloading decision model, a content caching decision model and a blockchain model based on the vehicle networking system model; constructing the problem of minimizing the system energy consumption and overhead according to the task unloading decision model, the content caching decision model and the blockchain model; and solving the problem of minimizing the system energy consumption and overhead by adopting the A3C algorithm to obtain the Internet of vehicles task unloading and content caching scheme. The invention has remarkable advantages in reducing the system time delay, the system energy consumption and the calculation cost and improving the service quality of users, while having lower complexity.

Description

Internet of vehicles task unloading and content caching method with cooperative blockchain and edge calculation
Technical Field
The invention belongs to the technical field of mobile communication, and particularly relates to a method for unloading tasks and caching contents of a vehicle networking with cooperative blockchain and edge calculation.
Background
With the rapid development of mobile communication technology, intelligent transportation systems (ITS) are receiving global attention. As an emerging industry in intelligent transportation and smart city development, the Internet of vehicles is an important field of development for many countries due to its potential value. Meanwhile, the rapid development of the Internet of vehicles has promoted the rise of a large number of intelligent vehicle-mounted applications. Through On Board Units (OBUs) mounted on vehicles, different types of applications may be provided for vehicle users, covering information services, driving safety, traffic management, etc., such as road traffic warnings, autonomous driving, charging information feedback, path planning, and entertainment provision (e.g., online video, social networking, etc.).
On the technical level, the Internet of vehicles realizes the interconnection between vehicles and Road Side Units (RSUs), making it possible to support massive task request processing and application service delivery. With the increasing use of applications, the demand for content grows exponentially, which not only increases the network load but also requires complex computing power and huge storage capacity; if the content is obtained from a remote data center, the low-latency and diversified application requirements may not be met. Meanwhile, the distance between the vehicle and the cloud server is relatively long and the capacity of the backhaul link is limited, which poses a great challenge for large-scale content delivery and for meeting the low-delay requirement of the Internet of vehicles. A new technique that addresses this challenge is edge caching, which stores content at the network edge to reduce delays in the content delivery process. However, the storage capacity of an edge node is limited and cannot hold all the content, so studying the caching scheme is particularly important in order to fully utilize the storage capacity of the edge nodes. In particular, content caching is considered a key way to reduce data traffic, as it enables server nodes to store part of the popular content, reducing delays and congestion in the network.
The vehicle-mounted edge calculation is taken as a novel calculation model, and by sinking calculation and storage resources to the edge end close to a vehicle user, the bandwidth pressure of a network can be greatly relieved, the response time delay of a task is effectively reduced, and the communication overhead is reduced. In a complex vehicle network environment, in order to meet the diversified service demands of a large number of users, a more effective internet-of-vehicles edge computing mechanism needs to be designed. The base station with abundant computing and buffering resources serves as an access point of the network edge, coordinates task unloading and content buffering of the network edge, and is beneficial to reduction of response delay. However, in the existing vehicle-mounted computing and unloading work, the user side is often concentrated on the vehicle, the resources of the adjacent vehicles are not utilized, and the management of computing resources is excessively focused on the edge side, so that the relationship between the computing resources and the content cache is ignored. It should be noted that, in order to handle the task offloaded by the user, the edge server needs to have a certain computing resource, and also needs to cache the corresponding content application in advance. Thus, task offloading and content caching are associated with each other, coupled with each other. Due to limited storage resources of roadside base stations, how to cooperate with task offloading and content caching in a vehicle-mounted edge system is an important problem to be solved. According to the dynamic, random and time-varying characteristics of the Internet of vehicles, a more intelligent algorithm needs to be introduced to realize effective management of network communication, calculation and buffer resources so as to cope with the defects of the traditional mathematical method.
Disclosure of Invention
Aiming at the defects existing in the prior art, the invention provides a method for unloading tasks and caching contents of the Internet of vehicles by combining blockchain and edge calculation, which comprises the following steps:
s1: constructing a vehicle networking system model;
s2: constructing a task unloading decision model, a content caching decision model and a blockchain model based on the vehicle networking system model;
s3: constructing a minimum system energy consumption and overhead optimization problem according to a task unloading decision model, a content caching decision model and a blockchain model;
s4: and solving the energy consumption and overhead optimization problem of the minimized system by adopting an A3C algorithm to obtain the task unloading and content caching scheme of the Internet of vehicles.
Preferably, the car networking system model comprises a device layer, an edge layer and a cloud layer; the equipment layer comprises a base station and a road side unit, the edge layer comprises an edge server, and the cloud layer comprises a cloud server; a plurality of road side units are arranged in the coverage area of each base station, and the road side units are used for forwarding the requests of the vehicles to the base stations; each base station is provided with an edge server, the base station is used for storing the content provided by the provider, and the edge server is used for providing computing resources for the vehicle; the cloud server is used for providing computing resources for the blockchain formed by the plurality of base stations and achieving blockchain consensus.
Preferably, the process of constructing the task offloading decision model includes: the task unloading decision of the vehicle comprises the steps that the vehicle locally performs task processing and the vehicle unloads the task to a base station for task processing; respectively calculating the energy consumption of the vehicle for locally performing task processing and the energy consumption of the vehicle for unloading the task to the base station for performing task processing, and calculating the task unloading time delay of the vehicle for unloading the task to the base station for performing task processing; calculating the total energy consumption of task unloading of all vehicles according to the task unloading decision of the vehicles, the energy consumption of the vehicles for locally carrying out task processing and the energy consumption of the vehicles for unloading the tasks to the base station for carrying out task processing; and calculating the total calculation cost of task unloading of all vehicles according to the task unloading decision of the vehicles, the task unloading delay of the vehicles for unloading the tasks to the base station for task processing and the calculation resource unit price of the edge server.
Further, the formulas for calculating the total energy consumption of task unloading and the total calculation overhead of all vehicles are respectively:
wherein E_n(t) represents the total energy consumption of task unloading of all vehicles in time slot t, E_n1(t) represents the energy consumption of the nth vehicle performing task processing locally in time slot t, x_mn(t) represents the task unloading decision of the nth vehicle in time slot t, E_n2(t) represents the energy consumption of the nth vehicle in time slot t for unloading the task to the base station for task processing, N represents the total number of vehicles, o_n(t) represents the total calculation overhead of all vehicles, ξ_I1(t) represents the computing resource unit price of the edge server, and T_1 represents the task unloading delay for the vehicle to unload the task to the base station for task processing.
Preferably, the process of constructing the content caching decision model includes: the content caching decision includes the content being cached at the base station and the content not being cached at the base station; calculating the first transmission energy consumption and the first transmission delay in the process of the vehicle receiving the content when the content has been cached at the base station; setting the content preference; calculating, according to the content preference, the second transmission energy consumption and the second transmission delay in the process of the vehicle receiving the content when the content is not cached at the base station; calculating the content caching energy consumption according to the content caching decision, the first transmission energy consumption and the second transmission energy consumption; and calculating the content caching overhead according to the content caching decision, the first transmission delay and the second transmission delay.
Further, the formulas for calculating the content caching energy consumption and the content caching overhead are respectively:
wherein E_m(t) represents the content caching energy consumption in time slot t, E_m1(t) represents the first transmission energy consumption for transmitting the mth content, x_n(t) represents the content caching decision of the nth vehicle in time slot t, E_m2(t) represents the second transmission energy consumption for transmitting the mth content, M represents the total number of contents requested by the vehicles, N represents the total number of vehicles, O_m(t) represents the content caching overhead, T_download represents the downlink delay for the content to return from the road side unit to the vehicle, ξ_I1(t) represents the computing resource unit price of the edge server, and τ represents the round trip delay of the user request between the base station and the provider.
Preferably, the process of constructing the blockchain model includes: using the base station as a block chain node, adopting a PBFT consensus mechanism to carry out block chain consensus and calculating the total calculation period of the PBFT consensus process; and calculating the consensus energy consumption, the consensus overhead and the consensus delay of the blockchain system according to the total calculation period of the PBFT consensus process.
Further, the formulas for calculating the consensus energy consumption, the consensus overhead and the consensus delay of the blockchain system are respectively:
wherein E_c(t) represents the consensus energy consumption of the blockchain system in time slot t, p_r represents the transmission rate between the base station and the provider, d(t) represents the total transaction size in the block in time slot t, r_m,c(t) represents the transmission rate between the cloud computing server and the MEC server, p_c represents the computing power of the cloud server, U(t) represents the calculation period of the PBFT consensus process, I_2(t) represents the computing resources of the cloud server in time slot t, b(t) represents the number of consensus nodes unloaded to the cloud server, q represents the computing power of the edge server, B represents the number of base stations, O_c(t) represents the consensus overhead, ξ_I1(t) represents the computing resource unit price of the edge server, I_1(t) represents the computing resources of the edge server in time slot t, T_i(t) represents the block interval time, ξ_I2(t) represents the computing resource unit price of the cloud server, T_c(t) represents the consensus delay, and T_b(t) represents the broadcast delay between nodes.
Preferably, the process of solving the problem of minimizing the system energy consumption and overhead by adopting the A3C algorithm comprises: modeling the problem of minimizing the system energy consumption and overhead as a Markov decision process and constructing a state space, an action space and a reward function; and solving the problem by adopting the A3C algorithm according to the state space, the action space and the reward function, so as to obtain the Internet of vehicles task unloading and content caching scheme;

the state space is expressed as:

S(t) = {F(t), I_1(t), I_2(t), ξ(t), φ_m(t), x, y, num}

wherein S(t) represents the state space, F(t) represents the energy of the on-vehicle OBUs after executing tasks, I_1(t) represents the computing resources of the edge server, I_2(t) represents the computing resources of the cloud server, ξ(t) represents the computing resource unit price of the edge server or the cloud server, φ_m(t) represents the content popularity, x represents the abscissa of the vehicle location, y represents the ordinate of the vehicle location, and num represents the number of edge servers serving the vehicle;

the action space is expressed as:

A(t) = {x_m(t), x_n(t), b(t), T_i(t), s(t), l(t)}

wherein A(t) represents the action space, x_m(t) represents the task unloading decision, x_n(t) represents the content caching decision, b(t) represents the number of consensus nodes unloaded to the cloud server, T_i(t) represents the block interval time, s(t) represents the maximum block size, and l(t) represents the distance between the edge server and the vehicle;

the reward function is expressed as:

s.t. C1: F(t) ≥ ρ
C2: T_c(t) ≤ ε × T_i(t)
C3: d(t) ≤ s(t)

wherein R(t) represents the reward of time slot t, w_1 represents the energy consumption weight, w_2 represents the overhead weight, ψ(t) represents the throughput of transactions in time slot t, η represents the weight coefficient of the consumption cost in the system, W(t) represents the consumption cost, V represents the size of the vehicle set, ρ represents the minimum energy of the on-vehicle OBUs, T_c(t) represents the consensus delay of the blockchain system, ε represents the time limit for completing a block, and d(t) represents the total transaction size in a block in time slot t.
The beneficial effects of the invention are as follows: aiming at the continuity and dynamics of the available resources of the mobile vehicles and the computing servers in the vehicle-mounted network, the invention models the optimization problem as a Markov Decision Process (MDP). For the large scale and dynamic characteristics of the system, the asynchronous advantage actor-critic (A3C) method is introduced to deal with the optimization problem. Simulation results show that, compared with other comparison schemes, the method has remarkable advantages in reducing the system time delay, the system energy consumption and the calculation cost, and improving the service quality of users, while having lower complexity.
Drawings
FIG. 1 is a schematic diagram of a system model of the Internet of vehicles in the present invention;
FIG. 2 is a schematic diagram of an MDP decision model of a system model of the Internet of vehicles in the present invention;
FIG. 3 is a diagram of a convergence comparison of the present invention with a DDPG-based algorithm;
FIG. 4 is a total reward plot of the present invention versus the comparison schemes;
FIG. 5 is a graph of the number of vehicles versus total energy consumption for the present invention and the comparison schemes;
FIG. 6 is a graph of the number of vehicles versus total computational overhead for the present invention and the comparison schemes;
FIG. 7 is a graph of block transaction number versus throughput for the present invention and comparison.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The invention provides a method for unloading tasks and caching contents of a vehicle networking with cooperative blockchain and edge calculation, which comprises the following steps:
s1: and constructing a car networking system model.
As shown in fig. 1, the internet of vehicles system model comprises a device layer, an edge layer and a cloud layer; specific:
device layer: comprising a Base Station (BS) and a Road Side Unit (RSU), which sends calculation requests of different types of mobile vehicles to the base station via wireless communication. And a plurality of RSUs are arranged in the coverage area of each base station, and the RSUs forward the calculation request of the vehicle to the BSs through broadband wireless communication so as to improve the service experience of the user.
Edge layer: including edge servers (MEC servers), each BS is equipped with an edge server, distributed around different suppliers. All the base stations are connected through wired links. One is that the MEC server has the ability to handle the computational tasks undertaken by the vehicle. And secondly, each BS may be used to cache content provided by the provider.
Cloud layer: a cloud server is included for providing more computing resources required by the blockchain formed by the BSs for achieving consensus and recording billing data during the transaction.
At each time slot t, the present invention denotes the set of vehicles with calculation requests as V = {V_1, V_2, ..., V_N}, where N is the number of such vehicles; the set of RSUs as R = {R_1, R_2, ..., R_r}; and the set of BSs analogously.
S2: and constructing a task unloading decision model, a content caching decision model and a blockchain model based on the vehicle networking system model.
Constructing a task unloading decision model:
both the request process of the vehicle and the consensus process of the blockchain system create a large number of heavy, complex computing tasks that can be performed using the MEC server. In addition, the MEC server allocates different computing resources for the computing tasks, and the corresponding computing overhead is also different. The correspondence is calculation overhead=calculation time×unit price corresponding to the calculation resource.
Since the computing resources of the MEC server are typically required to process different computing tasks simultaneously, it is difficult to accurately know the computing resources allocated to each computing task in each time period. Thus, the computing resources of the MEC server are represented as a random variable I_1, which is divided into Q resource blocks, i.e., X = {x_1, x_2, ..., x_Q}; I_1(t) represents the computing resources of the MEC servers in time slot t, I_1(t) = {i_1(t), i_2(t), ..., i_B(t)}, where i_B(t) represents the computing resources of the Bth MEC server. In the system of the present invention, it is assumed that the allocation of computing resources to each computing task is constant over time slot t, and the transition probability of I_1(t) from its current state to the next state is:
wherein I_1(t+1) denotes the computing resources of the MEC server in time slot t+1, x denotes the resource block size of the MEC server in time slot t (the corresponding size in time slot t+1 is denoted analogously), and X denotes the resource block set.
The MEC server computing resource transition probability matrix can be expressed as:
The matrix describes the process by which I_1(t), over a series of specific time intervals, transitions from its state in time slot t to its state in time slot t+1, and can be used to simulate the dynamic process of I_1(t) transitioning from one state to another.
The computing task generated by each vehicle can either be executed locally or be unloaded to the BS for computation, so as to improve the computing efficiency and save energy; thus, the task unloading decision of the vehicle of the present invention includes the vehicle performing task processing locally and the vehicle unloading the task to the base station for task processing.
When the vehicle selects to perform task processing locally, the energy consumption in the t time slot is as follows:
E_n1(t) = k(I_l)^2 u_n(t)
wherein E_n1(t) represents the energy consumption of vehicle n for performing task processing locally in time slot t; k represents an energy consumption coefficient, which can take the value 10^-27 according to actual measurement; I_l represents the computing power of the vehicle; and u_n(t) represents the CPU cycles required to complete the computing task in time slot t.
When the vehicle chooses to offload the task, the computing task is forwarded by the RSU to the BS equipped with the MEC server. Since the transmission rate between the RSU and the BS is very high, the resulting transmission energy consumption between them is relatively small and does not affect the computation offloading decision, so the present invention does not take it into account. Therefore, the invention only considers the transmission energy consumption generated by the vehicle sending the computing task to the RSU through the wireless uplink channel.
The available uplink transmission rate in time slot t is:
wherein w represents the channel bandwidth between the vehicle and the RSU, p_n is the transmission power of the vehicle, Los_i is the channel gain of the ith vehicle, and the remaining symbol is the signal-to-noise ratio.
The energy consumption E_n2(t) for the vehicle to unload the task to the base station for task processing is expressed as:
wherein D_n(t) represents the task data size, and q represents the computing power of the MEC server.
The computation delay t_n(t) of the MEC server is:
The time delay T_upload for the vehicle to offload the task to the base station is expressed as:
Thus, the task unloading delay T_1 for the vehicle to unload the task to the base station for task processing is expressed as:
T_1 = T_upload + t_n(t)
calculating the total energy consumption of task unloading of all vehicles according to the task unloading decision of the vehicles, the energy consumption of the vehicles for locally carrying out task processing and the energy consumption of the vehicles for unloading the tasks to the base station for carrying out task processing, wherein the total energy consumption is expressed as:
wherein E_n(t) represents the total energy consumption of task unloading of all vehicles in time slot t; E_n1(t) represents the energy consumption of the nth vehicle performing task processing locally in time slot t; x_mn(t) ∈ {0,1} represents the task unloading decision of the nth vehicle in time slot t, where x_mn(t) = 0 denotes that the computing task is processed locally and x_mn(t) = 1 denotes that the computing task is offloaded to the MEC server; and E_n2(t) represents the energy consumption of the vehicle in time slot t for unloading the task to the base station for task processing.
Calculating the total cost of task offloading calculation of all vehicles according to the task offloading decision of the vehicles, the task offloading delay of the vehicles offloading the tasks to the base station for task processing and the calculation resource unit price of the edge server, wherein the total cost is expressed as:
wherein ξ_I1(t) represents the computing resource unit price of the edge server, o_n(t) represents the total calculation overhead of all vehicles, and T_1 represents the task unloading delay for the vehicle to unload the task to the base station for task processing.
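To make the task-unloading bookkeeping concrete, the sketch below tallies the per-slot energy and overhead from the quantities defined above (E_n1(t), E_n2(t), T_1 and the unit price ξ_I1(t)). The decision-weighted aggregation over vehicles and the exact forms of the upload and computation delays are assumptions consistent with the definitions; all numeric values and helper names are illustrative.

```python
K_ENERGY = 1e-27  # energy consumption coefficient k from the description

def local_energy(cpu_cycles, vehicle_cpu_freq):
    # E_n1(t) = k * (I_l)^2 * u_n(t), as given in the description
    return K_ENERGY * vehicle_cpu_freq ** 2 * cpu_cycles

def offload_energy_and_delay(data_bits, uplink_rate, tx_power, cpu_cycles, mec_cpu_freq):
    # Assumed forms: upload delay D_n/r, transmission energy p_n * T_upload,
    # MEC computation delay u_n/q; T_1 = T_upload + t_n(t).
    t_upload = data_bits / uplink_rate
    t_compute = cpu_cycles / mec_cpu_freq
    return tx_power * t_upload, t_upload + t_compute

def offloading_totals(vehicles, decisions, price_edge):
    """Assumed decision-weighted aggregation over all vehicles in one slot."""
    total_energy, total_overhead = 0.0, 0.0
    for veh, x_mn in zip(vehicles, decisions):       # x_mn = 0 local, 1 offload
        e_local = local_energy(veh["cycles"], veh["f_local"])
        e_off, t1 = offload_energy_and_delay(veh["bits"], veh["rate"],
                                             veh["p_tx"], veh["cycles"], veh["f_mec"])
        total_energy += (1 - x_mn) * e_local + x_mn * e_off
        total_overhead += x_mn * price_edge * t1     # overhead = unit price * offload delay
    return total_energy, total_overhead

vehicles = [{"cycles": 5e8, "f_local": 1e9, "bits": 2e6, "rate": 1e7, "p_tx": 0.5, "f_mec": 5e9}]
print(offloading_totals(vehicles, decisions=[1], price_edge=0.02))
```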
Constructing a content caching decision model:
the buffering capacity of the BSs around the distributed vehicle network is limited, which is able to fully store K popular content provided by the provider. The provider provides the BSs with what the vehicle requires, and RSUs within range of the BSs then forward them to the vehicle. Assuming that the content size of each BSs buffer is S bytes, the total capacity of the buffer is k·s bits. The maximum number of BSs cacheable content is always K, and when the provider sends new content to BSs for caching, the old content is deleted. The provider determines whether to cache the content in the BSs based on the task content requested from the vehicle. The content delivery request set for all vehicles is expressed asM represents the total number of requested content.
The content caching decision includes that the content is cached in the base station and that the content is not cached in the base station; the split case is described as follows:
When the content has been cached at the base station, i.e., the content m has been uploaded by the provider to BS b, the content m of size S bits will be transmitted from BS b to the vehicle v. The energy consumption from BS b to RSU r is relatively small and constant, so the invention ignores it and only considers the transmission of the content m from RSU r to the vehicle v through the wireless downlink channel. At time slot t, the available downlink transmission rate r_m,v(t) is:
wherein w represents the channel bandwidth between the vehicle and the RSU, p_m represents the transmission power of the RSU, Los_i represents the channel gain, and the remaining symbol is the signal-to-noise ratio.
The first transmission energy consumption in the process of receiving the content by the vehicle is expressed as:
the first transmission delay in the process of the vehicle receiving the content, i.e. the downlink delay from the RSU to the vehicle, is expressed as:
When the content is not cached at the base station, i.e., the content m is not stored in BS b, BS b will acquire the requested content from the provider and then transmit the content m to the vehicle v. Taking τ as the round trip time for the user request to be sent from BS b to the provider and for the content to be returned to BS b, the second transmission delay in the process of the vehicle receiving the content is expressed as:
T_2 = T_download
The network caching performance and the user's satisfaction with data requests can be improved through the content preference. A content, once cached or stored in the base station, can serve other nearby requests, thereby reducing the cost of backhaul link transmission. The preference of content m at the provider is denoted herein as φ_m ∈ [0,1]. A relation is established between the content preference and the energy consumption: content with a higher preference is more likely to be requested by users. The second transmission energy consumption E_m2(t) in the process of the vehicle receiving the content is expressed as:
wherein p_r represents the transmission power between the base station and the provider.
Calculating content caching energy consumption according to the content caching decision, the first transmission energy consumption and the second transmission energy consumption, wherein the content caching energy consumption is expressed as follows:
wherein E_m(t) represents the content caching energy consumption in time slot t; E_m1(t) represents the first transmission energy consumption for transmitting the mth content; E_m2(t) represents the second transmission energy consumption for transmitting the mth content; M represents the total number of contents requested by the vehicles; N represents the total number of vehicles; and x_n(t) ∈ {0,1} represents the content caching decision of the nth vehicle in time slot t, where x_n(t) = 0 indicates that the content has been cached at the base station in advance and x_n(t) = 1 indicates that the content is not cached at the base station.
Calculating content buffering overhead according to the content buffering decision, the first transmission delay and the second transmission delay, wherein the content buffering overhead is expressed as follows:
wherein O_m(t) represents the content caching overhead, T_download represents the downlink delay for the content to return from the road side unit to the vehicle, and τ represents the round trip delay of the user request between the base station and the provider.
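A companion sketch for the content-caching side: the cache-hit (x_n(t) = 0) and cache-miss (x_n(t) = 1) cases described above are combined into per-slot energy and overhead totals. The exact forms of the first and second transmission energies and of the overhead are assumptions for illustration, since the displayed formulas are not reproduced here.

```python
def caching_totals(contents, cache_decisions, price_edge, tau):
    """contents: per requested content, assumed downlink rate, size, RSU/backhaul powers."""
    energy, overhead = 0.0, 0.0
    for c, x_n in zip(contents, cache_decisions):    # x_n = 0 cached at BS, 1 not cached
        t_download = c["size_bits"] / c["rate_down"]           # downlink delay to the vehicle
        e_hit = c["p_rsu"] * t_download                        # first transmission energy (assumed form)
        e_miss = e_hit + c["p_backhaul"] * tau * c["phi"]      # second transmission energy,
                                                               # scaled by preference phi_m (assumed form)
        energy += (1 - x_n) * e_hit + x_n * e_miss
        delay = t_download + x_n * tau                         # a miss adds the provider round trip
        overhead += price_edge * delay                         # overhead = unit price * delay (assumed)
    return energy, overhead

contents = [{"size_bits": 8e6, "rate_down": 2e7, "p_rsu": 1.0, "p_backhaul": 2.0, "phi": 0.7}]
print(caching_totals(contents, cache_decisions=[1], price_edge=0.02, tau=0.05))
```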
Building a blockchain model:
the base station is used as a block chain node, a PBFT consensus mechanism is adopted to carry out block chain consensus and calculate the total calculation period of the PBFT consensus process, and the process is as follows:
(1) Request
During time period t, the providers upload content billing bills to all BSs, and each provider broadcasts the collected transaction information to the entire network. After the transaction lists in the transaction pools of all consensus nodes are completed, the blockchain system randomly assigns the master node, which packs the transaction data into a new block in order of reception time according to the packing timeout (block interval time) T_i(t) and the maximum block size s(t). The signatures and MACs of the transaction data then need to be verified by the master node. The calculation period of this stage can be expressed as:
where d(t) is the total transaction size within the block in time slot t, σ(t) is the average size of a transaction, and α and β are the CPU cycles required to generate or verify a signature and a Message Authentication Code (MAC), respectively.
(2) Pre-prepare
After all transactions are verified, the master node will discard the erroneous transactions collected by the BSs, generate an independent signature and B−1 MACs, and transmit them to each replica node along with the new block. When a replica node receives the new block, the signature and MACs of the block, and each correct transaction, are verified in turn. It is assumed that the correct transactions remaining after verification by the master node in the request phase constitute a fraction g of all transactions. The computation cycles of the master node and the replica nodes at this stage are respectively expressed as:
u_p2(t) = α + (B−1)β
(3) Prepare
If the new block and transactions have been validated, each replica node will generate a signature and B−1 MACs and send them to all other nodes. Thereafter, each node needs to receive and verify at least 2f signatures and MACs, where f is the number of faulty nodes and f = (B−1)/3. Thus, for the master node and the replica nodes, the computation cycles are respectively expressed as:
u_p3(t) = 2f(α+β)
u_r3(t) = α + (B−1)β + 2f(α+β)
(4) Commit
If the validated nodes receive 2f correct messages, they will send a signature and B−1 MACs to the other nodes. At the same time, each node needs to verify at least 2f signatures and MACs. Thus, for both the master node and the replica nodes, the computation period can be expressed as:
u_4(t) = α + (B−1)β + 2f(α+β)
(5) Reply
After the validated nodes collect 2f valid commit messages, they send reply messages, each including a signature and MACs, to the master node, which needs to verify at least 2f signatures and MACs. Thus, for the master node and the replica nodes, the computation cycles are expressed as:
wherein U(t) represents the total computation period of the PBFT consensus process in time slot t, f represents an intermediate parameter, g represents the proportion of the correct transactions remaining after verification by the master node in the request phase to all transactions, and B represents the number of base stations.
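To make the per-phase cycle counts concrete, the sketch below sums them for one consensus round. Only the counts explicitly given above (pre-prepare master node u_p2, prepare replica node u_r3, commit u_4) are reproduced; the request-phase and reply-phase counts, and the omission of the valid-transaction fraction g, are simplifying assumptions.

```python
def pbft_cycles_per_round(alpha, beta, B, d_bits, sigma_bits):
    """Rough cycle count for one PBFT consensus round at a single node.

    alpha, beta: CPU cycles to generate/verify one signature and one MAC.
    B: number of consensus nodes (base stations); f = (B - 1) // 3 tolerated faulty nodes.
    d_bits, sigma_bits: total transaction size in the block and average transaction size.
    """
    f = (B - 1) // 3
    n_tx = d_bits / sigma_bits
    request = n_tx * (alpha + beta)                           # ASSUMED: verify each transaction once
    pre_prepare = alpha + (B - 1) * beta                      # u_p2(t), master node
    prepare = alpha + (B - 1) * beta + 2 * f * (alpha + beta) # u_r3(t), replica node
    commit = alpha + (B - 1) * beta + 2 * f * (alpha + beta)  # u_4(t)
    reply = alpha + (B - 1) * beta + 2 * f * (alpha + beta)   # ASSUMED: same structure as commit
    return request + pre_prepare + prepare + commit + reply

print(pbft_cycles_per_round(alpha=2e5, beta=1e5, B=7, d_bits=4e6, sigma_bits=2e3))
```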
During the consensus process, all nodes need to verify a large number of signatures and MACs, which creates complex computing tasks that require more computing resources to complete. Thus, a node may select the MEC server or the cloud to perform the computing task according to its own computing power requirements. When the cloud is selected to process the computing task, because the cloud processes different computing tasks at the same time, it is impossible to determine the computing resources of the server in each time period. Thus I_2 is used to represent the computing resources of the cloud computing server, and a Markov chain is used to model the state changes of I_2. I_2 is divided into M resource blocks, i.e., Y = {y_1, y_2, ..., y_M}, and I_2(t) represents the computing resources of the cloud server in time slot t. In the system of the present invention, assuming that the allocation of computing resources to each computing task is constant during time slot t, the transition probability can be expressed as:
wherein y denotes the resource block size of the cloud server in time slot t (the corresponding size in time slot t+1 is denoted analogously), and Y denotes the resource block set.
The transition probability matrix of cloud server computing resources can be expressed as:
calculating the consensus energy consumption, the consensus overhead and the consensus delay of the blockchain system according to the total calculation period of the PBFT consensus process, and specifically:
when B (t) epsilon {0,1, …, the node B } selects the cloud server to execute the computing task, and the other remaining nodes select the MEC server to execute the computing task, the common energy consumption of the blockchain system is expressed as:
the common overhead of a blockchain system is expressed as:
the consensus latency of a blockchain system is expressed as:
wherein E is c (t) represents the consensus energy consumption of the t-slot blockchain system, r m,c (t) represents a transmission rate between the cloud computing server and the MEC server, p c Representing the computing power of the cloud server, b (t) representing the number of nodes selecting the cloud server to perform the computing task, O c (t) represents a consensus overhead,representing computing resource unit price of edge server, T i (t) represents a block interval time, +.>Representing computing resource unit price of cloud server, T c (T) represents a consensus delay, T b And (t) represents a broadcast delay between nodes.
The throughput of the transaction is:
Assuming that γ is the radius within which the MEC can provide the best quality of service, l(t) is the distance between the MEC server providing the service and the vehicle user, and the vehicle travels at speed v, the time during which the best quality of service can be provided satisfies:
T′ ≤ T_max
where γ is the radius within which the MEC can provide the best quality of service.
S3: and constructing a minimum system energy consumption and overhead optimization problem according to the task unloading decision model, the content caching decision model and the blockchain model.
1) Energy consumption (Energy Consumption, EC). The energy consumption of the system comprises three aspects of task unloading, content caching and node consensus process. The definition is as follows:
E(t) = E_n(t) + E_m(t) + E_c(t)
2) Computational overhead (Computation Overheads, CO). The computing overhead of the system also comprises three aspects of task unloading, content caching and consensus node process. Can be expressed as follows:
O(t) = O_n(t) + O_m(t) + O_c(t)
3) System Delay (TD). The time delay of the system includes the time delay of task unloading and content caching (the invention only considers the time delay problem in the transmission process). The time delay at this time is defined as follows:
T = T_1 + T_2
wherein T_1 is the time delay of the task unloading process and T_2 is the time delay of content caching.
The goal of the present invention is to minimize the energy consumption and overhead by taking optimal unloading and caching decisions in each time period. With A = (x_m(t), x_n(t), b(t), T_i(t), s(t), l(t)) denoting the decisions that determine the energy consumption and overhead of the system, the problem of minimizing the system energy consumption and overhead can be expressed as follows:
s.t. x_m(t) ∈ {0,1}
x_n(t) ∈ {0,1}
b(t) ∈ {0,1,...,B}
T_i(t) ∈ {0.2,0.5,...,I}
s(t) ∈ {1,2,...,s}
Preferably, the problem of minimizing the system energy consumption and overhead is:
min {E(t), O(t), T}
s.t. E(t) = E_n(t) + E_m(t) + E_c(t)
O(t) = O_n(t) + O_m(t) + O_c(t)
T = T_1 + T_2
s4: and solving the energy consumption and overhead optimization problem of the minimized system by adopting an A3C algorithm to obtain the task unloading and content caching scheme of the Internet of vehicles.
The vehicle-mounted edge system is a dynamically time-varying network, which makes it difficult to determine task unloading and content caching strategies using simple rules. Moreover, frequent topology changes also require dynamic adjustment of the task data. Facing these dynamic and complex problems, the present invention uses the A3C method in reinforcement learning to address the optimization problem. A3C stands for asynchronous advantage actor-critic. Actors and critics are placed in multiple threads and trained in parallel, which effectively utilizes computer resources and improves the training effect. Briefly, each core of the server runs a thread, which acts as a parallel world; the same program, running in these parallel worlds at the same time, can increase the running speed several times. The running results of each thread are fed back to the main network, and at the same time each thread obtains the latest parameter updates from the main network. Combining multiple threads in this way further reduces the correlation between experiences, which facilitates the convergence of the program.
Reinforcement learning relies on four core elements S, A, P and R, where S is the state set, A is the action set, P is the state transition probability, and R is the reward function. The joint optimization of task unloading and content caching of the vehicle-mounted edge computing platform is modeled as a Markov Decision Process (MDP) problem. Although the optimal state values can be calculated using the MDP directly, this method is not suitable for the large-scale Internet of vehicles due to the large amount of calculation. Therefore, the invention adopts the A3C algorithm to find the optimal action selection strategy of the finite MDP.
As shown in FIG. 2, based on the idea of reinforcement learning, the invention adopts distributed learning to model the Internet of vehicles problem as an MDP, with state space S(t) ∈ S, S(t) = {F(t), I_1(t), I_2(t), ξ(t), φ_m(t), x, y, num}.
(1) F(t) = {f_1(t), f_2(t), ..., f_v(t)}: the energy of the on-vehicle OBUs of the vehicles after performing or offloading the computing tasks.
(2) I_1(t) = {i_1(t), i_2(t), ..., i_B(t)}: the computing resources of the MEC servers; the computing resources allocated to each computing task are determined according to the transition probability matrix.
(3) I_2(t): the computing resources of the cloud server; the computing resources allocated to each computing task are determined according to its transition probability matrix.
(4) ξ(t) = {ξ_i1(t), ξ_i2(t), ..., ξ_iB(t), ξ_I2(t)}: the unit prices corresponding to the computing resources of the B MEC servers and the one cloud server. The unit price is proportional to the value of the computing resource.
(5) φ_m(t): the popularity of the content; the probability that the provider caches a content is proportional to its popularity.
(6) x, y: the location coordinates of the vehicle user; num: the number of MEC servers serving the vehicle nearby.
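As an illustration of how the state S(t) listed above could be flattened into a numeric vector for the A3C agent, a minimal sketch follows; the field names, dimensions, and example values are assumptions.

```python
import numpy as np

def build_state(obu_energy, mec_resources, cloud_resource, unit_prices,
                content_popularity, veh_x, veh_y, num_serving_mec):
    """Concatenate the components of S(t) = {F(t), I_1(t), I_2(t), xi(t), phi_m(t), x, y, num}."""
    return np.concatenate([
        np.asarray(obu_energy, dtype=float),          # F(t): OBU energy per vehicle
        np.asarray(mec_resources, dtype=float),       # I_1(t): per-MEC-server resources
        [float(cloud_resource)],                      # I_2(t)
        np.asarray(unit_prices, dtype=float),         # xi(t): edge and cloud unit prices
        np.asarray(content_popularity, dtype=float),  # phi_m(t)
        [float(veh_x), float(veh_y), float(num_serving_mec)],
    ])

s_t = build_state([0.8, 0.6], [4e9, 8e9], 1e10, [0.02, 0.01, 0.05], [0.7, 0.2, 0.1], 120.0, 35.0, 2)
print(s_t.shape)
```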
Since each variable is continuous, the probability of being in any particular state is 0; the probability of transitioning to the next state S(t+1) after the agent performs an action a(t) ∈ A in state S(t) can be expressed as:
where f is a state transition probability density function.
Action space:
the packets transmitted in the network are treated as one agent, and the knowledge learned by the agent is stored in the corresponding drone. The method mainly focuses on the unloading decision of the vehicle computing task, the content caching decision, the number of consensus nodes unloaded to a cloud server, the block interval time, the maximum block size and the distance between a service-providing MEC server and a vehicle user, and the energy consumption and the cost of the system are reflected, and the action space is expressed as follows:
A(t)={x m (t),x n (t),b(t),T i (t),s(t),l(t)}
Reward function:
s.t. C1: F(t) ≥ ρ
C2: T_c(t) ≤ ε × T_i(t)
C3: d(t) ≤ s(t)
W(t) = w_1 E(t) + w_2 O(t): the weighted consumption cost, representing the weighted value of the energy consumption and the calculation overhead in time slot t. w_1, w_2 ∈ [0,1] with w_1 + w_2 = 1 are the weighting factors of E(t) and O(t), respectively; η represents the weighting coefficient of the consumption cost in the system; and V represents the size of the vehicle set.
ρ in constraint C1 represents the minimum energy of the on-vehicle OBUs. C2 represents the time limit for completing a block, where ε represents the product coefficient, ε > 1. C3 represents the limit on the total transaction size in one consensus process.
The penalized reward can be expressed as:
where λ represents a penalty coefficient, P, Z represent a content set that is frequently requested by a user over a period of time, and the content set is cached based on content popularity.
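A sketch of how the constrained reward could be evaluated in code is given below. The unpenalized reward form (throughput minus the weighted consumption cost ηW(t)) and the fixed penalty used when constraints C1-C3 are violated are assumptions consistent with the definitions above, not the exact expression of the invention.

```python
def reward(throughput, energy, overhead, w1, w2, eta,
           obu_energy_min, consensus_delay, block_interval, eps,
           block_tx_size, max_block_size, rho, penalty=100.0):
    """Assumed reward: throughput minus weighted cost, penalized when C1-C3 are violated."""
    w_t = w1 * energy + w2 * overhead                # W(t) = w1*E(t) + w2*O(t)
    r_t = throughput - eta * w_t                     # ASSUMED unpenalized reward form
    c1 = obu_energy_min >= rho                       # C1: F(t) >= rho
    c2 = consensus_delay <= eps * block_interval     # C2: T_c(t) <= eps * T_i(t)
    c3 = block_tx_size <= max_block_size             # C3: d(t) <= s(t)
    if not (c1 and c2 and c3):
        r_t -= penalty                               # lambda-style penalty for infeasible actions
    return r_t

print(reward(throughput=50.0, energy=3.0, overhead=1.5, w1=0.6, w2=0.4, eta=0.1,
             obu_energy_min=0.7, consensus_delay=0.8, block_interval=0.5, eps=2.0,
             block_tx_size=900, max_block_size=1024, rho=0.2))
```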
The A3C algorithm is based on neural networks and combines a value-based approach with a policy-based approach. It can handle not only discrete action spaces but also continuous ones. The algorithm approximates the policy function and the value function respectively; the policy function is:
π_θ(S_t, a_t) ≈ π(a_t|S_t; θ)
and the value functions are:
V_π(S_t) ≈ V(S_t; θ_V)
Q_π(S_t, a_t) ≈ Q(S_t, a_t; θ_V)
wherein θ represents the actor (action) network parameters and θ_V represents the critic (evaluation) network parameters.
Based on the policy function π(a_t|S_t; θ), the agent generates an action a_t in the current state S_t, transitions to the next state S_t+1 according to the transition probability of its interaction with the environment, and obtains an instant reward r_t. The cumulative return at step t is:
wherein γ ∈ (0,1] is the discount (attenuation) factor, k represents the kth state, r_t+i is the instant reward, and the number of state transitions is upper-bounded by t_max.
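For the cumulative return, a standard n-step discounted computation of the kind used in A3C is sketched below; bootstrapping from the critic's value of the state after the last step is an assumption about the implementation.

```python
def n_step_returns(rewards, bootstrap_value, gamma=0.99):
    """Discounted returns R_t = r_t + gamma*r_{t+1} + ... computed backwards over a rollout,
    bootstrapped with the critic's estimate of the state after the last step."""
    returns = []
    R = bootstrap_value
    for r in reversed(rewards):
        R = r + gamma * R
        returns.append(R)
    return list(reversed(returns))

print(n_step_returns([1.0, 0.5, 2.0], bootstrap_value=0.8))
```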
In order to significantly reduce the variance of the gradient estimates, the present invention employs advantage estimation, with the advantage function estimated as:
A(S_t, a_t) = Q(S_t, a_t) − V(S_t)
A(S_t, a_t; θ, θ_V) = Q(S_t, a_t; θ_V) − V(S_t; θ_V) = R_t − V(S_t; θ_V)
wherein R_t is the actual return and V(S_t) is the estimated state value, so the advantage function can evaluate action a_t: when the action taken is better than average, the advantage function A(S_t, a_t) is positive; when the action taken is worse than average, A(S_t, a_t) is negative.
From the advantage function, the policy loss function is defined as:
L_π(θ) = log π(a_t|S_t; θ) A(S_t, a_t; θ, θ_V) + β H(π(S_t; θ))
wherein β represents a hyper-parameter controlling the strength of the entropy regularization term, and H(π(S_t; θ)) is the entropy of the policy π, which prevents premature convergence to sub-optimal policies.
The loss function of the value function is:
L_V(θ_V) = (R_t − V(S_t; θ_V))^2
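A compact sketch of the advantage and the two losses in PyTorch-style code follows; the tensor shapes and the use of a categorical action distribution are assumptions, and only the loss structure follows the formulas above.

```python
import torch

def a3c_losses(logits, values, actions, returns, beta=0.01):
    """Policy loss with entropy regularization and squared value loss.

    logits:  (T, num_actions) actor outputs for a rollout
    values:  (T,) critic estimates V(S_t; theta_V)
    actions: (T,) actions taken
    returns: (T,) n-step returns R_t
    """
    dist = torch.distributions.Categorical(logits=logits)
    log_probs = dist.log_prob(actions)
    advantages = returns - values                       # A(S_t, a_t) = R_t - V(S_t)
    policy_loss = -(log_probs * advantages.detach() + beta * dist.entropy()).mean()
    value_loss = advantages.pow(2).mean()               # L_V = (R_t - V(S_t))^2
    return policy_loss, value_loss

logits = torch.randn(5, 4)
values = torch.randn(5)
actions = torch.randint(0, 4, (5,))
returns = torch.randn(5)
print(a3c_losses(logits, values, actions, returns))
```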
The update formulas of the actor and the critic are obtained by taking the derivatives of the respective loss functions.
The two loss functions are minimized, and the parameters of the actor and the critic are updated according to the accumulated gradients. The gradient estimate under RMSProp is:
g = α·g + (1 − α)Δθ^2
where α is the momentum, g represents the gradient estimate, and Δθ is the accumulated gradient of the loss function; the network parameters are updated according to the gradient estimate:
where η is the learning rate and e is a small positive number for avoiding errors when the denominator is 0.
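The RMSProp-style accumulator and parameter update can be written as below; the update rule θ ← θ − η·Δθ/√(g+e) is the standard RMSProp form assumed here, since the update expression itself is not reproduced in the text.

```python
import numpy as np

def rmsprop_update(theta, grad, g, lr=7e-4, alpha=0.99, eps=1e-8):
    """g = alpha*g + (1 - alpha)*grad^2, then theta <- theta - lr*grad/sqrt(g + eps)."""
    g = alpha * g + (1 - alpha) * grad ** 2
    theta = theta - lr * grad / np.sqrt(g + eps)
    return theta, g

theta = np.zeros(3)
g = np.zeros(3)
theta, g = rmsprop_update(theta, np.array([0.1, -0.2, 0.05]), g)
print(theta, g)
```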
By solving the problem of minimizing the system energy consumption and overhead with the A3C algorithm, the optimal Internet of vehicles task unloading and content caching scheme is obtained; jointly optimizing each performance index with the optimal unloading decision and content caching decision can effectively reduce the energy consumption and calculation cost of the system.
The invention was evaluated:
Fig. 3 shows the convergence of the DDPG-based scheme and the proposed scheme. It can be seen that the proposed scheme achieves a higher overall return and converges quickly and smoothly. Since the experience replay mechanism employed by DDPG requires the agent to learn with an off-policy method, which only updates with data generated by the old policy, Q values may be overestimated and the optimal action may not be found. In contrast, the A3C-based scheme proposed by the present invention uses the multi-threading capability of the CPU to execute multiple agents in parallel and asynchronously instead of using an experience replay mechanism, so the optimal policy can be found quickly. Therefore, the invention has better performance.
Fig. 4 shows the long term benefits of the different schemes. As can be seen from the figure, the long-term benefit decreases with increasing time (the long-term benefit of the invention is defined as unloading cost-buffering cost, treated as negative), while the invention maintains a higher overall benefit.
Figures 5 and 6 show the effect of the number of vehicles on the total energy consumption for different scenarios. As the number of vehicles increases, so does the total energy consumption. Furthermore, the invention always achieves lower energy consumption. This is because the optimal strategy for A3C training can dynamically adjust the offloading decisions and the caching decisions. The unloading decision of the calculation task born by the vehicle can improve the calculation capability of the moving vehicle and reduce the calculation energy consumption. Meanwhile, the caching decision can enable the content to be closer to the vehicle, and energy consumption of the vehicle transmission by the provider is reduced.
Figure 7 shows the effect of the average transaction number size of a tile on the throughput of different schemes. As can be seen from the figure, the throughput gradually decreases as the size of the transaction number increases. Because the maximum block size is fixed, as the transaction number size becomes larger, fewer transactions can be accommodated by the generated block. In addition, by selecting the appropriate block spacing and block size, the present invention achieves the highest transaction throughput.
While the foregoing describes embodiments, aspects and advantages of the present invention, it will be understood that the foregoing embodiments are merely exemplary of the invention, and that any changes, substitutions, alterations, etc. made without departing from the spirit and principles of the invention are intended to fall within the scope of the invention.

Claims (9)

1. The method for unloading the internet of vehicles tasks and caching the content by combining the blockchain with the edge calculation is characterized by comprising the following steps of:
s1: constructing a vehicle networking system model;
s2: constructing a task unloading decision model, a content caching decision model and a blockchain model based on the vehicle networking system model;
s3: constructing a minimum system energy consumption and overhead optimization problem according to a task unloading decision model, a content caching decision model and a blockchain model;
s4: and solving the energy consumption and overhead optimization problem of the minimized system by adopting an A3C algorithm to obtain the task unloading and content caching scheme of the Internet of vehicles.
2. The method for unloading and caching content of a vehicle networking task with coordinated blockchain and edge computation according to claim 1, wherein the vehicle networking system model comprises a device layer, an edge layer and a cloud layer; the equipment layer comprises a base station and a road side unit, the edge layer comprises an edge server, and the cloud layer comprises a cloud server; a plurality of road side units are arranged in the coverage area of each base station, and the road side units are used for forwarding the requests of the vehicles to the base stations; each base station is provided with an edge server, the base station is used for storing the content provided by the provider, and the edge server is used for providing computing resources for the vehicle; the cloud server is used for providing computing resources for the blockchain formed by the plurality of base stations and achieving blockchain consensus.
3. The method for task offloading and content caching of a vehicle networking with co-operation of blockchain and edge computing of claim 1, wherein the process of constructing a task offloading decision model comprises: the task unloading decision of the vehicle comprises the steps that the vehicle locally performs task processing and the vehicle unloads the task to a base station for task processing; respectively calculating the energy consumption of the vehicle for locally performing task processing and the energy consumption of the vehicle for unloading the task to the base station for performing task processing, and calculating the task unloading time delay of the vehicle for unloading the task to the base station for performing task processing; calculating the total energy consumption of task unloading of all vehicles according to the task unloading decision of the vehicles, the energy consumption of the vehicles for locally carrying out task processing and the energy consumption of the vehicles for unloading the tasks to the base station for carrying out task processing; and calculating the total calculation cost of task unloading of all vehicles according to the task unloading decision of the vehicles, the task unloading delay of the vehicles for unloading the tasks to the base station for task processing and the calculation resource unit price of the edge server.
4. A blockchain and edge computing collaborative internet of vehicles task offloading and content caching method as in claim 3, wherein the formulas for computing total energy consumption for task offloading and total computation overhead for all vehicles are:
wherein E_n(t) represents the total energy consumption of task unloading of all vehicles in time slot t, E_n1(t) represents the energy consumption of the nth vehicle performing task processing locally in time slot t, x_mn(t) represents the task unloading decision of the nth vehicle in time slot t, E_n2(t) represents the energy consumption of the nth vehicle in time slot t for unloading the task to the base station for task processing, N represents the total number of vehicles, o_n(t) represents the total calculation overhead of all vehicles, ξ_I1(t) represents the computing resource unit price of the edge server, and T_1 represents the task unloading delay for the vehicle to unload the task to the base station for task processing.
5. The blockchain and edge computing collaborative Internet of Vehicles task offloading and content caching method according to claim 1, wherein the process of constructing the content caching decision model comprises: the content caching decision includes caching the content at the base station or not caching the content at the base station; calculating the first transmission energy consumption and the first transmission delay incurred while the vehicle receives the content when the content is cached at the base station; setting the content preference; calculating, according to the content preference, the second transmission energy consumption and the second transmission delay incurred while the vehicle receives the content when the content is not cached at the base station; calculating the content caching energy consumption according to the content caching decision, the first transmission energy consumption and the second transmission energy consumption; and calculating the content caching overhead according to the content caching decision, the first transmission delay and the second transmission delay.
6. The blockchain and edge computing collaborative Internet of Vehicles task offloading and content caching method according to claim 5, wherein the formulas for calculating the content caching energy consumption and the content caching overhead are respectively:
wherein E_m(t) represents the content caching energy consumption in time slot t, E_{m1}(t) represents the first transmission energy consumption for transmitting the mth content, x_n(t) represents the content caching decision of the nth vehicle, E_{m2}(t) represents the second transmission energy consumption for transmitting the mth content, M represents the total number of contents requested by the vehicles, N represents the total number of vehicles, O_m(t) represents the content caching overhead, T_download represents the downlink delay for returning the content from the roadside unit to the vehicle, ξ_{I1(t)} represents the computing resource unit price of the edge server, and τ represents the round-trip delay of a user request between the base station and the content provider.
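The formula images of claim 6 are likewise absent. A sketch consistent with the definitions above (an assumption, taking x_n(t) = 1 to denote that the requested content is cached at the base station) is:

E_m(t) = \sum_{m=1}^{M} \Big[ x_n(t)\, E_{m1}(t) + \big(1 - x_n(t)\big)\, E_{m2}(t) \Big]

O_m(t) = \sum_{m=1}^{M} \xi_{I1(t)} \Big[ x_n(t)\, T_{download} + \big(1 - x_n(t)\big)\, \big(T_{download} + \tau\big) \Big]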
7. The blockchain and edge computing collaborative Internet of Vehicles task offloading and content caching method according to claim 1, wherein the process of constructing the blockchain model comprises: using the base stations as blockchain nodes, performing blockchain consensus with the PBFT consensus mechanism, and calculating the total computation period of the PBFT consensus process; and calculating the consensus energy consumption, the consensus overhead and the consensus delay of the blockchain system according to the total computation period of the PBFT consensus process.
8. The blockchain and edge computing collaborative Internet of Vehicles task offloading and content caching method according to claim 7, wherein the formulas for calculating the consensus energy consumption, the consensus overhead and the consensus delay of the blockchain system are respectively:
wherein E_c(t) represents the consensus energy consumption of the blockchain system in time slot t, p_r represents the transmission rate between the base station and the provider, d(t) represents the total transaction size in the block in time slot t, r_{m,c}(t) represents the transmission rate between the cloud server and the MEC server, p_c represents the computing power of the cloud server, U(t) represents the total computation period of the PBFT consensus process, I_2(t) represents the computing resources of the cloud server in time slot t, b(t) represents the number of consensus nodes offloaded to the cloud server, q represents the computing power of the edge server, B represents the number of base stations, O_c(t) represents the consensus overhead, ξ_{I1(t)} represents the computing resource unit price of the edge server, I_1(t) represents the computing resources of the edge server in time slot t, T_i(t) represents the block interval time, ξ_{I2(t)} represents the computing resource unit price of the cloud server, T_c(t) represents the consensus delay, and T_b(t) represents the broadcast delay between nodes.
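For orientation, the following is a minimal sketch of one way the consensus quantities named in claim 8 could be accounted for when the PBFT workload U(t) is split between base-station (edge) consensus nodes and b(t) nodes offloaded to the cloud. The even split of U(t), the max-based delay composition and all function and parameter names are assumptions for illustration, not the patent's formulas.

```python
# Illustrative sketch only: assumed bookkeeping of PBFT consensus delay,
# overhead and transmission time; not the patent's exact model.

def pbft_messages(num_nodes: int) -> int:
    """Approximate message count of one PBFT round
    (pre-prepare + prepare + commit), which grows as O(n^2)."""
    n = num_nodes
    return (n - 1) + n * (n - 1) + n * (n - 1)

def consensus_metrics(u_total: float, b_cloud: int, num_bs: int,
                      q_edge: float, i2_cloud: float,
                      xi_edge: float, xi_cloud: float,
                      t_broadcast: float, d_block: float, r_rate: float):
    """Return (consensus delay, consensus overhead, transaction transfer time)
    for one block, given claim-8 style inputs."""
    per_node = u_total / num_bs                   # computation period per consensus node
    t_edge = per_node / q_edge                    # processing time on a base-station node
    t_cloud = per_node / i2_cloud                 # processing time on a cloud-hosted node
    delay = max(t_edge, t_cloud) + t_broadcast    # slowest node plus inter-node broadcast
    overhead = ((num_bs - b_cloud) * per_node * xi_edge
                + b_cloud * per_node * xi_cloud)  # pay for edge and cloud CPU cycles
    tx_time = d_block / r_rate                    # time to ship the block's transactions
    return delay, overhead, tx_time
```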
9. The blockchain and edge computing collaborative Internet of Vehicles task offloading and content caching method according to claim 1, wherein the process of solving the minimization problem of system energy consumption and overhead with the A3C algorithm comprises: modeling the minimization problem of system energy consumption and overhead as a Markov decision process and constructing its state space, action space and reward function; and solving the minimization problem of system energy consumption and overhead with the A3C algorithm according to the state space, the action space and the reward function to obtain the Internet of Vehicles task offloading and content caching scheme;
the state space is expressed as:
S(t) = {F(t), I_1(t), I_2(t), ξ(t), φ_m(t), x, y, num}
wherein S(t) represents the state space, F(t) represents the energy of the on-board units (OBUs) of the vehicle after task execution, I_1(t) represents the computing resources of the edge server, I_2(t) represents the computing resources of the cloud server, ξ(t) represents the computing resource unit price of the edge server or the cloud server, φ_m(t) represents the content popularity, x represents the abscissa of the vehicle position, y represents the ordinate of the vehicle position, and num represents the number of edge servers serving the vehicle;
the action space is expressed as:
A(t) = {x_m(t), x_n(t), b(t), T_i(t), s(t), l(t)}
wherein A(t) represents the action space, x_m(t) represents the task offloading decision, x_n(t) represents the content caching decision, b(t) represents the number of consensus nodes offloaded to the cloud server, T_i(t) represents the block interval time, s(t) represents the maximum block size, and l(t) represents the distance between the edge server and the vehicle;
the reward function is expressed as:
s.t. C1: F(t) ≥ ρ
C2: T_c(t) ≤ ε × T_i(t)
C3: d(t) ≤ s(t)
wherein R(t) represents the reward in time slot t, w_1 represents the energy consumption weight, w_2 represents the overhead weight, ψ(t) represents the transaction throughput in time slot t, η represents the weight coefficient of the consumption cost in the system, W(t) represents the consumption cost, V represents the size of the vehicle set, ρ represents the minimum energy of the on-board OBUs, T_c(t) represents the consensus delay of the blockchain system, ε represents the time limit for completing a block, and d(t) represents the total transaction size in a block in time slot t.
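The reward expression itself is not reproduced in this text. The sketch below shows one plausible MDP interface for the A3C solver of claim 9: the state and action vectors follow the definitions above, while the reward (a throughput term minus weighted energy, overhead and consumption cost, with penalties for violating C1-C3) and all names are assumptions, not the patent's exact function. In an A3C implementation, several workers would interact with copies of such an environment and asynchronously update shared actor and critic networks.

```python
# Illustrative MDP interface sketch for the A3C formulation in claim 9.
# The reward composition and penalty values are assumptions.
import numpy as np

def build_state(F, I1, I2, xi, phi_m, x, y, num):
    """State S(t) = {F(t), I1(t), I2(t), xi(t), phi_m(t), x, y, num}."""
    return np.array([F, I1, I2, xi, phi_m, x, y, num], dtype=np.float32)

def build_action(x_m, x_n, b, T_i, s, l):
    """Action A(t) = {x_m(t), x_n(t), b(t), T_i(t), s(t), l(t)}."""
    return np.array([x_m, x_n, b, T_i, s, l], dtype=np.float32)

def reward(E_total, O_total, psi, W, F, T_c, d, s,
           w1, w2, eta, rho, eps, T_i):
    """Per-slot reward: favour throughput, penalise energy, overhead,
    consumption cost and violations of C1-C3."""
    r = psi - w1 * E_total - w2 * O_total - eta * W
    if F < rho:          # C1: OBU energy must stay above the minimum rho
        r -= 100.0
    if T_c > eps * T_i:  # C2: consensus must finish within the block time limit
        r -= 100.0
    if d > s:            # C3: transactions must fit into the maximum block size
        r -= 100.0
    return r
```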
CN202310700813.0A 2023-06-14 2023-06-14 Internet of vehicles task unloading and content caching method with cooperative blockchain and edge calculation Pending CN116566838A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310700813.0A CN116566838A (en) 2023-06-14 2023-06-14 Internet of vehicles task unloading and content caching method with cooperative blockchain and edge calculation

Publications (1)

Publication Number Publication Date
CN116566838A true CN116566838A (en) 2023-08-08

Family

ID=87489999

Country Status (1)

Country Link
CN (1) CN116566838A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116761152A (en) * 2023-08-14 2023-09-15 合肥工业大学 Roadside unit edge cache placement and content delivery method
CN116761152B (en) * 2023-08-14 2023-11-03 合肥工业大学 Roadside unit edge cache placement and content delivery method
CN116828226A (en) * 2023-08-28 2023-09-29 南京邮电大学 Cloud edge end collaborative video stream caching system based on block chain
CN116828226B (en) * 2023-08-28 2023-11-10 南京邮电大学 Cloud edge end collaborative video stream caching system based on block chain
CN117251296A (en) * 2023-11-15 2023-12-19 成都信息工程大学 Mobile edge computing task unloading method with caching mechanism
CN117251296B (en) * 2023-11-15 2024-03-12 成都信息工程大学 Mobile edge computing task unloading method with caching mechanism

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination