CN117891613A - Computing unloading and resource allocation method and system based on Internet of vehicles - Google Patents


Info

Publication number
CN117891613A
Authority
CN
China
Prior art keywords: iteration, resource allocation, unloading, strategy, vehicle
Prior art date
Legal status
Granted
Application number
CN202410229713.9A
Other languages
Chinese (zh)
Other versions
CN117891613B (en)
Inventor
曹敦
王毓斌
杨逸帆
易国栋
Current Assignee
Xiangjiang Laboratory
Original Assignee
Xiangjiang Laboratory
Priority date
Filing date
Publication date
Application filed by Xiangjiang Laboratory filed Critical Xiangjiang Laboratory
Priority to CN202410229713.9A priority Critical patent/CN117891613B/en
Priority claimed from CN202410229713.9A external-priority patent/CN117891613B/en
Publication of CN117891613A publication Critical patent/CN117891613A/en
Application granted granted Critical
Publication of CN117891613B publication Critical patent/CN117891613B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The application relates to a calculation unloading and resource allocation method and system based on the Internet of vehicles. The method optimizes the unloading strategy of the vehicles and the resource allocation strategy of the edge server simultaneously: under the guidance of cooperative game theory, the unloading strategy of each round is calculated with the goal of minimizing the task completion delay of the heterogeneous tasks, and according to the DDPG (Deep Deterministic Policy Gradient) method, the resource allocation strategy of each round is calculated with the same goal. If the difference between the task completion delays of the unloading strategy and the resource allocation strategy of two successive rounds of iteration is smaller than a preset value, the obtained final unloading strategy and final resource allocation strategy can minimize the delay of completing the heterogeneous tasks generated simultaneously by a plurality of vehicles, thereby improving the overall performance of the vehicle edge computing system.

Description

Computing unloading and resource allocation method and system based on Internet of vehicles
Technical Field
The embodiment of the application relates to the technical field of Internet of vehicles, in particular to a method and a system for computing unloading and resource allocation based on Internet of vehicles.
Background
Advances in Artificial Intelligence (AI) and Internet of Vehicles (IoV) technology have played a vital role in the rapid development of vehicle applications in areas such as Augmented Reality (AR), Virtual Reality (VR), autonomous driving, multimedia, and video analytics, which can improve the driving experience of drivers and passengers. Correspondingly, however, these new applications are increasingly demanding in terms of low-latency communication, computation, and caching resources. Although cloud computing network architectures can provide computing and caching resources for vehicles over a centralized network, the low-latency requirements of vehicle network applications cannot be met due to the long communication links. Vehicular Edge Computing (VEC) is a form of Mobile Edge Computing (MEC); as a distributed network architecture, it can distribute the content and functions of the centralized network to the edge side, shortening the transmission path between the vehicle and the server and thus reducing the processing delay of the user. Thus, VEC can serve as an efficient framework for handling vehicle requests in the vehicle network.
Calculation unloading means that a vehicle terminal generates a task, unloads the task to another device for processing, and then receives the returned calculation result. Computing resource allocation refers to the problem that, because multiple tasks may be unloaded to the same server, the server needs to process multiple tasks in parallel, which involves CPU frequency allocation. When only a single vehicle initiates a task, an unloading scheme is relatively easy to calculate; but when a plurality of vehicles initiate heterogeneous requests, how to design a good calculation unloading strategy and computing resource allocation strategy to optimize the overall delay of the system is a technical problem to be solved urgently at present.
Disclosure of Invention
The embodiment of the invention mainly aims to provide a calculation unloading and resource allocation method and system based on the Internet of vehicles, which can minimize the delay of completing heterogeneous tasks generated simultaneously by a plurality of vehicles and improve the overall performance of the vehicle edge computing system.
To achieve the above object, a first aspect of the embodiments of the present invention provides a calculation unloading and resource allocation method based on the Internet of vehicles, the method including:
If a plurality of vehicles in the vehicle-mounted edge computing system request task unloading, setting an iteration ending condition for the unloading strategy of the vehicles and the resource allocation strategy of the edge server; the iteration ending condition includes at least: the difference between the task completion delays of the unloading strategy and the resource allocation strategy of two successive rounds of iteration is smaller than a preset value; the vehicle-mounted edge computing system includes at least: a cloud server, an edge server, and vehicles within the coverage area of the edge server;
Executing an iteration process until an unloading strategy of the current round of iteration and a resource allocation strategy of the current round of iteration which meet the iteration ending condition are used as a final unloading strategy and a final resource allocation strategy; wherein each round of iterative process comprises:
Obtaining an unloading strategy of the current round of iteration according to a preset unloading method, wherein the unloading strategy comprises determining an unloading object corresponding to each vehicle among the plurality of vehicles; the preset unloading method calculates the unloading strategy by adopting cooperative game theory with the goal of minimizing task completion delay, according to the resource allocation strategy of the previous round of iteration;
Obtaining a resource allocation strategy of the current round of iteration according to a preset resource allocation method, wherein the resource allocation strategy comprises determining the computing resources the edge server allocates to each requesting vehicle; the preset resource allocation method adopts DDPG to calculate the resource allocation with the goal of minimizing task completion delay, according to the unloading strategy of the current round of iteration;
If the task completion delay of the unloading strategy of the current round of iteration and the resource allocation strategy of the current round of iteration meets the iteration ending condition, ending the iteration; and if the iteration ending condition is not met, starting the next iteration.
According to the method, the unloading strategy of each round is calculated with the goal of minimizing the task completion delay of the heterogeneous tasks under the guidance of cooperative game theory, and the DDPG (Deep Deterministic Policy Gradient) method is used to calculate the resource allocation strategy of each round with the same goal. If the difference between the task completion delays of the unloading strategy and the resource allocation strategy of two successive rounds of iteration is smaller than a preset value, the obtained final unloading strategy and final resource allocation strategy can minimize the delay of completing the heterogeneous tasks generated simultaneously by a plurality of vehicles, so that the overall performance of the vehicle edge computing system is improved.
In some embodiments, the obtaining the unloading strategy of the current round iteration according to the preset unloading method includes:
Determining available unloading objects of each vehicle according to the resource allocation strategy of the previous iteration and the system state of the vehicle-mounted edge computing system;
Recursively generating unloading object sets of the vehicles from the available unloading objects of each vehicle, and selecting the unloading object set with the greatest benefit as the unloading strategy of the current round of iteration; the benefit is related to the task completion delays generated when the plurality of vehicles select their corresponding unloading objects.
In some embodiments, before recursively generating the set of offload objects for the vehicle, the internet of vehicles-based computing offload and resource allocation method further comprises:
Calculating a first benefit obtained when a first unloading object is selected by a first vehicle; the first vehicle is any one of the plurality of vehicles, the first unloading object is any one of all available unloading objects of the first vehicle, and the first benefit is a benefit of the plurality of vehicles when the first vehicle selects the first unloading object;
if the first benefit is less than the second benefit, rejecting a policy that the first vehicle selects the first offload object; the second benefit is a benefit of the plurality of vehicles when the first vehicle selects any one of the available offload objects other than the first offload object.
In some embodiments, the benefit is represented by the following function:

U(S) = -Σ_{i∈N_t} T_i(t),

where T_i(t) represents the task completion delay of vehicle v_i initiating a task at time t, N_t is the set of vehicles initiating tasks at time t, and S is the joint unloading strategy; the smaller the total delay, the greater the benefit.
In some embodiments, the state space in DDPG includes:

s_t = (d_i, c_i, F_res),

where d_i denotes the size of the task, c_i denotes the computational complexity of the task, and F_res denotes the computing resources remaining at the edge server.
In some embodiments, the reward space in DDPG includes:

r_t = T_avg(s_t) - T_DDPG(s_t),

where T_avg(s_t) denotes the task completion delay under average allocation, T_DDPG(s_t) denotes the task completion delay when computing resources are allocated according to DDPG, f_t denotes the computing resources allocated to the task by the edge server in state s_t, and f_avg denotes the computing resources the edge server assigns to the task under equal division.
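As a concrete illustration, the reward described above can be sketched as the delay saved relative to an equal split of the edge server's CPU frequency. This is a minimal sketch under that assumption; the names (`reward`, `f_alloc`, `f_equal`) are illustrative and do not appear in the patent.

```python
def reward(task_size, task_complexity, f_alloc, f_equal):
    """Reward for one DDPG allocation step: delay saved versus equal division.

    task_size       -- input data size of the task (e.g. bits)
    task_complexity -- CPU cycles required per unit of data
    f_alloc         -- CPU frequency the DDPG agent allocates to the task (Hz)
    f_equal         -- CPU frequency the task would receive under equal division (Hz)
    """
    t_avg = task_size * task_complexity / f_equal    # delay under average allocation
    t_ddpg = task_size * task_complexity / f_alloc   # delay under the learned allocation
    return t_avg - t_ddpg                            # positive when DDPG beats the equal split

# Allocating more than the equal share to a heavy task yields a positive reward.
r = reward(1e6, 100, 4e9, 2e9)
```

A positive reward thus steers the agent toward allocations whose task completion delay undercuts the average-allocation baseline.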
In some embodiments, the iteration end condition further comprises a maximum number of iterations;
and if the task completion delay of the unloading strategy of the current round of iteration and the resource allocation strategy of the current round of iteration meets the iteration ending condition, ending the iteration comprises:
If the difference between the task completion delay of the unloading strategy and the resource allocation strategy of the current iteration and the task completion delay of the unloading strategy and the resource allocation strategy of the previous iteration is smaller than a preset value, or if the current iteration number reaches the maximum iteration number, the iteration is ended.
To achieve the above object, a second aspect of the embodiments of the present invention provides a computing offloading and resource allocation system based on the internet of vehicles, including:
A request acquisition unit, configured to set an iteration ending condition for the unloading strategy of the vehicles and the resource allocation strategy of the edge server if a plurality of vehicles in the vehicle-mounted edge computing system request task unloading; the iteration ending condition includes at least: the difference between the task completion delays of the unloading strategy and the resource allocation strategy of two successive rounds of iteration is smaller than a preset value; the vehicle-mounted edge computing system includes at least: a cloud server, an edge server, and vehicles within the coverage area of the edge server;
the iteration calculation unit is used for executing an iteration process until the unloading strategy of the current round of iteration and the resource allocation strategy of the current round of iteration which meet the iteration ending condition are used as a final unloading strategy and a final resource allocation strategy; wherein each round of iterative process comprises:
Obtaining an unloading strategy of the current round of iteration according to a preset unloading method, wherein the unloading strategy comprises determining an unloading object corresponding to each vehicle among the plurality of vehicles; the preset unloading method calculates the unloading strategy by adopting cooperative game theory with the goal of minimizing task completion delay, according to the resource allocation strategy of the previous round of iteration;
Obtaining a resource allocation strategy of the current round of iteration according to a preset resource allocation method, wherein the resource allocation strategy comprises determining the computing resources the edge server allocates to each requesting vehicle; the preset resource allocation method adopts DDPG to calculate the resource allocation with the goal of minimizing task completion delay, according to the unloading strategy of the current round of iteration;
If the task completion delay of the unloading strategy of the current round of iteration and the resource allocation strategy of the current round of iteration meets the iteration ending condition, ending the iteration; and if the iteration ending condition is not met, starting the next iteration.
To achieve the above object, a third aspect of the embodiments of the present invention provides an electronic device, including: at least one control processor and a memory for communication connection with the at least one control processor; the memory stores instructions executable by the at least one control processor to enable the at least one control processor to perform a vehicle networking-based computing offload and resource allocation method as described above.
To achieve the above object, a fourth aspect of the embodiments of the present invention provides a computer-readable storage medium storing computer-executable instructions for causing a computer to perform the above-described method for computing offloading and resource allocation based on internet of vehicles.
It is to be understood that the advantages of the second to fourth aspects compared with the related art are the same as those of the first aspect compared with the related art, and reference may be made to the related description in the first aspect, which is not repeated herein.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the description of the embodiments or the related art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present application, and that a person skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a schematic flow chart of a method for computing offloading and resource allocation based on Internet of vehicles according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a vehicle edge computing system provided in accordance with one embodiment of the present application;
fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the application only and is not intended to be limiting of the application.
Introducing related words;
(1) Calculating and unloading: the vehicle terminal generates a task, the task is unloaded to other equipment for processing, and then a calculation result is returned;
(2) Deep reinforcement learning (DRL) is an algorithm that makes decisions by training an Actor network; it includes online and offline training modes, and can calculate the corresponding action according to the current state and then enter the next state;
(3) Cooperative game theory, unlike non-cooperative game theory, pursues the maximization of collective benefit; the cooperative game adopted here is one of complete information, i.e., a centralized decision process;
(4) Computing resource allocation: since multiple tasks may be unloaded to the same server, the server needs to process multiple tasks in parallel, which involves the problem of computing resource allocation (mainly CPU frequency).
Description of the embodiments
Referring to fig. 1, an embodiment of the present application provides a method for computing and offloading and resource allocation based on the internet of vehicles, the method for computing and offloading and resource allocation based on the internet of vehicles including:
Step S110, if a plurality of vehicles in the vehicle-mounted edge computing system request task unloading, setting an iteration ending condition for the unloading strategy of the vehicles and the resource allocation strategy of the edge server; the iteration ending condition includes at least: the difference between the task completion delays of the unloading strategy and the resource allocation strategy of two successive rounds of iteration is smaller than a preset value; the vehicle-mounted edge computing system includes at least: a cloud server, an edge server, and vehicles located within the coverage of the edge server.
Step S120, executing an iteration process until the unloading strategy of the current round of iteration and the resource allocation strategy of the current round of iteration which meet the iteration ending condition are used as a final unloading strategy and a final resource allocation strategy; wherein each round of iterative process comprises:
According to a preset unloading method, an unloading strategy of the current round of iteration is obtained, wherein the unloading strategy comprises determining an unloading object corresponding to each vehicle among the plurality of vehicles; the preset unloading method calculates the unloading strategy by adopting cooperative game theory with the goal of minimizing task completion delay, according to the resource allocation strategy of the previous round of iteration.
Obtaining a resource allocation strategy of the current round of iteration according to a preset resource allocation method, wherein the resource allocation strategy comprises determining the computing resources the edge server allocates to each requesting vehicle; the preset resource allocation method adopts DDPG to calculate the resource allocation with the goal of minimizing task completion delay, according to the unloading strategy of the current round of iteration.
If the task completion delays of the unloading strategy and the resource allocation strategy of the current round of iteration meet the iteration ending condition, the iteration ends; if the iteration ending condition is not met, the next round of iteration starts.
According to the method, if a plurality of vehicles initiate heterogeneous tasks, on the one hand the unloading strategy of each round is calculated with the goal of minimizing the task completion delay of the heterogeneous tasks under the guidance of cooperative game theory, and on the other hand the resource allocation strategy of each round is calculated with the same goal by using the DDPG method. If the difference between the task completion delays of the unloading strategy and the resource allocation strategy of two successive rounds of iteration is smaller than a preset value, the obtained final unloading strategy and final resource allocation strategy can minimize the delay of completing the heterogeneous tasks generated simultaneously by the plurality of vehicles, improving the overall performance of the vehicle edge computing system.
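The alternating iteration of steps S110 to S120 can be sketched as follows. This is a hedged outline rather than the patent's own pseudo code: `compute_offload`, `compute_allocation`, and `system_delay` are placeholders standing in for the cooperative-game unloading solver, the DDPG allocator, and the system-wide task-completion-delay evaluator.

```python
def alternate_optimize(compute_offload, compute_allocation, system_delay,
                       eps=1e-3, max_iters=50):
    """Alternately refine the unloading strategy and the resource allocation.

    Stops when the delays of two successive rounds differ by less than eps,
    or when max_iters rounds have run.
    """
    allocation = None              # previous-round allocation (None -> solver's default)
    prev_delay = float("inf")
    offload = None
    for _ in range(max_iters):
        offload = compute_offload(allocation)      # game-theoretic unloading strategy
        allocation = compute_allocation(offload)   # DDPG resource allocation
        delay = system_delay(offload, allocation)
        if abs(prev_delay - delay) < eps:          # successive-round delays converged
            break
        prev_delay = delay
    return offload, allocation

# Toy stand-ins: each round halves the gap between the delay and its limit 1.0.
state = {"d": 8.0}
def toy_offload(_alloc):
    return "edge"
def toy_allocate(_off):
    state["d"] = 1.0 + (state["d"] - 1.0) / 2.0
    return state["d"]
def toy_delay(_off, alloc):
    return alloc

result = alternate_optimize(toy_offload, toy_allocate, toy_delay, eps=1e-6)
```

With the toy callables the loop converges well before `max_iters`, returning the unloading choice together with an allocation whose delay is within `eps` of the fixed point.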
For ease of understanding, a detailed embodiment is provided:
Referring to fig. 2, the vehicle-mounted edge computing system has a three-tier architecture: the cloud, the edge server, and the vehicles. If n vehicles run under the edge server, they form the set V = {v_1, v_2, …, v_n}. Let e denote the edge server, v_i (i = 1, …, n) denote a vehicle under the coverage of the edge server's communication range, and c denote the cloud server. Assume that at time t there are N_t vehicles that initiate calculation task requests; each such vehicle can choose to compute locally, unload to the edge server, unload to other vehicles, or unload to the cloud server. Before introducing the unloading strategy and the resource allocation strategy of the present embodiment, examples of the motion model, communication model, and calculation model of the vehicle-mounted edge computing system are described below:
a motion model;
Due to the predictability of vehicle motion, the formulas describing how the distances between vehicle nodes, and between a vehicle node and the edge server node, change over time can be estimated. According to wireless communication theory and the wireless communication model, the uplink and downlink transmission rates between nodes can then be calculated in advance, laying a foundation for the subsequent work.
A driving trajectory prediction model is constructed to predict the position information at time t + Δt from the position information at time t. Assuming that all vehicles travel one way at constant speed, one-way multi-lane two-dimensional coordinates are established along the road parallel to the driving direction. Suppose vehicle v_i initiates a calculation task request at time t and its two-dimensional coordinates at time t are (x_i(t), y_i(t)); its two-dimensional coordinates at time t + Δt are:

(x_i(t + Δt), y_i(t + Δt)) = (x_i(t) + s_i·Δt, y_i(t) + rand(-1, 1)·w),

where s_i represents the speed of vehicle v_i at time t; the rand function generates a random integer in {-1, 0, 1}, i.e., at most one lane change is allowed within time Δt; and w denotes the lane width. From these coordinates, the Euclidean distance dist_{i,e}(t) between the requesting vehicle v_i and the edge server at time t can be obtained.
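The motion model can be sketched as a one-way constant-speed advance with at most one random lane change per time step. This is an illustrative reconstruction; the function and parameter names are assumptions, since the patent's own formula images are not reproduced here.

```python
import math
import random

def next_position(x, y, speed, dt, lane_width, rng=random):
    """Predict a vehicle's coordinates after dt under one-way uniform motion.

    The vehicle advances speed*dt along the road axis and may shift by
    -1, 0, or +1 lane laterally (at most one lane change per dt).
    """
    lane_shift = rng.randint(-1, 1)          # random integer in {-1, 0, 1}
    return x + speed * dt, y + lane_shift * lane_width

def euclidean_distance(p, q):
    """Distance between two nodes (vehicle-vehicle or vehicle-edge server)."""
    return math.hypot(p[0] - q[0], p[1] - q[1])
```

The predicted distance then feeds directly into the communication model below it, since the link rate depends on the vehicle-to-node distance.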
A communication model;
since data transmission involves communication between vehicles, between vehicles and edge servers, and between edge servers and vehicles, it is necessary to determine the communication rate of data exchange between nodes.
Considering the difference between V2V (Vehicle-to-Vehicle) and V2I (Vehicle-to-Infrastructure) communications, and the difference between uplink and downlink, the vehicle-to-edge-server and vehicle-to-vehicle communication rates are defined as follows:

The transmission rate (V2V) between the vehicle v_i initiating the calculation task request task_i and a node (vehicle) v_j is:

R_{i,j}^{V2V}(t) = B_1·log2(1 + p_v·h·dist_{i,j}(t)^{-α} / (N_0·B_1)).

Since a vehicle can upload a cache resource through the edge server acting as a relay when no resources are available at a nearby vehicle, and can unload a calculation task to the edge server through V2I communication, the communication rates between the vehicle and the edge server and between the edge server and the vehicle are expressed as follows:

R_{i,e}(t) = B_2·log2(1 + p_v·h·dist_{i,e}(t)^{-α} / (N_0·B_2)), R_{e,i}(t) = B_2·log2(1 + p_e·h·dist_{i,e}(t)^{-α} / (N_0·B_2)),

where B_1 and B_2 represent the bandwidths of the V2V and V2I channels, respectively; h denotes the channel fading factor of the upload link; α denotes the path loss factor; dist_{i,e}(t) denotes the Euclidean distance between the vehicle and the edge server; N_0 denotes the Gaussian noise density of the communication system; p_v denotes the transmission power of the vehicle; and p_e denotes the transmission power of the edge server.
The edge server and the cloud server communicate through a port. Assuming that the communication rate between the cloud and the edge server is stable, it does not change over time t and is denoted R_{e,c}.
Further, the communication radii of the vehicle and the edge server are denoted r_v and r_e, respectively.
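The link rates above follow the standard Shannon-capacity form implied by the listed parameters (bandwidth, transmit power, fading, path loss, noise density). The sketch below assumes that form; the exact expression in the patent's formula images is unavailable, and all parameter names are illustrative.

```python
import math

def link_rate(bandwidth, tx_power, fading, distance, path_loss_exp, noise_density):
    """Shannon-capacity transmission rate (bit/s) between two nodes.

    bandwidth      -- channel bandwidth B (Hz)
    tx_power       -- transmission power p of the sending node (W)
    fading         -- channel fading factor h of the link
    distance       -- Euclidean distance between the nodes (m)
    path_loss_exp  -- path loss factor alpha
    noise_density  -- Gaussian noise density N0 (W/Hz)
    """
    rx_power = tx_power * fading * distance ** (-path_loss_exp)  # received signal power
    snr = rx_power / (noise_density * bandwidth)                 # signal-to-noise ratio
    return bandwidth * math.log2(1.0 + snr)

# Rate falls as the vehicle moves away from the edge server.
r_near = link_rate(10e6, 0.2, 1.0, 50.0, 3.0, 1e-17)
r_far = link_rate(10e6, 0.2, 1.0, 200.0, 3.0, 1e-17)
```

The same helper serves V2V and V2I links by swapping in the corresponding bandwidth and transmit power, matching the three rate expressions above.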
Computational models (showing task completion delays when a vehicle initiates various computational task requests);
When a vehicle initiates a task request, the calculation result is assumed to be small, so the delay of transmitting the result back is ignored. Consider the cases where a vehicle task is calculated on the various nodes: local computing, unloading to the edge server, unloading to the cloud server, and unloading to nearby vehicles.

T_i(t) is used to represent the delay of vehicle v_i when it initiates the task request task_i at time t.
Case (1): when the vehicle performs local calculation, the delay required for the task includes the calculation time of the task and the delay of calling the vehicle cache resource, and the calculation formula is as follows:

T_i^{loc}(t) = d_i·c_i / f_l + t_v^{call},

where f_l represents the computational power of the local node associated with vehicle v_i, t_v^{call} represents the latency of invoking the vehicle cache resource, d_i represents the size of the task, and c_i represents the complexity of task computation. The cache resource is a tool for solving task calculation and can be placed at the corresponding node in advance to reduce calling time; if the selected unloading object does not hold the cache resource when the task is initiated, the resource needs to be called, and the delay of calling the vehicle cache resource is then incurred.
Case (2): when the vehicle unloads the task to the edge server, it chooses to perform the calculation on the service node e.

The required delay consists of the latency of unloading the task input data to the edge server, the latency of invoking the cache resources required for the task, and the computational latency of the edge server. According to the caching model described above, the total delay of unloading to the edge server is expressed as:

T_i^{edge}(t) = d_i / R_{i,e}(t) + (1 - δ_i)·t_e^{call} + d_i·c_i / f_{e,i},

where f_{e,i} represents the computational resources the edge server node e allocates to vehicle v_i, the variable δ_i represents whether the edge server has cached the resources required for request task_i, and t_e^{call} represents the cache call delay of the edge server when vehicle v_i initiates task_i at time t.
Case (3): when the vehicle unloads a task to the cloud server c, the required delay includes the time to upload the input data to the cloud server through the edge server and the calculation time of the cloud server:

T_i^{cloud}(t) = d_i / R_{i,e}(t) + d_i / R_{e,c} + d_i·c_i / f_c,

where f_c represents the computing power allocated to each task in the cloud server.
Case (4): when the vehicle unloads the task to a nearby vehicle. Since the computing power of the vehicles is assumed to be the same, the unloading decision need not take into account computing-power differences between vehicles. When considering unloading tasks to other vehicles, only vehicles within one hop of vehicle v_i's V2V communication range are considered, and all such vehicles are assumed to hold cache resources (the cache resources are tools for solving task calculation). The delay required for unloading is calculated as follows:

T_i^{v2v}(t) = d_i / R_{i,j}^{V2V}(t) + d_i·c_i / f_v,

where f_v is the common computing power of the vehicles. In all of the above cases, the delay of the vehicle is expressed as:

T_i(t) = Σ_{j∈M} x_{i,j}·T_{i,j}(t),

where the decision variable x_{i,j} ∈ {0, 1} indicates whether vehicle v_i unloads its task to computing node j at time t, M is the set of candidate computing nodes, and T_{i,j}(t) is the corresponding case delay above. When x_{i,j} = 1, vehicle v_i chooses to unload the task to node j.
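The four delay cases can be sketched as straight transcriptions of transmission time plus computation time (plus cache-call time where applicable). All parameter names here are illustrative reconstructions; the symbols in the patent's formula images are unavailable.

```python
def local_delay(d, c, f_local, t_cache):
    """Case (1): compute on the vehicle itself (cycles/f plus cache call)."""
    return d * c / f_local + t_cache

def edge_delay(d, c, r_v2i, f_edge, cached, t_cache_edge):
    """Case (2): upload over V2I, fetch the cache only if missing, then compute."""
    fetch = 0.0 if cached else t_cache_edge
    return d / r_v2i + fetch + d * c / f_edge

def cloud_delay(d, c, r_v2i, r_edge_cloud, f_cloud):
    """Case (3): relay through the edge server to the cloud, then compute."""
    return d / r_v2i + d / r_edge_cloud + d * c / f_cloud

def v2v_delay(d, c, r_v2v, f_vehicle):
    """Case (4): unload to a one-hop neighbour vehicle (cache assumed present)."""
    return d / r_v2v + d * c / f_vehicle
```

Comparing these four values for a given task reproduces the per-vehicle choice that the decision variable above encodes: the selected node is the one whose case delay is smallest under the current rates and allocations.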
After introducing the motion model, the communication model and the calculation model described above, the present embodiment is used to calculate the final offloading policy and the resource allocation policy by an iterative method of offloading policy and resource allocation (see the pseudo code of the subsequent method 2 for details), where the offloading policy is calculated based on the cooperative game theory, and the resource allocation policy is calculated based on DDPG (i.e. reinforcement learning) mode.
When a vehicle is about to initiate a calculation task request, it sends the task-related information and its state information to the edge server in a compact header-file format. The edge server collects the task information from the plurality of vehicles at the current time and designs an unloading scheme for the vehicle edge computing system based on the collected information. Thus, the calculation unloading problem of multitasking vehicles can be conceptualized as a complete-information static cooperative game problem. For the vehicles that initiate calculation tasks, strategies (unloading strategies) are set with the aim of reducing the overall calculation latency of the system.
When a plurality of vehicles in the vehicle-mounted edge computing system initiate tasks at the same moment, in order to reduce the overall delay of the multitasking vehicles as much as possible, a vehicle about to initiate a calculation task request sends the task-related information and the vehicle position information to the edge server; the edge server collects the task information of the plurality of vehicles at the current moment and formulates the unloading strategy of the system. The multitasking-vehicle calculation unloading problem is modeled as a complete-information static cooperative game problem as follows:
Vehicles initiating computing task requests: at each time, the set of vehicles initiating computing task requests is defined. The common goal of these vehicles is to shorten the completion delay of all vehicle computing task requests across the entire vehicle edge computing system.
Computing offloading strategies: each vehicle initiating a computing task request has a strategy set consisting of the computing nodes it can select, and chooses one offloading strategy (i.e., one offload object) from that set. After the game decides the offloading, each vehicle's final strategy is determined, yielding the overall offloading policy.
Utility function: this embodiment defines the utility function of the cooperative game, which aims to reduce, to the greatest extent, the task completion delay of the vehicle edge computing system for the computing task requests. The utility function is expressed as:
,
The revenue is determined by the joint strategy; the overall revenue of the vehicle edge computing system serves as the revenue of each individual in the cooperative game, so the smaller the overall system delay, the higher the system revenue and, correspondingly, each individual's revenue. In this complete-information cooperative game, the vehicles negotiate to determine the offload objects that maximize the utility function, creating a collective environment that tends to maximize the collective benefit. Seemingly irrational individual behavior may therefore arise: a vehicle user may change its strategy to reduce the overall task delay of the system even if this increases its own computation delay (within an acceptable delay threshold).
To refine the strategy sets and optimize the search space, the range of search parameters is narrowed, reducing the time required to find the optimal solution of the game: strict elimination of dominated strategies is applied, i.e., a given strategy of a vehicle can be removed from its strategy set if it is strictly dominated. The following is the pseudocode of the offloading method that computes the offloading policy in this embodiment:
Method 1: Offloading method under the guidance of cooperative game theory
Input:
the state of the vehicle edge computing system and the set of vehicles initiating computing task requests at the current time;
system cache decisions (set in advance);
the resource allocation policy (see later for details);
Output:
the offloading policy;
1: initialize the list of vehicles initiating computing task requests;
2: for each such vehicle, initialize its strategy set by selecting the available computing offload objects according to the cache limit and the vehicle communication range limit;
3: initialize the utility function;
4: // strict elimination of dominated strategies:
5: FOR each vehicle in the list
6: FOR each strategy s in the vehicle's strategy set
7: remove s to generate the residual strategy set;
8: dominated ← TRUE;
9: FOR each vehicle in the list
10: FOR each strategy in that vehicle's strategy set
11: recursively generate a strategy combination;
12: IF s is not worse than its alternatives under this combination
13: dominated ← FALSE;
14: BREAK
15: ENDIF
16: ENDFOR
17: ENDFOR
18: IF dominated
19: remove s from the strategy set
20: ENDIF
21: ENDFOR
22: ENDFOR
23: RETURN the pruned sets of available offload objects;
24: // negotiation algorithm:
25: optimal comprehensive utility U* ← −∞;
26: optimal strategy combination A* ← ∅;
27: FOR each vehicle DO
28: FOR each strategy in its strategy set DO
29: recursively select a strategy combination together with the other vehicles' strategies;
30: compute the current total utility U;
31: IF U > U* THEN
32: U* ← U;
33: A* ← the current strategy combination;
34: ENDIF
35: ENDFOR
36: ENDFOR
37: RETURN the offloading policy updated by A*
The pseudocode is explained as follows:
In the first step (lines 1 to 3), the available offload objects are initialized according to the communication range of the vehicle and the deployed cache resources.
Second step (lines 4 to 23): each vehicle's strategy set is simplified by strictly eliminating dominated strategies. The core idea is as follows: if, no matter which strategies the other task-initiating vehicles adopt, a given strategy never brings a higher benefit than the vehicle's other strategies, it is a dominated strategy and can be removed.
Third step (lines 25 to 37): the vehicle edge computing system recursively generates strategy combinations for all vehicles participating in the computing tasks and selects the combination with the highest total benefit.
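As a concrete illustration of the pruning and negotiation steps above, the following is a minimal sketch. The two-vehicle scenario, the "local"/"edge" offload objects, and the congestion-style delay model are hypothetical stand-ins for the patent's symbols and models; only the two-phase structure (strict elimination of dominated strategies, then exhaustive utility maximization) mirrors the pseudocode.

```python
from itertools import product

# Toy delay model (hypothetical): two vehicles each choose "local" or "edge".
# The edge server is faster but congests when several vehicles pick it.
def delay(i, combo):
    if combo[i] == "local":
        return 10.0
    return 4.0 + 3.0 * (list(combo).count("edge") - 1)

def total_utility(combo, n):
    # System-wide benefit: higher when the overall completion delay is lower.
    return -sum(delay(i, combo) for i in range(n))

def prune_dominated(strategy_sets):
    """Strictly eliminate dominated strategies, vehicle by vehicle."""
    pruned = [list(s) for s in strategy_sets]
    for i in range(len(pruned)):
        keep = []
        for s in pruned[i]:
            opponents = pruned[:i] + pruned[i + 1:]

            def d(choice, opp):
                combo = list(opp[:i]) + [choice] + list(opp[i:])
                return delay(i, combo)

            # s is dominated if some alternative is strictly better
            # against every combination of the other vehicles' strategies.
            dominated = any(
                all(d(alt, opp) < d(s, opp) for opp in product(*opponents))
                for alt in pruned[i] if alt != s
            )
            if not dominated:
                keep.append(s)
        pruned[i] = keep
    return pruned

def negotiate(strategy_sets, n):
    """Exhaustively pick the joint strategy with the highest total utility."""
    return max(product(*strategy_sets), key=lambda c: total_utility(c, n))

sets = prune_dominated([["local", "edge"], ["local", "edge"]])
best = negotiate(sets, 2)
print(sets, best)  # under this toy model, both vehicles keep only "edge"
```

In this toy instance "local" is strictly dominated for both vehicles, so pruning shrinks the search space before the negotiation phase enumerates the remaining combinations.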
Given the offloading policy determined by the cooperative game above, multiple computing task requests may be offloaded to an edge server. The edge server can allocate computing resources to different tasks; because its computing resources are divisible, the allocation can be viewed as a continuous action space. The computing resource allocation problem is therefore translated into a Markov Decision Process (MDP), and DDPG is employed to formulate the allocation policy. This embodiment defines the state space, action space, and reward space as follows:
state space: when the edge server receives multiple tasks simultaneously, to model them as MDPs, the tasks that were offloaded to the vehicle edge computing system are built as a task queue with a time order. Wherein task queue/> has its original eigenvalue/> . This transformation allows the state sequence/> to be formulated within the MDP framework and decisions to be made based on the task characteristics of each step in the sequence, and in addition, the computing resource size of the edge servers can also have an impact on the allocation policy. The/> in the input state is determined according to the task queue with timing determined above, as well as the remaining computing resources of the edge server.
Action space: when a task is offloaded to an edge server, the edge server may allocate the maximum remaining capacity of computing resources for the task offloaded to the edge server. To export the corresponding computing resource size, an allocation action is formulated for the remaining computing resources using the Actor network. Where/> denotes the allocation proportion of the remaining servers, the computational resource size allocated to a task in state/> can be expressed using/> . Thus, the state transition matrix can be expressed as:
Bonus space: for each task, once the assigned computing resources are determined, the computation delay for that task may be computed. Since the primary goal is to minimize overall system latency, task completion latency can be treated as an immediate reward. To speed up convergence, the instant prize is set to:
The reward is the ratio between the delay difference and the difference in the allocated computing frequency. When the benefit produced by the reduced delay outweighs the deviation from the equal allocation, the reward function increases, indicating that the action performs better in the given state.
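The MDP elements above can be sketched as follows. This is a minimal sketch under simplifying assumptions: the `Task` fields, the linear cycles-per-bit delay model, and the exact reward form are illustrative stand-ins, since the patent's precise formulas appear only in the claims.

```python
from dataclasses import dataclass

@dataclass
class Task:
    size: float        # task data size (e.g. bits)
    complexity: float  # CPU cycles required per bit

def step(task: Task, remaining: float, proportion: float):
    """One allocation step: the actor's action is the proportion of the
    server's remaining computing resources to grant this task."""
    allocated = proportion * remaining
    delay = task.size * task.complexity / allocated  # computation delay
    return delay, allocated, remaining - allocated   # next state's residue

def reward(avg_delay: float, ddpg_delay: float,
           allocated: float, fair_share: float) -> float:
    # Toy reward in the spirit of the text: delay saved relative to the
    # equal allocation, divided by how far the allocation deviates from
    # an equal split (small epsilon avoids division by zero).
    return (avg_delay - ddpg_delay) / (abs(allocated - fair_share) + 1e-9)

t = Task(size=1e6, complexity=100.0)
d, alloc, left = step(t, remaining=1e9, proportion=0.5)
print(d)  # 0.2 with these toy numbers (1e8 cycles / 5e8 cycles-per-second)
```

The key point the sketch shows is that the action is a continuous proportion, so the allocated resource and hence the per-task delay vary smoothly with the actor's output.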
DDPG is an existing method; implementing this embodiment includes designing the Reset, Step, and related functions of the DDPG network to match the state space, action space, and reward space designed above. The specific DDPG workflow is as follows:
(1) Sampling: a state S is randomly taken and input into the Actor current network, which selects an action A according to S. The action A is input into the environment, which outputs the corresponding reward R and the next state S', forming the five-tuple transition (S, A, R, S', done); this transition is placed into the experience replay pool. Each time a transition is stored, the amount of data in the pool is checked: if the pool is full (or reaches a preset threshold), the training process is executed; otherwise, sampling continues. The next state S' is also input into the Actor target network to select the corresponding action A', which is kept for the Critic target network to compute the Q' value during training;
(2) Training process: n pieces of data are taken out from the pool to be decompressed, and the data which are grouped are respectively transmitted to a Critic current network and a Critic target network;
(3) Parameter updating flow: the network parameters are updated every round in a soft update manner.
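The experience pool and per-round soft update described above can be sketched without any deep-learning framework. This is a minimal sketch: the buffer capacity, batch size, and τ value are illustrative choices, and the networks are reduced to parameter dictionaries.

```python
import random
from collections import deque

class ReplayBuffer:
    """Experience pool of (S, A, R, S', done) transitions."""
    def __init__(self, capacity: int, batch_size: int):
        self.pool = deque(maxlen=capacity)  # old transitions fall out
        self.batch_size = batch_size

    def push(self, s, a, r, s_next, done):
        self.pool.append((s, a, r, s_next, done))

    def ready(self) -> bool:
        # Train only once enough transitions have been collected.
        return len(self.pool) >= self.batch_size

    def sample(self):
        return random.sample(self.pool, self.batch_size)

def soft_update(target: dict, current: dict, tau: float = 0.005) -> dict:
    """Move each target-network parameter a small step toward the
    current network, as in DDPG's per-round soft update."""
    return {k: tau * current[k] + (1 - tau) * target[k] for k in target}

buf = ReplayBuffer(capacity=1000, batch_size=2)
buf.push("s0", 0.3, 1.0, "s1", False)
buf.push("s1", 0.7, 0.5, "s2", True)
batch = buf.sample() if buf.ready() else []
new_target = soft_update({"w": 0.0}, {"w": 1.0})
print(len(batch), new_target["w"])  # 2 0.005
```

The small τ keeps the target networks slowly tracking the current networks, which is what stabilizes the Q' targets mentioned in step (1).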
By combining the offloading method and the resource allocation method above, a two-layer iterative optimization method is provided to obtain enhanced computation offloading and resource allocation policies. The offloading policy and the resource allocation policy are improved in an alternating manner until convergence is reached. At initialization, the equal-allocation policy may be used as the initial resource allocation policy, together with an initial offloading policy.
Method 2: iterative optimization method for unloading strategy and resource allocation strategy
Input:
Initializing a computing resource allocation policy ;
initializing an offloading policy ;
Initializing a task overall completion delay ,/>;
Iteration number ;
iteration threshold ;
and (3) outputting:
,/> after optimization;
Task completion delay when the vehicle initiates a compute task request;
1: FORto/>do;
Updating an unloading strategy;
Obtaining an unloading strategy by using the unloading method;
Update offload policies (i.e./> assigned to/> );
updating a resource allocation strategy;
Obtaining a computing resource allocation policy using the above-described "resource allocation method";
Updating the resource allocation strategy of the last iteration;
8, checking termination conditions;
9 calculating the overall completion delay/> from and/> ;
IF or relative increase/> ;
11:BREAK
12:ENDIF
13, updating ;
Return final , final/> , final/> ;
The explanation of the pseudocode is as follows:
At initialization, the network for computing resource allocation is initialized and a computation offloading decision is made on that basis, from which the computation completion delay is obtained; then, with the offloading policy fixed, the resource-allocation network is updated to output a new completion delay. Iteration terminates when the difference between the two delays is smaller than the threshold, at which point convergence is judged.
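The alternating scheme just described can be sketched as follows. This is a minimal sketch: `offload_step` and `allocate_step` are hypothetical stand-ins for Method 1 and the DDPG allocator, and the toy delay model simply halves the gap to a floor value each round so that convergence is visible.

```python
def optimize(offload_step, allocate_step, init_alloc,
             max_iters=50, eps=1e-3):
    """Alternate the offloading step and the allocation step until the
    completion delay changes by less than eps between rounds."""
    alloc, prev_delay = init_alloc, float("inf")
    offload, delay = None, prev_delay
    for _ in range(max_iters):
        offload, _ = offload_step(alloc)       # game-theoretic step
        alloc, delay = allocate_step(offload)  # DDPG allocation step
        if abs(prev_delay - delay) < eps:      # convergence check
            break
        prev_delay = delay
    return offload, alloc, delay

# Toy usage: each allocation round halves the delay's gap to a 1.0 floor.
state = {"d": 8.0}
def offload_step(alloc):
    return "offload-policy", state["d"]
def allocate_step(offload):
    state["d"] = 1.0 + (state["d"] - 1.0) / 2
    return "alloc-policy", state["d"]

offload, alloc, delay = optimize(offload_step, allocate_step, "equal-split")
print(delay < 1.01)  # True: converged near the 1.0 floor
```

The termination test mirrors the patent's condition: stop once the delay difference between two consecutive iterations falls below the threshold (or the iteration cap is reached).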
According to the method, the offloading policy of the vehicles and the resource allocation policy of the edge server are optimized simultaneously: each round's offloading policy is calculated under cooperative game theory with the goal of minimizing the completion delay of the heterogeneous tasks, and each round's resource allocation policy is calculated by DDPG with the same goal. When the difference between the task completion delays of two consecutive iterations' offloading and resource allocation policies is smaller than a preset value, the resulting final offloading policy and final resource allocation policy minimize, to the greatest extent, the delay of completing the heterogeneous tasks generated simultaneously by multiple vehicles, improving the overall performance of the vehicle edge computing system.
An embodiment of the present application provides a computing unloading and resource allocation system based on the internet of vehicles, including:
A request acquisition unit, configured to set the iteration end condition for the vehicles' offloading policy and the edge server's resource allocation policy when a plurality of vehicles in the vehicle-mounted edge computing system request task offloading; the iteration end condition at least includes: the difference between the task completion delays of the offloading and resource allocation policies of two consecutive iterations is smaller than a preset value; the vehicle-mounted edge computing system at least includes: a cloud server, an edge server, and the vehicles in the coverage area of the edge server;
An iteration calculation unit, configured to execute the iterative process until an offloading policy and a resource allocation policy of the current round satisfy the iteration end condition and are taken as the final offloading policy and the final resource allocation policy; each round of the iterative process comprises:
obtaining the offloading policy of the current round of iteration according to a preset offloading method, including determining the offload object corresponding to each of the plurality of vehicles; the preset offloading method calculates the offloading policy using cooperative game theory with the goal of minimizing task completion delay, according to the resource allocation policy of the previous iteration;
obtaining the resource allocation policy of the current round of iteration according to a preset resource allocation method, including determining the computing resources the edge server allocates to the requesting vehicles; the preset resource allocation method allocates computing resources using DDPG with the goal of minimizing task completion delay, according to the offloading policy of the current round of iteration;
if the task completion delays of the current round's offloading policy and resource allocation policy satisfy the iteration end condition, the iteration ends; otherwise, the next round of iteration starts.
It should be noted that, the computing and unloading and resource allocation system based on the internet of vehicles provided in this embodiment and the computing and unloading and resource allocation method based on the internet of vehicles described above are based on the same inventive concept, so that the relevant content of the computing and unloading and resource allocation method based on the internet of vehicles described above is also applicable to the content of the computing and unloading and resource allocation system based on the internet of vehicles, and therefore, will not be described herein.
As shown in fig. 3, the embodiment of the present application further provides an electronic device, where the electronic device includes:
at least one memory;
At least one processor;
At least one program;
The programs are stored in the memory, and the processor executes at least one program to implement the Internet-of-Vehicles-based computing offloading and resource allocation method disclosed above.
The electronic device may be any intelligent terminal, including a mobile phone, a tablet computer, a Personal Digital Assistant (PDA), a vehicle-mounted computer, and the like.
The electronic device according to the embodiment of the application is described in detail below.
The processor 1600 may be implemented by a general-purpose Central Processing Unit (CPU), a microprocessor, an Application-Specific Integrated Circuit (ASIC), or one or more integrated circuits, and is configured to execute related programs to implement the technical solutions provided by the embodiments of the present invention;
The memory 1700 may be implemented as Read-Only Memory (ROM), static storage, dynamic storage, or Random Access Memory (RAM). The memory 1700 may store an operating system and other application programs; when the technical solutions provided by the embodiments of this specification are implemented in software or firmware, the relevant program code is stored in the memory 1700 and invoked by the processor 1600 to perform the Internet-of-Vehicles-based computing offloading and resource allocation method of the embodiments of the present invention.
An input/output interface 1800 for implementing information input and output;
the communication interface 1900 is used for realizing communication interaction between the device and other devices, and can realize communication in a wired manner (such as USB, network cable, etc.), or can realize communication in a wireless manner (such as mobile network, WIFI, bluetooth, etc.);
Bus 2000, which transfers information between the various components of the device (e.g., processor 1600, memory 1700, input/output interface 1800, and communication interface 1900);
Wherein processor 1600, memory 1700, input/output interface 1800, and communication interface 1900 enable communication connections within the device between each other via bus 2000.
The embodiment of the invention further provides a storage medium, namely a computer-readable storage medium storing computer-executable instructions for causing a computer to execute the Internet-of-Vehicles-based computing offloading and resource allocation method described above.
The memory, as a non-transitory computer readable storage medium, may be used to store non-transitory software programs as well as non-transitory computer executable programs. In addition, the memory may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device.
In some embodiments, the memory optionally includes memory remotely located relative to the processor, the remote memory being connectable to the processor through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The embodiments described in the present invention are for more clearly describing the technical solutions of the embodiments of the present invention, and do not constitute a limitation on the technical solutions provided by the embodiments of the present invention, and those skilled in the art can know that, with the evolution of technology and the appearance of new application scenarios, the technical solutions provided by the embodiments of the present invention are applicable to similar technical problems.
It will be appreciated by persons skilled in the art that the embodiments of the invention are not limited by the illustrations, and that more or fewer steps than those shown may be included, or certain steps may be combined, or different steps may be included.
The above described apparatus embodiments are merely illustrative, wherein the units illustrated as separate components may or may not be physically separate, i.e. may be located in one place, or may be distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
Those of ordinary skill in the art will appreciate that all or some of the steps of the methods, systems, functional modules/units in the devices disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof.
The terms "first," "second," "third," "fourth," and the like in the description of the application and in the above figures, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order.
It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be understood that in the present application, "at least one (item)" means one or more, and "a plurality" means two or more. "and/or" for describing the association relationship of the association object, the representation may have three relationships, for example, "a and/or B" may represent: only a, only B and both a and B are present, wherein a, B may be singular or plural. The character "/" generally indicates that the context-dependent object is an "or" relationship. "at least one of" or the like means any combination of these items, including any combination of single item(s) or plural items(s). For example, at least one (one) of a, b or c may represent: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", wherein a, b, c may be single or plural.
In the several embodiments provided by the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of elements is merely a logical functional division, and there may be additional divisions of actual implementation, e.g., multiple elements or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be embodied in essence or a part contributing to the prior art or all or part of the technical solution in the form of a software product stored in a storage medium, including multiple instructions for causing an electronic device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the methods of the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a read-only memory (ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, an optical disk, or other various media capable of storing a program.
While the preferred embodiments of the present application have been described in detail, the embodiments of the present application are not limited to the above-described embodiments, and those skilled in the art can make various equivalent modifications or substitutions without departing from the spirit of the embodiments of the present application, and these equivalent modifications or substitutions are included in the scope of the embodiments of the present application as defined in the appended claims.

Claims (10)

1. The method for computing and unloading and resource allocation based on the Internet of vehicles is characterized by comprising the following steps of:
If a plurality of vehicles in the vehicle-mounted edge computing system request task unloading, setting an unloading strategy of the vehicle and an iteration ending condition of a resource allocation strategy of an edge server; the iteration end condition at least includes: the difference between task completion delays of the unloading strategy and the resource allocation strategy of the front and back iteration is smaller than a preset value; the in-vehicle edge computing system includes at least: cloud server, edge server, and vehicle in coverage area of the edge server;
Executing an iteration process until an unloading strategy of the current round of iteration and a resource allocation strategy of the current round of iteration which meet the iteration ending condition are used as a final unloading strategy and a final resource allocation strategy; wherein each round of iterative process comprises:
Obtaining an unloading strategy of the current round of iteration according to a preset unloading method, wherein the unloading strategy comprises determining an unloading object corresponding to each vehicle in the plurality of vehicles; the preset unloading method is to calculate an unloading strategy by adopting cooperative game theory with the goal of minimizing task completion delay, according to the resource allocation strategy of the previous iteration;
Obtaining a resource allocation strategy of current round iteration according to a preset resource allocation method, wherein the resource allocation strategy comprises determining computing resources corresponding to the allocation of the edge server to the required vehicle; the preset resource allocation method adopts DDPG to minimize task completion delay as target computing resource allocation according to the unloading strategy of the current round of iteration;
If the task completion delay of the unloading strategy of the current round of iteration and the resource allocation strategy of the current round of iteration meets the iteration ending condition, ending the iteration; and if the iteration ending condition is not met, starting the next iteration.
2. The method for computing and offloading and resource allocation of internet of vehicles according to claim 1, wherein the obtaining an offloading policy of a current round of iteration according to a preset offloading method comprises:
Determining available unloading objects of each vehicle according to the resource allocation strategy of the previous iteration and the system state of the vehicle-mounted edge computing system;
Recursively generating unloading object sets of the vehicles from available unloading objects of each vehicle, and calculating the unloading object set with the biggest profit as an unloading strategy of the current round iteration; the benefit is related to a task completion delay generated by the plurality of vehicles selecting corresponding offload objects, respectively.
3. The internet of vehicles-based computing offload and resource allocation method of claim 2, wherein prior to recursively generating the set of offload objects for the vehicle, the internet of vehicles-based computing offload and resource allocation method further comprises:
Calculating a first benefit obtained when a first unloading object is selected by a first vehicle; the first vehicle is any one of the plurality of vehicles, the first unloading object is any one of all available unloading objects of the first vehicle, and the first benefit is a benefit of the plurality of vehicles when the first vehicle selects the first unloading object;
if the first benefit is less than the second benefit, rejecting a policy that the first vehicle selects the first offload object; the second benefit is a benefit of the plurality of vehicles when the first vehicle selects any one of the available offload objects other than the first offload object.
4. The internet of vehicles-based computing offload and resource allocation method of claim 2, wherein the benefit is represented by the following function:
Wherein represents the task completion delay of vehicle/> initiating a task at time/> .
5. The internet of vehicles-based computing offload and resource allocation method of claim 1, wherein the state space in DDPG comprises:
Where denotes the size of the task,/> denotes the computational complexity of the task, and/> denotes the computational resources remaining by the edge server.
6. The internet of vehicles based computing offload and resource allocation method of claim 5, wherein the bonus space in DDPG comprises:
Where denotes the task completion delay under average allocation,/> denotes the task completion delay for computing tasks according to DDPG,/> denotes the computing resources allocated to tasks by the edge server in state/> , and/> denotes the computing resources bisected by the edge server to tasks.
7. The internet of vehicles-based computing offload and resource allocation method of claim 1, wherein the iteration end condition further comprises a maximum number of iterations;
and if the task completion delay of the unloading strategy of the current round of iteration and the resource allocation strategy of the current round of iteration meets the iteration ending condition, ending the iteration comprises:
If the difference between the task completion delay of the unloading strategy and the resource allocation strategy of the current iteration and the task completion delay of the unloading strategy and the resource allocation strategy of the previous iteration is smaller than a preset value, or if the current iteration number reaches the maximum iteration number, the iteration is ended.
8. A computing offload and resource allocation system based on the internet of vehicles, the computing offload and resource allocation system based on the internet of vehicles comprising:
A request acquisition unit, configured to set an unloading policy of a vehicle and an iteration end condition of a resource allocation policy of an edge server if a plurality of vehicles in a vehicle-mounted edge computing system request task unloading; the iteration end condition at least includes: the difference between task completion delays of the unloading strategy and the resource allocation strategy of the front and back iteration is smaller than a preset value; the in-vehicle edge computing system includes at least: cloud server, edge server, and vehicle in coverage area of the edge server;
an iteration calculation unit, configured to execute the iteration process until the unloading strategy and the resource allocation strategy of the current round of iteration satisfy the iteration end condition and are taken as the final unloading strategy and the final resource allocation strategy; wherein each round of the iteration process comprises:
obtaining the unloading strategy of the current round of iteration according to a preset unloading method, including determining the unloading object corresponding to each vehicle of the plurality of vehicles; the preset unloading method calculates the unloading strategy using cooperative game theory, with the goal of minimizing task completion delay, according to the resource allocation strategy of the previous iteration;
obtaining the resource allocation strategy of the current round of iteration according to a preset resource allocation method, including determining the computing resources that the edge server allocates to the vehicles that require them; the preset resource allocation method uses DDPG to allocate computing resources, with the goal of minimizing task completion delay, according to the unloading strategy of the current round of iteration;
ending the iteration if the task completion delay of the unloading strategy and the resource allocation strategy of the current round of iteration meets the iteration end condition; and starting the next round of iteration if the iteration end condition is not met.
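The alternating loop of claim 8, a game-theoretic offloading step followed by a DDPG allocation step, repeated until the delay stops improving, can be outlined as below. Here `offload_step`, `allocate_step`, and `delay_of` are hypothetical placeholders standing in for the actual cooperative-game and DDPG solvers.

```python
def joint_optimize(init_allocation, offload_step, allocate_step, delay_of,
                   preset_eps=1e-3, max_iters=50):
    # Alternate between the offloading step (given the last allocation) and the
    # resource-allocation step (given the new offloading) until the task
    # completion delay stops improving or the iteration cap is hit.
    allocation = init_allocation
    prev_delay = None
    offloading = None
    for _ in range(max_iters):
        offloading = offload_step(allocation)   # minimize delay given last allocation
        allocation = allocate_step(offloading)  # minimize delay given new offloading
        delay = delay_of(offloading, allocation)
        if prev_delay is not None and abs(prev_delay - delay) < preset_eps:
            break
        prev_delay = delay
    return offloading, allocation
```

The fixed-point structure is what matters: each step re-optimizes one decision variable while holding the other fixed, so the measured delay is non-increasing in well-behaved instances and the loop terminates under the claim 7 condition.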
9. An electronic device, comprising: at least one control processor and a memory communicatively connected to the at least one control processor; the memory stores instructions executable by the at least one control processor to enable the at least one control processor to perform the internet of vehicles-based computing offload and resource allocation method of any one of claims 1-7.
10. A computer-readable storage medium storing computer-executable instructions for causing a computer to perform the internet of vehicles-based computing offload and resource allocation method of any one of claims 1 to 7.
CN202410229713.9A 2024-02-29 Computing unloading and resource allocation method and system based on Internet of vehicles Active CN117891613B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410229713.9A CN117891613B (en) 2024-02-29 Computing unloading and resource allocation method and system based on Internet of vehicles


Publications (2)

Publication Number Publication Date
CN117891613A true CN117891613A (en) 2024-04-16
CN117891613B CN117891613B (en) 2024-05-31


Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110941667A (en) * 2019-11-07 2020-03-31 北京科技大学 Method and system for calculating and unloading in mobile edge calculation network
CN112685163A (en) * 2021-01-06 2021-04-20 北京信息科技大学 Computing unloading method based on mobile edge computing and mobile edge computing server
CN113543074A (en) * 2021-06-15 2021-10-22 南京航空航天大学 Joint computing migration and resource allocation method based on vehicle-road cloud cooperation
CN113821270A (en) * 2021-07-29 2021-12-21 长沙理工大学 Task unloading sequence prediction method, decision-making method, electronic device and storage medium
CN114745389A (en) * 2022-05-19 2022-07-12 电子科技大学 Computing offloading method for mobile edge computing system
CN115696452A (en) * 2022-10-21 2023-02-03 云南大学 Game method for joint optimization of unloading decision and resource allocation in cloud-edge cooperative computing
WO2023040022A1 (en) * 2021-09-17 2023-03-23 重庆邮电大学 Computing and network collaboration-based distributed computation offloading method in random network
WO2023160012A1 (en) * 2022-02-25 2023-08-31 南京信息工程大学 Unmanned aerial vehicle assisted edge computing method for random inspection of power grid line


Non-Patent Citations (2)

Title
YUJIONG LIU et al.: "A Computation Offloading Algorithm Based on Game Theory for Vehicular Edge Networks", 2018 IEEE International Conference on Communications (ICC), 30 July 2018, pages 1-6 *
CAO Dun et al.: "V2X multi-node cooperative distributed offloading strategy", Journal on Communications, 28 February 2022, pages 185-194 *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant