CN117891613B - Computing unloading and resource allocation method and system based on Internet of vehicles - Google Patents


Publication number
CN117891613B
CN117891613B (Application CN202410229713.9A)
Authority
CN
China
Prior art keywords
unloading
iteration
resource allocation
strategy
vehicle
Prior art date
Legal status
Active
Application number
CN202410229713.9A
Other languages
Chinese (zh)
Other versions
CN117891613A (en)
Inventor
曹敦
王毓斌
杨逸帆
易国栋
Current Assignee
Xiangjiang Laboratory
Original Assignee
Xiangjiang Laboratory
Priority date
Filing date
Publication date
Application filed by Xiangjiang Laboratory filed Critical Xiangjiang Laboratory
Priority to CN202410229713.9A
Publication of CN117891613A
Application granted
Publication of CN117891613B

Landscapes

  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The application relates to a computation offloading and resource allocation method and system based on the Internet of vehicles. The method jointly optimizes the offloading strategy of the vehicles and the resource allocation strategy of the edge server: in each round, the offloading strategy is computed under the guidance of cooperative game theory with the goal of minimizing the completion delay of the heterogeneous tasks, and the resource allocation strategy is computed by the Deep Deterministic Policy Gradient (DDPG) method with the same goal. When the difference between the task completion delays of two consecutive iterations falls below a preset value, the resulting final offloading and resource allocation strategies minimize, to the greatest extent, the delay of completing the heterogeneous tasks generated simultaneously by multiple vehicles, thereby improving the overall performance of the vehicular edge computing system.

Description

Computing unloading and resource allocation method and system based on Internet of vehicles
Technical Field
The embodiment of the application relates to the technical field of Internet of vehicles, in particular to a method and a system for computing unloading and resource allocation based on Internet of vehicles.
Background
Advances in Artificial Intelligence (AI) and Internet of Vehicles (IoV) technology have played a vital role in the rapid development of vehicle applications in areas such as Augmented Reality (AR), Virtual Reality (VR), autonomous driving, multimedia, and video analytics, which can improve the driving experience of drivers and passengers. Correspondingly, however, these new applications are increasingly demanding in terms of low-latency communication, computation, and caching resources. Although a cloud computing network architecture can provide computing and caching resources for vehicles over a centralized network, it cannot meet the low-latency requirements of vehicular network applications because of its long communication links. Vehicular Edge Computing (VEC) is a form of Mobile Edge Computing (MEC); as a distributed network architecture, it distributes the content and functions of the centralized network to the edge side, shortening the transmission path between vehicle and server and thus reducing the processing delay perceived by the user. VEC can therefore serve as an efficient framework for handling vehicle requests in a vehicular network.
Computation offloading means that a vehicle terminal generates a task, offloads the task to another device for processing, and then receives the computation result in return. Computing resource allocation arises because multiple tasks may be offloaded to the same server, which must then process them in parallel; this mainly involves allocating CPU frequency among the tasks. When only a single vehicle initiates a task, an offloading scheme is relatively easy to compute, but when multiple vehicles initiate heterogeneous requests, how to design good computation offloading and computing resource allocation strategies that optimize the overall delay of the system is an urgent technical problem.
Disclosure of Invention
The main object of the embodiments of the invention is to provide a computation offloading and resource allocation method and system based on the Internet of vehicles, which can minimize the delay of completing heterogeneous tasks generated simultaneously by multiple vehicles and improve the overall performance of the vehicular edge computing system.
To achieve the above object, a first aspect of the embodiments of the present invention provides an Internet-of-vehicles-based computation offloading and resource allocation method, which includes:
if multiple vehicles in the vehicular edge computing system request task offloading, setting an iteration-ending condition for the offloading strategy of the vehicles and the resource allocation strategy of the edge server; the iteration-ending condition at least includes: the difference between the task completion delays of the offloading and resource allocation strategies of two consecutive iterations is smaller than a preset value; the vehicular edge computing system at least includes a cloud server, an edge server, and the vehicles within the coverage of the edge server;
executing the iterative process until the offloading strategy and resource allocation strategy of the current round satisfy the iteration-ending condition, and taking them as the final offloading strategy and final resource allocation strategy; each round of the iterative process comprises:
obtaining the offloading strategy of the current round according to a preset offloading method, which includes determining the offloading object corresponding to each of the multiple vehicles; the preset offloading method computes the offloading strategy using cooperative game theory, taking minimization of the task completion delay as the goal, based on the resource allocation strategy of the previous round;
obtaining the resource allocation strategy of the current round according to a preset resource allocation method, which includes determining the computing resources the edge server allocates to each requesting vehicle; the preset resource allocation method computes the resource allocation using DDPG, taking minimization of the task completion delay as the goal, based on the offloading strategy of the current round;
if the task completion delay of the current round's offloading and resource allocation strategies satisfies the iteration-ending condition, ending the iteration; otherwise, starting the next round.
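The alternating loop described in the steps above can be sketched as follows. Here `compute_offloading` (the game-theoretic step) and `compute_allocation` (the DDPG step) are hypothetical callables standing in for the two sub-methods, and `eps` corresponds to the preset value; this is an illustrative skeleton, not the patent's pseudo code:

```python
def alternate_optimize(compute_offloading, compute_allocation, init_allocation,
                       eps=1e-3, max_iters=50):
    """Alternately recompute the offloading and allocation strategies until the
    task-completion delay changes by less than eps between consecutive rounds."""
    allocation = init_allocation
    prev_delay = float("inf")
    for _ in range(max_iters):
        offloading, _ = compute_offloading(allocation)      # cooperative-game step
        allocation, delay = compute_allocation(offloading)  # DDPG step
        if abs(prev_delay - delay) < eps:                   # convergence check
            break
        prev_delay = delay
    return offloading, allocation, delay
```

With constant toy sub-methods the loop converges after two rounds, since the delay no longer changes.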
According to the method, the offloading strategy of each round is computed under the guidance of cooperative game theory with the goal of minimizing the completion delay of the heterogeneous tasks, and the resource allocation strategy of each round is computed by the Deep Deterministic Policy Gradient (DDPG) method with the same goal. When the difference between the task completion delays of two consecutive iterations is smaller than a preset value, the resulting final offloading and resource allocation strategies minimize the delay of completing the heterogeneous tasks generated simultaneously by multiple vehicles, thereby improving the overall performance of the vehicular edge computing system.
In some embodiments, obtaining the offloading strategy of the current round according to the preset offloading method includes:
determining the available offloading objects of each vehicle according to the resource allocation strategy of the previous round and the system state of the vehicular edge computing system;
recursively generating sets of offloading objects for the vehicles from each vehicle's available offloading objects, and taking the set with the greatest benefit as the offloading strategy of the current round; the benefit is related to the task completion delay produced when the vehicles select their respective offloading objects.
In some embodiments, before recursively generating the sets of offloading objects for the vehicles, the method further comprises:
computing a first benefit obtained when a first vehicle selects a first offloading object; the first vehicle is any one of the multiple vehicles, the first offloading object is any one of the first vehicle's available offloading objects, and the first benefit is the benefit of the multiple vehicles when the first vehicle selects the first offloading object;
if the first benefit is less than a second benefit, rejecting the strategy in which the first vehicle selects the first offloading object; the second benefit is the benefit of the multiple vehicles when the first vehicle selects any available offloading object other than the first offloading object.
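The pruning rule above — rejecting any offloading object whose collective benefit is beaten by an alternative for the same vehicle — can be sketched as follows; `benefit_of` is a hypothetical callback returning the collective benefit of a candidate choice:

```python
def keep_best_objects(objects, benefit_of):
    """Reject every offloading object whose collective benefit is exceeded by
    some other available object; only benefit-maximizing choices survive."""
    best = max(benefit_of(o) for o in objects)
    return [o for o in objects if benefit_of(o) >= best]
```

Pruning before the recursive enumeration shrinks the strategy space each vehicle contributes to the joint search.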
In some embodiments, the benefit is represented by the following function:
$U(t) = -\sum_{n} T_n(t)$,
where $T_n(t)$ denotes the completion delay of the task that vehicle $n$ initiates at time $t$.
In some embodiments, the state space in DDPG includes:
$s = (d, c, F_{\mathrm{remain}})$,
where $d$ denotes the size of the task, $c$ denotes the computational complexity of the task, and $F_{\mathrm{remain}}$ denotes the computing resources remaining on the edge server.
In some embodiments, the reward in DDPG includes:
$r = T_{\mathrm{avg}} - T_{\mathrm{DDPG}}$,
where $T_{\mathrm{avg}}$ denotes the task completion delay under an even allocation, $T_{\mathrm{DDPG}}$ denotes the task completion delay when computing resources are allocated according to DDPG, $f_s$ denotes the computing resources the edge server allocates to the task in state $s$, and $f_{\mathrm{avg}}$ denotes the computing resources the edge server would allocate to the task under an even split.
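One plausible reading of this reward is the delay saved relative to an even split of the server's remaining resources. The sketch below assumes units of bits (task size), cycles per bit (complexity), and cycles per second (CPU frequency); all function and parameter names are illustrative:

```python
def completion_delay(task_size, complexity, cpu_freq):
    """Delay (s) to process `task_size` bits at `complexity` cycles/bit
    on `cpu_freq` cycles/s of allocated compute."""
    return task_size * complexity / cpu_freq

def reward(task_size, complexity, allocated, remaining, n_tasks):
    """Reward = delay under an equal division minus delay under the DDPG
    allocation, so the agent is rewarded for beating the naive even split."""
    equal_share = remaining / n_tasks
    t_avg = completion_delay(task_size, complexity, equal_share)
    t_ddpg = completion_delay(task_size, complexity, allocated)
    return t_avg - t_ddpg
```

A positive reward means the learned allocation finishes the task faster than an even split would.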
In some embodiments, the iteration-ending condition further comprises a maximum number of iterations;
and ending the iteration if the task completion delay of the current round's offloading and resource allocation strategies satisfies the iteration-ending condition comprises:
ending the iteration if the difference between the task completion delay of the current round's offloading and resource allocation strategies and that of the previous round's strategies is smaller than the preset value, or if the current iteration count reaches the maximum number of iterations.
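A minimal sketch of this combined stopping rule (threshold and parameter names are illustrative):

```python
def should_stop(delay_prev, delay_curr, iteration, eps=1e-3, max_iters=100):
    """Stop when the delay change between consecutive rounds falls below eps,
    or when the iteration budget is exhausted."""
    return abs(delay_prev - delay_curr) < eps or iteration >= max_iters
```

The iteration cap guarantees termination even if the delays oscillate and never converge within `eps`.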
To achieve the above object, a second aspect of the embodiments of the present invention provides an Internet-of-vehicles-based computation offloading and resource allocation system, including:
a request acquisition unit, configured to set an iteration-ending condition for the offloading strategy of the vehicles and the resource allocation strategy of the edge server if multiple vehicles in the vehicular edge computing system request task offloading; the iteration-ending condition at least includes: the difference between the task completion delays of the offloading and resource allocation strategies of two consecutive iterations is smaller than a preset value; the vehicular edge computing system at least includes a cloud server, an edge server, and the vehicles within the coverage of the edge server;
an iterative computation unit, configured to execute the iterative process until the offloading strategy and resource allocation strategy of the current round satisfy the iteration-ending condition, taking them as the final offloading strategy and final resource allocation strategy; each round of the iterative process comprises:
obtaining the offloading strategy of the current round according to a preset offloading method, which includes determining the offloading object corresponding to each of the multiple vehicles; the preset offloading method computes the offloading strategy using cooperative game theory, taking minimization of the task completion delay as the goal, based on the resource allocation strategy of the previous round;
obtaining the resource allocation strategy of the current round according to a preset resource allocation method, which includes determining the computing resources the edge server allocates to each requesting vehicle; the preset resource allocation method computes the resource allocation using DDPG, taking minimization of the task completion delay as the goal, based on the offloading strategy of the current round;
if the task completion delay of the current round's offloading and resource allocation strategies satisfies the iteration-ending condition, ending the iteration; otherwise, starting the next round.
To achieve the above object, a third aspect of the embodiments of the present invention provides an electronic device, including: at least one control processor, and a memory communicatively connected to the at least one control processor; the memory stores instructions executable by the at least one control processor, the instructions being executed by the at least one control processor to enable it to perform the Internet-of-vehicles-based computation offloading and resource allocation method described above.
To achieve the above object, a fourth aspect of the embodiments of the present invention provides a computer-readable storage medium storing computer-executable instructions for causing a computer to perform the above-described method for computing offloading and resource allocation based on internet of vehicles.
It is to be understood that the advantages of the second to fourth aspects over the related art are the same as those of the first aspect over the related art; reference may be made to the related description in the first aspect, which is not repeated here.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the embodiments or the related technical description are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application; a person skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a schematic flow chart of a method for computing offloading and resource allocation based on Internet of vehicles according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a vehicle edge computing system provided in accordance with one embodiment of the present application;
fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the application only and is not intended to be limiting of the application.
Related terms:
(1) Computation offloading: a vehicle terminal generates a task, offloads it to another device for processing, and then receives the computation result in return;
(2) Deep Reinforcement Learning (DRL): an algorithm that makes decisions by training an Actor network; it includes on-line and off-line training modes, and can compute the corresponding action from the current state and then enter the next state;
(3) Cooperative game theory: unlike non-cooperative game theory, it pursues maximization of the collective benefit; here a complete-information cooperative game, i.e. a centralized decision process, is adopted;
(4) Computing resource allocation: since multiple tasks may be offloaded to the same server, the server must process them in parallel, which raises the problem of allocating computing resources (mainly CPU frequency).
Description of the embodiments;
Referring to fig. 1, an embodiment of the present application provides an Internet-of-vehicles-based computation offloading and resource allocation method, which includes:
Step S110, if multiple vehicles in the vehicular edge computing system request task offloading, setting an iteration-ending condition for the offloading strategy of the vehicles and the resource allocation strategy of the edge server; the iteration-ending condition at least includes: the difference between the task completion delays of the offloading and resource allocation strategies of two consecutive iterations is smaller than a preset value; the vehicular edge computing system at least includes a cloud server, an edge server, and the vehicles within the coverage of the edge server.
Step S120, executing the iterative process until the offloading strategy and resource allocation strategy of the current round satisfy the iteration-ending condition, taking them as the final offloading strategy and final resource allocation strategy; each round of the iterative process comprises:
obtaining the offloading strategy of the current round according to a preset offloading method, which includes determining the offloading object corresponding to each of the multiple vehicles; the preset offloading method computes the offloading strategy using cooperative game theory, taking minimization of the task completion delay as the goal, based on the resource allocation strategy of the previous round;
obtaining the resource allocation strategy of the current round according to a preset resource allocation method, which includes determining the computing resources the edge server allocates to each requesting vehicle; the preset resource allocation method computes the resource allocation using DDPG, taking minimization of the task completion delay as the goal, based on the offloading strategy of the current round;
if the task completion delay of the current round's offloading and resource allocation strategies satisfies the iteration-ending condition, the iteration ends; otherwise, the next round begins.
According to the method, if multiple vehicles initiate heterogeneous tasks, the offloading strategy of each round is computed under the guidance of cooperative game theory with the goal of minimizing the completion delay of the heterogeneous tasks, and the resource allocation strategy of each round is computed by the DDPG method with the same goal. When the difference between the task completion delays of two consecutive iterations is smaller than the preset value, the resulting final offloading and resource allocation strategies minimize the delay of completing the heterogeneous tasks generated simultaneously by multiple vehicles, improving the overall performance of the vehicular edge computing system.
For ease of understanding, a detailed embodiment is provided:
Referring to fig. 2, the vehicular edge computing system has a three-tier architecture: a cloud server, an edge server, and vehicles. Suppose $N$ vehicles run under the edge server, forming the set $\mathcal{N} = \{1, 2, \dots, N\}$. Let $e$ denote the edge server, $n \in \mathcal{N}$ a vehicle within the communication coverage of the edge server, and $c$ the cloud server. Assume that at time $t$, vehicle $n$ initiates a computing task request; the vehicle may then choose to compute locally, offload to the edge server, offload to another vehicle, or offload to the cloud server. Before introducing the offloading strategy and resource allocation strategy of this embodiment, examples of the motion model, communication model, and computation model of the vehicular edge computing system are described below:
A motion model;
Owing to the predictability of vehicle motion, the change over time of the distance between vehicle nodes, and between a vehicle node and the edge server node, can be estimated. According to wireless communication theory and the wireless communication model, the uplink and downlink transmission rates between nodes can then be calculated in advance, laying a foundation for the subsequent work.
A driving-trajectory prediction model is built that predicts the position at time $t+1$ from the position information at time $t$. Assuming all vehicles drive at uniform speed in one direction, one-way multi-lane two-dimensional coordinates are established along the road parallel to the driving direction. Suppose vehicle $n$ initiates a computing task request at time $t$ and has two-dimensional coordinates $(x_n(t), y_n(t))$; its coordinates at time $t+1$ are then:
$x_n(t+1) = x_n(t) + v_n(t)$, $\quad y_n(t+1) = y_n(t) + \eta \cdot w$,
where $v_n(t)$ denotes the speed of vehicle $n$ at time $t$; $\eta \in \{-1, 0, 1\}$ is a random integer, i.e. at most one lane change is allowed per time slot; and $w$ denotes the lane width. The Euclidean distance at time $t$ from the requesting vehicle $n$ to the edge server then follows from the two positions.
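Under the uniform-speed, one-way-road assumptions above, the position update and the vehicle-to-server distance can be sketched as follows (names are illustrative; `lane_shift` plays the role of the random lane-change integer):

```python
import math

def predict_position(x, y, speed, lane_width, lane_shift=0):
    """Predict a vehicle's position one time slot ahead: uniform speed along
    the road (x-axis), with at most one lane change (lane_shift in {-1,0,1})."""
    return x + speed, y + lane_shift * lane_width

def euclidean_distance(p, q):
    """Euclidean distance between two 2-D points, e.g. vehicle and edge server."""
    return math.hypot(p[0] - q[0], p[1] - q[1])
```

In a simulation, `lane_shift` would be drawn at random each slot; the distance feeds directly into the communication model below.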
A communication model;
Since data transmission involves communication between vehicles, between vehicles and edge servers, and between edge servers and vehicles, it is necessary to determine the communication rate of data exchange between nodes.
Considering the difference between V2V (Vehicle-to-Vehicle) and V2I (Vehicle-to-Infrastructure) communication, as well as the difference between uplink and downlink, the vehicle-to-edge-server and vehicle-to-vehicle communication rates are defined as follows:
For the vehicle $n$ initiating the computing task request and a node (vehicle) $m$, the transmission rate (V2V) between them at time $t$ is:
$R_{n,m}(t) = B_{\mathrm{V2V}} \log_2\!\left(1 + \dfrac{p_n \, h \, d_{n,m}^{-\alpha}}{\sigma B_{\mathrm{V2V}}}\right)$.
Since a vehicle can use the edge server as a relay to upload cache resources when no nearby vehicle has resources available, and can offload a computing task to the edge server through V2I communication, the communication rates between vehicle and edge server, and between edge server and vehicle, are expressed as:
$R_{n,e}(t) = B_{\mathrm{V2I}} \log_2\!\left(1 + \dfrac{p_n \, h \, d_{n,e}^{-\alpha}}{\sigma B_{\mathrm{V2I}}}\right)$, $\quad R_{e,n}(t) = B_{\mathrm{V2I}} \log_2\!\left(1 + \dfrac{p_e \, h \, d_{n,e}^{-\alpha}}{\sigma B_{\mathrm{V2I}}}\right)$,
where $B_{\mathrm{V2V}}$ and $B_{\mathrm{V2I}}$ denote the bandwidths of the V2V and V2I channels respectively, $h$ denotes the channel fading factor of the uplink, $\alpha$ denotes the path-loss factor, $d_{n,e}$ denotes the Euclidean distance between the vehicle and the edge server, $\sigma$ denotes the Gaussian noise density of the communication system, $p_n$ denotes the transmission power of the vehicle, and $p_e$ denotes the transmission power of the edge server.
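A sketch of the Shannon-capacity-style rate implied by these symbols, under the assumption that the received power decays with distance raised to the negative path-loss exponent (all parameter names are illustrative):

```python
import math

def transmission_rate(bandwidth_hz, tx_power_w, distance_m, path_loss_exp,
                      channel_gain, noise_density_w_per_hz):
    """Shannon-capacity-style rate (bits/s): B * log2(1 + SNR), where the
    received power is tx_power * gain * d^(-alpha) and the noise power is
    the noise density times the bandwidth."""
    snr = (tx_power_w * channel_gain * distance_m ** (-path_loss_exp)) / (
        noise_density_w_per_hz * bandwidth_hz)
    return bandwidth_hz * math.log2(1.0 + snr)
```

The same function covers V2V and V2I links; only the bandwidth, transmit power, and distance change.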
The edge server and the cloud server communicate through a wired port. The communication rate between the cloud and the edge server is assumed stable, unchanging over time $t$, and is denoted $R_{e,c}$.
Further, the communication radii of the vehicle and the edge server are denoted $r_v$ and $r_e$, respectively.
Computation model (giving the task completion delay when a vehicle initiates each kind of computing task request);
When a vehicle initiates a task request, the computation result is assumed to be small, so the delay of transmitting the result back is ignored. Four cases of computing a vehicle's task on different nodes are considered: local computation, offloading to the edge server, offloading to the cloud server, and offloading to a nearby vehicle.
Let $T_n(t)$ denote the delay of the task request that vehicle $n$ initiates at time $t$.
Case (1): when vehicle $n$ performs local computation, the delay required by the task consists of the computation time of the task and the delay of invoking the vehicle's cache resource:
$T_n^{\mathrm{loc}}(t) = t_n^{\mathrm{cache}} + \dfrac{d_n c_n}{f_n}$,
where $f_n$ denotes the computing power of the node associated with vehicle $n$, $t_n^{\mathrm{cache}}$ denotes the delay of invoking the vehicle cache resource, $d_n$ denotes the size of the task, and $c_n$ denotes the computational complexity of the task. A cache resource is a tool for solving the task computation; it can be placed at the corresponding node in advance to reduce invocation time. If the selected offloading object does not hold the cache resource when the task is initiated, the resource must be fetched, which incurs the cache-invocation delay.
Case (2): when vehicle $n$ offloads its task to the edge server, i.e. selects service node $e$ for computation, the delay consists of the latency of offloading the task input data to the edge server, the latency of invoking the cache resources the task requires, and the computation latency of the edge server. Following the caching model above, the total delay of offloading to the edge server is expressed as:
$T_n^{\mathrm{edge}}(t) = \dfrac{d_n}{R_{n,e}(t)} + (1 - z_n)\, t_e^{\mathrm{cache}} + \dfrac{d_n c_n}{f_{e,n}}$,
where $f_{e,n}$ denotes the computing resources that edge server node $e$ allocates to vehicle $n$, the variable $z_n$ indicates whether the edge server has cached the resources required by the request, and $t_e^{\mathrm{cache}}$ denotes the cache-invocation delay of the edge server for the task vehicle $n$ initiates at time $t$.
Case (3): when vehicle $n$ offloads its task to the cloud server $c$, the delay required includes the time for the edge server to upload the input data to the cloud server and the computation time of the cloud server:
$T_n^{\mathrm{cloud}}(t) = \dfrac{d_n}{R_{n,e}(t)} + \dfrac{d_n}{R_{e,c}} + \dfrac{d_n c_n}{f_c}$,
where $f_c$ denotes the computing power the cloud server allocates to each task.
Case (4): when vehicle $n$ offloads its task to a nearby vehicle $m$. Since the computing power of all vehicles is assumed equal, the offloading decision need not consider differences in computing power between vehicles. Only vehicles within one hop of vehicle $n$'s V2V communication range are considered, and all of them are assumed to hold cache resources (a cache resource is a tool for solving the task computation). The delay of this offloading is:
$T_n^{\mathrm{V2V}}(t) = \dfrac{d_n}{R_{n,m}(t)} + \dfrac{d_n c_n}{f_m}$.
Combining all the cases above, the delay of vehicle $n$ is expressed as:
$T_n(t) = \sum_{k} a_{n,k}(t)\, T_n^{k}(t), \quad k \in \{\mathrm{loc}, \mathrm{edge}, \mathrm{cloud}, \mathrm{V2V}\}$,
where the decision variable $a_{n,k}(t) \in \{0, 1\}$ indicates whether vehicle $n$ offloads its task at time $t$ to computing node $k$; when $a_{n,k}(t) = 1$, vehicle $n$ chooses to offload the task to node $k$.
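The four cases can be combined into a single delay function as sketched below; every numeric default (CPU frequencies in cycles/s, link rates in bits/s, cache delay in seconds) is a made-up illustrative value, not taken from the patent:

```python
def task_delay(size, complexity, target, *, f_local=1e9, f_edge=4e9, f_cloud=8e9,
               r_v2i=1e7, r_e2c=5e7, r_v2v=2e7, t_cache=0.01):
    """Task-completion delay for the four offloading cases (result transmission
    is ignored, as the computation result is assumed small)."""
    compute = lambda f: size * complexity / f   # computation time on frequency f
    upload = lambda r: size / r                 # input-data upload time on rate r
    if target == "local":
        return t_cache + compute(f_local)
    if target == "edge":
        return upload(r_v2i) + t_cache + compute(f_edge)
    if target == "cloud":
        return upload(r_v2i) + upload(r_e2c) + compute(f_cloud)
    if target == "vehicle":
        return upload(r_v2v) + compute(f_local)  # vehicles have equal compute power
    raise ValueError(target)
```

Evaluating this function over the four targets gives one vehicle's row of the game's payoff structure.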
Having introduced the motion, communication, and computation models above, this embodiment computes the final offloading strategy and resource allocation strategy by an iterative method that alternates between them (see the pseudo code of Method 2 below for details); the offloading strategy is computed based on cooperative game theory, and the resource allocation strategy is computed based on DDPG (i.e. reinforcement learning).
When a vehicle is about to initiate a computing task request, it sends the task-related information and its state information to the edge server in a compact header-file format. The edge server collects the task information of multiple vehicles at the current time and designs an offloading scheme for the vehicular edge computing system based on the collected information. The computation offloading problem of the multi-task vehicles can thus be formulated as a complete-information static cooperative game. For the vehicles initiating computing tasks, an offloading strategy is set with the aim of reducing the overall computation delay of the system.
When multiple vehicles in the vehicular edge computing system initiate tasks at the same moment, in order to reduce the overall delay of the multi-task vehicles as much as possible, each vehicle about to initiate a computing task request sends the task-related information and its position information to the edge server; the edge server collects the task information of the multiple vehicles at the current moment and formulates the offloading strategy of the system. The multi-task-vehicle computation offloading problem is modeled as a complete-information static cooperative game as follows:
A vehicle initiating a computing task request: at each time t, define a set V(t) representing the vehicles initiating computing task requests. The common goal of these vehicles is to shorten the task completion delay of all vehicle computing task requests in the entire vehicle edge computing system.
Computing the offloading strategy: the strategy set of available offload objects of a vehicle i ∈ V(t) that initiates a computing task request is denoted S_i. Each vehicle i can select an offloading strategy s_i from its own strategy set S_i, where each strategy s_i represents the selection of one offload object by vehicle i. Thus, the strategy set S_i of vehicle i consists of the nodes j that vehicle i can select for computation. After the game decision system makes the offloading decision, the final strategy s_i* of each vehicle i is determined, yielding the offloading policy A = (s_1*, ..., s_n*).
Benefit function: this embodiment sets a utility function in the cooperative game, aiming to reduce, to the greatest extent, the task completion delay of the vehicle edge computing system for the computing task requests. The utility function is expressed as:

U(A) = −Σ_{i∈V(t)} T_i(t),

wherein U represents the benefit and T_i(t) represents the completion delay of the task initiated by vehicle i at time t. The overall benefit of the vehicle edge computing system is used as the benefit of each individual in the cooperative game: the smaller the overall delay of the vehicle edge computing system, the higher its benefit, and the corresponding individual benefits increase. In the complete-information cooperative game, the vehicles i ∈ V(t) in the vehicle edge computing system negotiate to determine the offload objects that maximize the utility function. This creates a collective, rational environment in which each vehicle i tends to pursue the greatest collective benefit. Furthermore, individually "irrational" behavior may arise: a vehicle user may change its strategy to reduce the overall task delay of the system, even at the cost of increasing its personal computing delay (within an acceptable delay threshold).
When |V(t)| > 1, to refine the strategy sets S_i, the search space is optimized: the range of search parameters is reduced, which shortens the time required to find the optimal solution of the game. A strict elimination of inferior strategies is proposed: for a vehicle i with a given strategy s_i ∈ S_i, if there exists another strategy s_i' ∈ S_i such that U(s_i, s_{−i}) < U(s_i', s_{−i}) for every combination s_{−i} of the other vehicles' strategies, then strategy s_i can be removed from S_i. The following is the pseudo code of the offloading method for computing the offloading policy provided in this embodiment:
Method 1: offloading method under the guidance of cooperative game theory
Input:
the state of the vehicle edge computing system, with the vehicles in V(t) initiating computing task requests;
system cache decisions (set in advance);
resource allocation policy F (see the examples which follow);
Output:
offloading policy A
1: initialize the list V(t) = [v_1, v_2, ..., v_i, ...];
2: for each vehicle i ∈ V(t), limit the range of vehicle communication and, based on the cache limit, select the set of available computing offload objects, i.e. initialize the strategy set S_i = [s_1, s_2, ..., s_j, ...], in which every s_j is available to vehicle i;
3: initialize the utility function U;
4: // strict elimination of inferior strategies:
5: FOR each i in V(t)
6:  FOR each s in strategy set S_i
7:   remove s, generating the residual strategy set S_i';
8:   dominated ← TRUE;
9:   FOR each k in V(t), k ≠ i
10:    FOR each s_k in k's strategy set S_k
11:     recursively generate the strategy combination s_{−i};
12:     IF U(s, s_{−i}) ≥ max over s' ∈ S_i' of U(s', s_{−i})
13:      dominated ← FALSE;
14:      BREAK
15:     ENDIF
16:    ENDFOR
17:   ENDFOR
18:   IF dominated
19:    remove s from S_i
20:   ENDIF
21:  ENDFOR
22: ENDFOR
23: RETURN the simplified offload object sets S_i;
24: // negotiation algorithm:
25: optimal comprehensive utility U* ← −∞
26: best strategy combination A* ← ∅
27: FOR i in V(t) do
28:  FOR s_i in S_i do
29:   recursively select the other vehicles' strategy combination s_{−i};
30:   current total utility U ← U(s_i, s_{−i})
31:   IF U > U* then
32:    U* ← U
33:    A* ← (s_i, s_{−i})
34:   ENDIF
35:  ENDFOR
36: ENDFOR
37: RETURN A updated by A*
The pseudo code is explained as follows:
In the first step (lines 1 to 3), the available offload objects are initialized according to the communication range of the vehicle and the deployed cache resource conditions.
Second step (lines 4 to 23): the strategy sets S_i are simplified using strict elimination of inferior strategies. The core idea is: when a strategy of a task-initiating vehicle never yields a higher benefit than the vehicle's other strategies, no matter what strategies the other task-initiating vehicles adopt, it is an inferior strategy and can be removed.
Third step (lines 25 to 37): the vehicle edge computing system recursively generates the strategy combinations of all vehicles participating in the computing tasks and selects the strategy combination with the greatest benefit.
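The elimination and negotiation steps above can be sketched as follows. The delay table, node indices and function names are illustrative assumptions (the patent's own symbols are in the formula images), and the collective utility is taken as the negative total completion delay, consistent with the benefit function above:

```python
from itertools import product

def combo_with(i, s, rest):
    """Rebuild a full strategy combination by inserting vehicle i's strategy s
    into the combination `rest` of all other vehicles."""
    combo = list(rest)
    combo.insert(i, s)
    return tuple(combo)

def total_utility(combo, delay):
    """Collective utility: negative of the summed task completion delay.
    delay[i][s] is vehicle i's delay when it selects offload object s."""
    return -sum(delay[i][s] for i, s in enumerate(combo))

def eliminate_inferior(strategy_sets, delay):
    """Strict elimination of inferior strategies (lines 4-23 of Method 1):
    drop strategy s of vehicle i if some other strategy of vehicle i beats it
    for every combination of the remaining vehicles' strategies."""
    sets = [list(s) for s in strategy_sets]
    changed = True
    while changed:
        changed = False
        for i in range(len(sets)):
            others = [sets[j] for j in range(len(sets)) if j != i]
            for s in list(sets[i]):
                if any(
                    all(
                        total_utility(combo_with(i, s2, rest), delay) >
                        total_utility(combo_with(i, s, rest), delay)
                        for rest in product(*others)
                    )
                    for s2 in sets[i] if s2 != s
                ):
                    sets[i].remove(s)
                    changed = True
    return sets

def negotiate(strategy_sets, delay):
    """Negotiation (lines 24-37 of Method 1): exhaustively pick the strategy
    combination with the highest collective utility."""
    return max(product(*strategy_sets), key=lambda c: total_utility(c, delay))
```

For two vehicles and two nodes with delays `[[5, 2], [3, 4]]`, vehicle 0's node 0 and vehicle 1's node 1 are strictly inferior and are removed before the exhaustive search.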
Given the offloading policy A determined in the cooperative game described above, multiple computing task requests may be offloaded to the edge server. The edge server can allocate computing resources to different tasks; since its computing resources are divisible, this can be seen as a continuous action space. The computing resource allocation optimization problem is translated into a Markov Decision Process (MDP), and DDPG is employed to formulate the computing resource allocation policy. This embodiment sets the state space, action space and reward as follows:
State space: when an edge server receives multiple tasks simultaneously, to model them as MDPs, the tasks are offloaded to the vehicle edge computing system And build them into a task queue with a time order. Wherein, task queue/>With its original eigenvalues/>. This transformation allows formulating a state sequence/>, within the MDP frameworkAnd makes decisions according to the task characteristics of each step in the sequence, and in addition, the computing resource size of the edge server also affects the allocation policy. Determining input stateIn/>And according to the determined task queue with the time sequence and the residual computing resources of the edge server.
Action space: when a task is offloaded to the edge server, the edge server can allocate to it at most the remaining capacity of its computing resources. To output the corresponding computing resource size, the Actor network formulates an allocation action a_t ∈ [0, 1] over the remaining computing resources, where a_t represents the allocation proportion of the server's remaining resources, so that a_t · F_t expresses the size of the computing resources allocated to the task in state s_t. The state transition then updates the remaining resources as F_{t+1} = F_t − a_t · F_t.
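A small sketch of this action mapping (the function name and the resource units are assumptions): the Actor output is clipped to the valid range and scaled by the remaining capacity, so a task can never be granted more than what the server has left.

```python
def allocate(action, remaining):
    """Map an Actor output in [0, 1] to a concrete computing-resource share,
    never exceeding the edge server's remaining capacity. Returns the granted
    share and the resources left for subsequent tasks in the queue."""
    proportion = min(max(action, 0.0), 1.0)  # clip to the valid action range
    granted = proportion * remaining
    return granted, remaining - granted

granted, left = allocate(0.25, 8.0)  # grant 25% of 8 units -> 2.0, with 6.0 left
```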
Reward: for each task, once the assigned computing resources are determined, the computation delay of that task can be computed. Since the primary goal is to minimize the overall system delay, the task completion delay can be treated as an immediate reward. To speed up convergence, the instant reward is set to:

r_t = (T_avg − T) / (f − f_avg),

wherein the reward is the ratio between the delay difference and the allocation difference: T_avg is the task completion delay under average allocation, T is the task completion delay when computing resources are allocated according to DDPG, f is the computing resources allocated to the task in state s_t, and f_avg is the computing resources the edge server would allocate to the task under an equal split. When the benefit generated by optimizing the delay is greater than that of the average allocation, the reward function increases, indicating that action a_t performs better in state s_t.
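The reward described above can be sketched as follows; the exact form is an assumed reconstruction (the original formula image is unavailable), and a small epsilon is added to guard against a zero denominator when the allocation equals the equal share.

```python
def instant_reward(t_avg, t_ddpg, f_alloc, f_avg, eps=1e-9):
    """Instant reward: delay saved relative to the equal-split allocation,
    divided by how far the allocation deviates from the equal share.
    t_avg / t_ddpg are completion delays, f_alloc / f_avg are resource shares."""
    return (t_avg - t_ddpg) / (abs(f_alloc - f_avg) + eps)
```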
DDPG is an existing method; the implementation of this embodiment includes designing the Reset, Step and other functions of the DDPG network corresponding to the state space, action space and reward designed above. The specific workflow of DDPG is as follows:
(1) Sampling: a state S is taken and input into the current Actor network, which selects an action A according to S. The action A is input into the environment, which outputs the corresponding reward R and the next state S'. The five-tuple transition (S, A, R, S', done) is put into the experience replay pool. Each time a transition is stored, the amount of data in the pool is checked: if the pool is full (or reaches a preset threshold), the training process is executed; otherwise, sampling continues. The next state S' is also input into the target Actor network to select the action A' corresponding to S'; A' is reserved for the target Critic network to compute the Q' value during training;
(2) Training process: N transitions are taken out of the pool and unpacked, and the grouped data are passed to the current Critic network and the target Critic network respectively;
(3) Parameter update: the network parameters are updated every round in a soft-update manner.
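Steps (1) and (3) can be sketched as below; the class and function names are assumptions, and the Actor/Critic networks themselves are omitted, leaving only the experience pool and the soft-update rule target ← τ·current + (1 − τ)·target.

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-size experience pool for (S, A, R, S', done) transitions."""
    def __init__(self, capacity):
        self.pool = deque(maxlen=capacity)  # old transitions are evicted first

    def push(self, transition):
        self.pool.append(transition)

    def ready(self, batch_size):
        """Training only starts once enough transitions are stored."""
        return len(self.pool) >= batch_size

    def sample(self, batch_size):
        return random.sample(list(self.pool), batch_size)

def soft_update(target_params, current_params, tau=0.005):
    """Soft update: target <- tau * current + (1 - tau) * target, per parameter."""
    return [tau * c + (1.0 - tau) * t
            for c, t in zip(current_params, target_params)]
```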
Using the offloading method and the resource allocation method above, a double-layer optimization iteration method is provided to obtain enhanced computation offloading and resource allocation policies. The offloading policy and the resource allocation policy are improved in an alternating manner until convergence is reached. At initialization, the average allocation policy may be used as the initial resource allocation policy, from which an initial offloading policy is derived.
Method 2: iterative optimization method for unloading strategy and resource allocation strategy
Input:
initial computing resource allocation policy F_0;
initial offloading policy A_0;
initial overall task completion delay T_0 ← +∞;
maximum number of iterations K;
iteration threshold ε;
Output:
optimized A, F;
task completion delay T when the vehicles initiate computing task requests
1: FOR k ← 1 to K do
2: // update the offloading policy;
3: obtain the offloading policy A_k using the "offloading method" described above (Method 1);
4: update the offloading policy (i.e. assign A_k to A);
5: // update the resource allocation policy;
6: obtain the computing resource allocation policy F_k using the "resource allocation method" described above;
7: update the resource allocation policy of the last iteration, F ← F_k;
8: // check termination conditions;
9: compute the overall completion delay T_k according to A_k and F_k;
10: IF |T_{k−1} − T_k| < ε or k = K
11:  BREAK
12: ENDIF
13: update T_{k−1} ← T_k
14: RETURN final A, final F, final T
The explanation of the pseudocode is as follows:
At the initial stage, the network for computing resource allocation is initialized, and a computation offloading decision is made on that basis; the completion delay is obtained through this offloading policy. Then, with the computation offloading policy fixed, the network for the computing resource allocation policy is updated to output a new completion delay. The iteration finishes when the difference between the two delays is smaller than the threshold, at which point convergence can be judged.
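The alternating loop above can be sketched as follows; the three callables are placeholders standing in for the cooperative game (Method 1), the DDPG allocation, and the delay evaluation, and the function name and default parameters are assumptions.

```python
def double_layer_optimize(offload_step, allocate_step, evaluate_delay,
                          max_iters=50, eps=1e-3):
    """Alternate between the game-theoretic offloading update and the DDPG
    resource allocation update until the overall completion delay changes by
    less than eps between consecutive rounds (Method 2)."""
    allocation = "equal-share"  # the average allocation serves as the start
    offloading, prev_delay = None, float("inf")
    for _ in range(max_iters):
        offloading = offload_step(allocation)   # Method 1 (cooperative game)
        allocation = allocate_step(offloading)  # DDPG-based allocation
        delay = evaluate_delay(offloading, allocation)
        converged = abs(prev_delay - delay) < eps
        prev_delay = delay
        if converged:
            break
    return offloading, allocation, prev_delay
```

With stubbed components whose delays converge (e.g. 10.0, 9.0, 9.0005), the loop stops as soon as two consecutive delays differ by less than eps.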
With the method, the offloading policy of the vehicles and the resource allocation policy of the edge server are optimized simultaneously. Each round, the offloading policy is computed under the guidance of cooperative game theory with the goal of minimizing the completion delay of the heterogeneous tasks, and the resource allocation policy is computed by the DDPG method with the same goal. When the difference between the task completion delays of the offloading and resource allocation policies of two consecutive iterations is smaller than a preset value, the final offloading policy and final resource allocation policy obtained reduce, to the greatest extent, the delay of completing the heterogeneous tasks generated by multiple vehicles simultaneously, thereby improving the overall performance of the vehicle edge computing system.
An embodiment of the present application provides a computing unloading and resource allocation system based on the internet of vehicles, including:
A request acquisition unit, configured to set an unloading policy of a vehicle and an iteration end condition of a resource allocation policy of an edge server if a plurality of vehicles in a vehicle-mounted edge computing system request task unloading; the iteration end condition includes at least: the difference between task completion delays of the unloading strategy and the resource allocation strategy of the front and back iteration is smaller than a preset value; the in-vehicle edge computing system includes at least: cloud server, edge server, and vehicle in coverage area of edge server;
An iteration calculation unit, configured to execute the iteration process until the offloading policy and the resource allocation policy of the current round of iteration that meet the iteration end condition are taken as the final offloading policy and the final resource allocation policy; wherein each round of the iteration process comprises:
according to a preset unloading method, an unloading strategy of the current wheel iteration is obtained, wherein the unloading strategy comprises the steps of determining an unloading object corresponding to each vehicle in a plurality of vehicles; the preset unloading method is to calculate an unloading strategy by adopting a cooperative game theory to minimize task completion delay as a target according to the resource allocation strategy of the previous iteration;
Obtaining a resource allocation strategy of the current round of iteration according to a preset resource allocation method, wherein the resource allocation strategy comprises determining computing resources corresponding to the allocation of the edge server to the required vehicle; the preset resource allocation method is to adopt DDPG to minimize task completion delay as target computing resource allocation according to the unloading strategy of the current round of iteration;
If the task completion delay of the offloading policy and the resource allocation policy of the current round of iteration meets the iteration end condition, the iteration ends; if the iteration end condition is not met, the next round of iteration is started.
It should be noted that, the computing and unloading and resource allocation system based on the internet of vehicles provided in this embodiment and the computing and unloading and resource allocation method based on the internet of vehicles described above are based on the same inventive concept, so that the relevant content of the computing and unloading and resource allocation method based on the internet of vehicles described above is also applicable to the content of the computing and unloading and resource allocation system based on the internet of vehicles, and therefore, will not be described herein.
As shown in fig. 3, the embodiment of the present application further provides an electronic device, where the electronic device includes:
At least one memory;
At least one processor;
At least one program;
The programs are stored in the memory, and the processor executes at least one program to implement the internet-of-vehicles-based computing offloading and resource allocation method disclosed above. With the method, the offloading policy of the vehicles and the resource allocation policy of the edge server are optimized simultaneously: each round, the offloading policy is computed under the guidance of cooperative game theory and the resource allocation policy is computed by the DDPG method, both with the goal of minimizing the completion delay of the heterogeneous tasks. When the difference between the task completion delays of two consecutive iterations is smaller than a preset value, the final offloading policy and final resource allocation policy obtained reduce, to the greatest extent, the delay of completing the heterogeneous tasks generated by multiple vehicles simultaneously, improving the overall performance of the vehicle edge computing system.
The electronic device may be any intelligent terminal, including a mobile phone, a tablet computer, a personal digital assistant (Personal Digital Assistant, PDA), a vehicle-mounted computer, and the like.
The electronic device according to the embodiment of the application is described in detail below.
The processor 1600 may be implemented by a general-purpose central processing unit (Central Processing Unit, CPU), a microprocessor, an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), or one or more integrated circuits, etc., for executing related programs to implement the technical solutions provided by the embodiments of the present invention;
The memory 1700 may be implemented in the form of read-only memory (Read Only Memory, ROM), static storage, dynamic storage, or random access memory (Random Access Memory, RAM). The memory 1700 may store an operating system and other application programs; when the technical solutions provided by the embodiments of the present specification are implemented by software or firmware, the relevant program code is stored in the memory 1700 and invoked by the processor 1600 to perform the internet-of-vehicles-based computing offloading and resource allocation method of the embodiments of the present invention.
An input/output interface 1800 for implementing information input and output;
The communication interface 1900 is used for realizing communication interaction between the device and other devices, and can realize communication in a wired manner (such as USB, network cable, etc.), or can realize communication in a wireless manner (such as mobile network, WIFI, bluetooth, etc.);
bus 2000, which transfers information between the various components of the device (e.g., processor 1600, memory 1700, input/output interface 1800, and communication interface 1900);
Wherein processor 1600, memory 1700, input/output interface 1800, and communication interface 1900 enable communication connections within the device between each other via bus 2000.
The embodiment of the invention also provides a storage medium, which is a computer-readable storage medium storing computer-executable instructions for causing a computer to execute the internet-of-vehicles-based computing offloading and resource allocation method described above. With the method, the offloading policy of the vehicles and the resource allocation policy of the edge server are optimized simultaneously: each round, the offloading policy is computed under the guidance of cooperative game theory and the resource allocation policy is computed by the DDPG method, both with the goal of minimizing the completion delay of the heterogeneous tasks. When the difference between the task completion delays of two consecutive iterations is smaller than a preset value, the final offloading policy and final resource allocation policy obtained reduce, to the greatest extent, the delay of completing the heterogeneous tasks generated by multiple vehicles simultaneously, improving the overall performance of the vehicle edge computing system.
The memory, as a non-transitory computer readable storage medium, may be used to store non-transitory software programs as well as non-transitory computer executable programs. In addition, the memory may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device.
In some embodiments, the memory optionally includes memory remotely located relative to the processor, the remote memory being connectable to the processor through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The embodiments described in the present invention are for more clearly describing the technical solutions of the embodiments of the present invention, and do not constitute a limitation on the technical solutions provided by the embodiments of the present invention, and those skilled in the art can know that, with the evolution of technology and the appearance of new application scenarios, the technical solutions provided by the embodiments of the present invention are applicable to similar technical problems.
It will be appreciated by persons skilled in the art that the embodiments of the invention are not limited by the illustrations, and that more or fewer steps than those shown may be included, or certain steps may be combined, or different steps may be included.
The above described apparatus embodiments are merely illustrative, wherein the units illustrated as separate components may or may not be physically separate, i.e. may be located in one place, or may be distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
Those of ordinary skill in the art will appreciate that all or some of the steps of the methods, systems, functional modules/units in the devices disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof.
The terms "first," "second," "third," "fourth," and the like in the description of the application and in the above figures, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order.
It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be understood that in the present application, "at least one (item)" means one or more, and "a plurality" means two or more. "and/or" for describing the association relationship of the association object, the representation may have three relationships, for example, "a and/or B" may represent: only a, only B and both a and B are present, wherein a, B may be singular or plural. The character "/" generally indicates that the context-dependent object is an "or" relationship. "at least one of" or the like means any combination of these items, including any combination of single item(s) or plural items(s). For example, at least one (one) of a, b or c may represent: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", wherein a, b, c may be single or plural.
In the several embodiments provided by the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of elements is merely a logical functional division, and there may be additional divisions of actual implementation, e.g., multiple elements or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be embodied in essence or a part contributing to the prior art or all or part of the technical solution in the form of a software product stored in a storage medium, including multiple instructions for causing an electronic device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the methods of the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk, an optical disk, or other various media capable of storing a program.
While the preferred embodiments of the present application have been described in detail, the embodiments of the present application are not limited to the above-described embodiments, and those skilled in the art can make various equivalent modifications or substitutions without departing from the spirit of the embodiments of the present application, and these equivalent modifications or substitutions are included in the scope of the embodiments of the present application as defined in the appended claims.

Claims (8)

1. The method for computing and unloading and resource allocation based on the Internet of vehicles is characterized by comprising the following steps of:
If a plurality of vehicles in the vehicle-mounted edge computing system request task unloading, setting an unloading strategy of the vehicle and an iteration ending condition of a resource allocation strategy of an edge server; the iteration end condition at least includes: the difference between task completion delays of the unloading strategy and the resource allocation strategy of the front and back iteration is smaller than a preset value; the in-vehicle edge computing system includes at least: cloud server, edge server, and vehicle in coverage area of the edge server;
executing an iteration process until an unloading strategy of the current round of iteration and a resource allocation strategy of the current round of iteration which meet the iteration ending condition are used as a final unloading strategy and a final resource allocation strategy; wherein each round of iterative process comprises:
Obtaining an unloading strategy of the current wheel iteration according to a preset unloading method, wherein the unloading strategy comprises determining an unloading object corresponding to each vehicle in the plurality of vehicles; the preset unloading method is to calculate an unloading strategy by adopting a cooperative game theory to minimize task completion delay as a target according to a resource allocation strategy of the previous iteration; the obtaining the unloading strategy of the current round iteration according to the preset unloading method comprises the following steps:
Determining available unloading objects of each vehicle according to the resource allocation strategy of the previous iteration and the system state of the vehicle-mounted edge computing system; recursively generating unloading object sets of the vehicles from available unloading objects of each vehicle, and calculating the unloading object set with the biggest profit as an unloading strategy of the current round iteration; the benefit is related to task completion delays generated by the fact that the vehicles respectively select corresponding unloading objects;
Before recursively generating the unloading object set of the vehicle, the internet of vehicles-based computing unloading and resource allocation method further includes:
Calculating a first benefit obtained when a first unloading object is selected by a first vehicle; the first vehicle is any one of the plurality of vehicles, the first unloading object is any one of all available unloading objects of the first vehicle, and the first benefit is a benefit of the plurality of vehicles when the first vehicle selects the first unloading object; if the first benefit is less than the second benefit, rejecting a policy that the first vehicle selects the first offload object; the second benefit is a benefit of the plurality of vehicles when the first vehicle selects any one of the available offload objects other than the first offload object;
Obtaining a resource allocation strategy of current round iteration according to a preset resource allocation method, wherein the resource allocation strategy comprises determining computing resources corresponding to the allocation of the edge server to the required vehicle; the preset resource allocation method adopts DDPG to minimize task completion delay as target computing resource allocation according to the unloading strategy of the current round of iteration;
If the task completion delay of the unloading strategy of the current round of iteration and the resource allocation strategy of the current round of iteration meets the iteration ending condition, ending the iteration; and if the iteration ending condition is not met, starting the next iteration.
2. The internet of vehicles-based computing offload and resource allocation method of claim 1, wherein the benefit is represented by the following function:
U = −Σ_{i∈V(t)} T_i(t), wherein T_i(t) represents the completion delay of the task initiated by vehicle i at time t, and V(t) represents the set of vehicles initiating computing task requests at time t.
3. The internet of vehicles-based computing offload and resource allocation method of claim 1, wherein the state space in DDPG comprises:
s = (d, c, F), wherein d represents the size of the task, c represents the computational complexity of the task, and F represents the computing resources remaining at the edge server.
4. The internet of vehicles based computing offload and resource allocation method of claim 3, wherein the bonus space in DDPG comprises:
r = (T_avg − T) / (f − f_avg), wherein T_avg represents the task completion delay under average allocation, T represents the task completion delay when computing resources are allocated according to DDPG, f represents the computing resources the edge server allocates to the task in state s, and f_avg represents the computing resources the edge server would allocate to the task under an equal split.
5. The internet of vehicles-based computing offload and resource allocation method of claim 1, wherein the iteration end condition further comprises a maximum number of iterations;
And if the task completion delay of the unloading strategy of the current round of iteration and the resource allocation strategy of the current round of iteration meets the iteration ending condition, ending the iteration comprises:
If the difference between the task completion delay of the unloading strategy and the resource allocation strategy of the current iteration and the task completion delay of the unloading strategy and the resource allocation strategy of the previous iteration is smaller than a preset value, or if the current iteration number reaches the maximum iteration number, the iteration is ended.
6. A computing offloading and resource allocation system based on the Internet of Vehicles, comprising:
a request acquisition unit, configured to set an iteration end condition for the offloading strategy of the vehicles and the resource allocation strategy of the edge server if a plurality of vehicles in a vehicle-mounted edge computing system request task offloading; the iteration end condition at least includes: the difference between the task completion delays of the offloading strategy and the resource allocation strategy of two successive iterations is smaller than a preset value; the vehicle-mounted edge computing system at least includes: a cloud server, an edge server, and the vehicles within the coverage area of the edge server;
an iteration calculation unit, configured to execute an iterative process until the offloading strategy and the resource allocation strategy of the current iteration satisfy the iteration end condition and are taken as the final offloading strategy and the final resource allocation strategy; wherein each round of the iterative process comprises:
obtaining the offloading strategy of the current iteration according to a preset offloading method, including determining the offloading object corresponding to each of the plurality of vehicles; the preset offloading method calculates the offloading strategy using cooperative game theory with the goal of minimizing task completion delay, based on the resource allocation strategy of the previous iteration;
wherein obtaining the offloading strategy of the current iteration according to the preset offloading method comprises:
determining the available offloading objects of each vehicle according to the resource allocation strategy of the previous iteration and the system state of the vehicle-mounted edge computing system; recursively generating sets of offloading objects for the vehicles from each vehicle's available offloading objects, and taking the offloading object set with the greatest benefit as the offloading strategy of the current iteration; the benefit is related to the task completion delays produced when the vehicles select their respective offloading objects;
wherein, before recursively generating the sets of offloading objects, the iteration calculation unit is further configured to:
calculate a first benefit obtained when a first vehicle selects a first offloading object; the first vehicle is any one of the plurality of vehicles, the first offloading object is any one of the first vehicle's available offloading objects, and the first benefit is the benefit of the plurality of vehicles when the first vehicle selects the first offloading object; and if the first benefit is less than a second benefit, reject the strategy in which the first vehicle selects the first offloading object; the second benefit being the benefit of the plurality of vehicles when the first vehicle selects any available offloading object other than the first offloading object;
obtaining the resource allocation strategy of the current iteration according to a preset resource allocation method, including determining the computing resources the edge server allocates to each requesting vehicle; the preset resource allocation method uses DDPG to allocate computing resources with the goal of minimizing task completion delay, based on the offloading strategy of the current iteration; and
ending the iteration if the task completion delay of the offloading strategy and the resource allocation strategy of the current iteration satisfies the iteration end condition; and starting the next iteration if the iteration end condition is not satisfied.
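The alternation that claim 6's iteration calculation unit performs — a cooperative-game offloading step given the last allocation, then a DDPG allocation step given the new offloading, until the delay improvement falls below a preset value — can be sketched as follows. The three callables stand in for the game solver, the DDPG agent, and the system delay model; they are placeholders, not the patented algorithms:

```python
def jointly_optimize(init_allocation, offload_step, allocate_step, delay_of,
                     preset_eps=1e-3, max_iters=100):
    """Alternate the two sub-solvers until the change in task completion
    delay between successive iterations is below preset_eps, or max_iters
    rounds have run. Returns the final offloading and allocation strategies."""
    allocation = init_allocation
    prev_delay = float("inf")
    for _ in range(max_iters):
        offloading = offload_step(allocation)    # cooperative-game offloading step
        allocation = allocate_step(offloading)   # DDPG resource-allocation step
        delay = delay_of(offloading, allocation)
        if abs(prev_delay - delay) < preset_eps:
            break                                # converged: delays stabilized
        prev_delay = delay
    return offloading, allocation
```

With stub solvers whose delay shrinks geometrically, the loop halts once successive delays agree to within the preset value.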
7. An electronic device, comprising: at least one control processor and a memory communicatively connected to the at least one control processor; wherein the memory stores instructions executable by the at least one control processor to enable the at least one control processor to perform the Internet of Vehicles-based computing offloading and resource allocation method of any one of claims 1 to 5.
8. A computer-readable storage medium storing computer-executable instructions for causing a computer to perform the Internet of Vehicles-based computing offloading and resource allocation method of any one of claims 1 to 5.
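The benefit-comparison pruning recited in claim 6 — rejecting a vehicle's candidate offloading object when some alternative yields the fleet a higher benefit — can be sketched as keeping only non-dominated candidates. The benefit function is a stand-in for the patented utility, which the claims define only in terms of task completion delay:

```python
def prune_offload_objects(vehicle, candidates, benefit_of):
    """Keep only offloading objects that are not strictly dominated:
    a candidate is dropped if any alternative gives the vehicles a
    higher overall benefit, per claim 6's pruning step."""
    kept = []
    for obj in candidates:
        b = benefit_of(vehicle, obj)
        if all(b >= benefit_of(vehicle, alt)
               for alt in candidates if alt != obj):
            kept.append(obj)                     # no alternative beats obj
    return kept
```

Pruning shrinks the search space before the recursive generation of offloading-object sets, which is what makes the coalition search tractable.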
CN202410229713.9A 2024-02-29 2024-02-29 Computing unloading and resource allocation method and system based on Internet of vehicles Active CN117891613B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410229713.9A CN117891613B (en) 2024-02-29 2024-02-29 Computing unloading and resource allocation method and system based on Internet of vehicles

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410229713.9A CN117891613B (en) 2024-02-29 2024-02-29 Computing unloading and resource allocation method and system based on Internet of vehicles

Publications (2)

Publication Number Publication Date
CN117891613A CN117891613A (en) 2024-04-16
CN117891613B true CN117891613B (en) 2024-05-31

Family

ID=90641322

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410229713.9A Active CN117891613B (en) 2024-02-29 2024-02-29 Computing unloading and resource allocation method and system based on Internet of vehicles

Country Status (1)

Country Link
CN (1) CN117891613B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110941667A (en) * 2019-11-07 2020-03-31 北京科技大学 Method and system for calculating and unloading in mobile edge calculation network
CN112685163A (en) * 2021-01-06 2021-04-20 北京信息科技大学 Computing unloading method based on mobile edge computing and mobile edge computing server
CN113543074A (en) * 2021-06-15 2021-10-22 南京航空航天大学 Joint computing migration and resource allocation method based on vehicle-road cloud cooperation
CN113821270A (en) * 2021-07-29 2021-12-21 长沙理工大学 Task unloading sequence prediction method, decision-making method, electronic device and storage medium
CN114745389A (en) * 2022-05-19 2022-07-12 电子科技大学 Computing offloading method for mobile edge computing system
CN115696452A (en) * 2022-10-21 2023-02-03 云南大学 Game method for joint optimization of unloading decision and resource allocation in cloud-edge cooperative computing
WO2023040022A1 (en) * 2021-09-17 2023-03-23 重庆邮电大学 Computing and network collaboration-based distributed computation offloading method in random network
WO2023160012A1 (en) * 2022-02-25 2023-08-31 南京信息工程大学 Unmanned aerial vehicle assisted edge computing method for random inspection of power grid line

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
A Computation Offloading Algorithm Based on Game Theory for Vehicular Edge Networks; Yujiong Liu et al.; 2018 IEEE International Conference on Communications (ICC); 2018-07-30; pp. 1-6 *
V2X multi-node cooperative distributed offloading strategy; Cao Dun et al.; Journal on Communications; 2022-02-28; pp. 185-194 *

Also Published As

Publication number Publication date
CN117891613A (en) 2024-04-16

Similar Documents

Publication Publication Date Title
CN109669768B (en) Resource allocation and task scheduling method for edge cloud combined architecture
CN111405569A (en) Calculation unloading and resource allocation method and device based on deep reinforcement learning
CN110662238B (en) Reinforced learning scheduling method and device for burst request under edge network
CN111835827A (en) Internet of things edge computing task unloading method and system
CN114340016B (en) Power grid edge calculation unloading distribution method and system
CN113064671A (en) Multi-agent-based edge cloud extensible task unloading method
CN111176820A (en) Deep neural network-based edge computing task allocation method and device
CN113760511B (en) Vehicle edge calculation task unloading method based on depth certainty strategy
CN113867843B (en) Mobile edge computing task unloading method based on deep reinforcement learning
CN113626104B (en) Multi-objective optimization unloading strategy based on deep reinforcement learning under edge cloud architecture
CN111711666A (en) Internet of vehicles cloud computing resource optimization method based on reinforcement learning
CN115297171B (en) Edge computing and unloading method and system for hierarchical decision of cellular Internet of vehicles
CN112469001A (en) Application migration method and device, electronic equipment and storage medium
CN114205791A (en) Depth Q learning-based social perception D2D collaborative caching method
CN113821270B (en) Task unloading sequence prediction method, decision method, electronic device and storage medium
CN116390125A (en) Industrial Internet of things cloud edge cooperative unloading and resource allocation method based on DDPG-D3QN
CN116339849A (en) Multi-user multi-task computing unloading method and system in mobile edge computing environment
CN114970834A (en) Task allocation method and device and electronic equipment
CN116233927A (en) Load-aware computing unloading energy-saving optimization method in mobile edge computing
CN113747507B (en) 5G ultra-dense network-oriented computing resource management method and device
CN116489712A (en) Mobile edge computing task unloading method based on deep reinforcement learning
CN117891613B (en) Computing unloading and resource allocation method and system based on Internet of vehicles
Yu et al. Virtual reality in metaverse over wireless networks with user-centered deep reinforcement learning
CN114815755A (en) Method for establishing distributed real-time intelligent monitoring system based on intelligent cooperative reasoning
CN114828047A (en) Multi-agent collaborative computing unloading method in 5G mobile edge computing environment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant