CN116261119A - Intelligent collaborative task calculation and on-demand resource allocation method in vehicle-mounted environment - Google Patents
- Publication number: CN116261119A
- Application number: CN202310109857.6A
- Authority: CN (China)
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W4/00—Services specially adapted for wireless communication networks; Facilities therefor
- H04W4/30—Services specially adapted for particular environments, situations or purposes
- H04W4/40—Services specially adapted for particular environments, situations or purposes for vehicles, e.g. vehicle-to-pedestrians [V2P]
- H04W4/44—Services specially adapted for particular environments, situations or purposes for vehicles, e.g. vehicle-to-pedestrians [V2P] for communication between vehicles and infrastructures, e.g. vehicle-to-cloud [V2C] or vehicle-to-home [V2H]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W28/00—Network traffic management; Network resource management
- H04W28/02—Traffic management, e.g. flow control or congestion control
- H04W28/0231—Traffic management, e.g. flow control or congestion control based on communication conditions
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W4/00—Services specially adapted for wireless communication networks; Facilities therefor
- H04W4/30—Services specially adapted for particular environments, situations or purposes
- H04W4/40—Services specially adapted for particular environments, situations or purposes for vehicles, e.g. vehicle-to-pedestrians [V2P]
- H04W4/46—Services specially adapted for particular environments, situations or purposes for vehicles, e.g. vehicle-to-pedestrians [V2P] for vehicle-to-vehicle communication [V2V]
Abstract
The invention belongs to the technical field of the Internet of Vehicles and discloses an intelligent collaborative task computing and on-demand resource allocation method for a vehicle-mounted environment. The collaborative task computation and on-demand resource allocation of the invention proceed along the following lines: the connection time of the links between vehicles and between vehicles and infrastructure is used to characterize connectivity, and a vehicle-to-vehicle offloading mechanism is designed to exploit available vehicle resources while a vehicle-to-infrastructure offloading mechanism guides the management of roadside resources; a user utility model is established from the delays incurred by tasks at different execution locations, so as to coordinate computing resources comprehensively; an optimization problem aiming at minimizing the user utility is formulated, and flexible scheduling and on-demand network resource allocation across the end, edge, and cloud sides are realized with an asynchronous advantage actor-critic algorithm, so as to guarantee the personalized service demands of different users.
Description
Technical Field
The invention belongs to the technical field of Internet of vehicles, and particularly relates to an intelligent collaborative task computing and on-demand resource allocation method in a vehicle-mounted environment.
Background
As a key driver of intelligent transportation systems, the Internet of Vehicles has entered a period of rapid development. With this trend, a large number of in-vehicle applications have penetrated different aspects of our lives, from driving safety and in-vehicle entertainment to traffic efficiency. These applications involve a large number of diverse tasks, resulting in exponential growth of traffic data, which inevitably places great pressure on network load and execution time. For vehicles with limited resources, coping with this problem is quite difficult. Owing to its abundant computing resources, traditional cloud computing is naturally considered an option for performing various tasks.
However, since the cloud is typically deployed far from the user, offloading to it can introduce unpredictable delay and jitter. Vehicular edge computing, which sinks cloud computing capability close to the user, achieves lower energy consumption and faster task response, and is a more effective new computing paradigm. In vehicular edge computing, computing resources are widely distributed across the cloud layer, the edge layer, and the terminal layer, and exhibit ubiquity, heterogeneity, and dynamics. This allows a vehicle to offload its tasks to the cloud, edge servers, and neighboring vehicles for computation according to service requirements.
Cloud computing is suitable for delay-tolerant tasks, while delay-sensitive tasks can be offloaded to roadside units (Road Service Unit, RSU) with relatively rich computing and storage resources, i.e., vehicle-to-infrastructure (Vehicle to Infrastructure, V2I) offloading. However, the limited computing power of roadside units intensifies resource competition, and efficient resource allocation is critical to guaranteeing the service needs of these users.
V2I offloading is popular in vehicular edge computing, but it has several limitations: deployment overhead makes it difficult to densely deploy edge servers along a road, and the growing number of vehicles within a roadside unit's communication range makes it difficult for the roadside unit to simultaneously satisfy all the service requirements from different vehicles.
Nowadays, vehicles are becoming more and more intelligent, and an intelligent vehicle is equipped with certain computing and caching resources in addition to communication capabilities. The popularity of intelligent vehicles provides a great opportunity to increase the edge capability of the vehicular network: by integrating a large number of idle vehicle resources, vehicles can serve as edge service nodes, i.e., vehicle-to-vehicle (Vehicle to Vehicle, V2V) offloading.
In the prior art, chinese patent publication No. CN110134507a discloses a cooperative computing method under an edge computing system on day 8 and 16 in 2019, in which an optimal configuration of computing resources is achieved by utilizing a scheme of cooperative computing with a plurality of terminals on an edge server side, but storage and computing capabilities of the edge server are limited, all types of services cannot be deployed, an unreasonable task migration policy can lead to an extended processing time of an edge computing task, experience of a vehicle-mounted terminal is degraded, and an unstable problem of the edge server due to overload is also caused.
Combining V2I offloading and V2V offloading is a promising computing paradigm that helps improve resource utilization, but several issues remain to be resolved:
(1) Due to the high mobility and frequently changing topology of vehicles, finding available service vehicles and selecting the best service-providing vehicle among them is a challenge;
(2) User demands are diverse; how to integrate these resources effectively and schedule them flexibly so as to guarantee the demands of different users is an urgent problem to be solved.
Therefore, to solve the above problems in the prior art, a method for intelligent collaborative task computing and on-demand resource allocation in a vehicle-mounted environment is needed.
Disclosure of Invention
Aiming at the problems in the related art, the invention provides an intelligent collaborative task computing and on-demand resource allocation method in a vehicle-mounted environment to overcome the above technical problems in the prior art. The invention builds a multi-resource orchestration architecture for the heterogeneous resources widely distributed on the end side, the edge side, and the cloud side; the architecture supports vehicle-to-vehicle offloading and vehicle-to-infrastructure offloading, where vehicle-to-vehicle offloading allows vehicles with idle resources to participate in task execution and vehicle-to-infrastructure offloading enables load balancing. On this basis, task offloading and resource scheduling are jointly optimized to minimize the system-wide user utility and guarantee the personalized service demands of different users. To handle the complexity of the vehicular network, an asynchronous advantage actor-critic algorithm is employed to find the optimal scheduling decision.
The technical scheme of the invention is realized as follows. An intelligent collaborative task computing and on-demand resource allocation method in a vehicle-mounted environment comprises the following steps:
Step 1: according to the running states of the vehicles, and combining the inter-vehicle distances and communication ranges, calculate the link connectivity between vehicles and between vehicles and infrastructure;
Step 2: design a vehicle-to-vehicle offloading strategy based on link connectivity;
Step 3: design a vehicle-to-infrastructure offloading strategy based on link connectivity;
Step 4: design a resource management model integrating vehicle-to-vehicle offloading and vehicle-to-infrastructure offloading;
Step 5: design an intelligent task computation and on-demand resource allocation algorithm.
The vehicle network under consideration consists of a central cloud, road service units deployed along the road, and vehicles travelling on the road; the M road service units and the N vehicles are denoted by the sets M = {1, ..., M} and N = {1, ..., N}, respectively.
Each road service unit is equipped with an edge server that is connected to the cloud and communicates with other road service units via wired links. Vehicles connect to the associated road service units using vehicle-to-infrastructure communications and communicate with each other using vehicle-to-vehicle communications.
The vehicle that generates a task to be executed is defined as a client vehicle.
Each road service unit covers certain client vehicles; the number and the set of client vehicles covered by road service unit j are denoted by N_j and N_j-set, respectively.
The task of client vehicle i is described as {s_i, c_i, ζ_i, δ_i}, where s_i represents the input task size, c_i the CPU cycles required to process the task, ζ_i the delay constraint for performing the task, and δ_i the task priority.
The task may be handled by vehicle-to-vehicle offloading, vehicle-to-infrastructure offloading, or cloud computing. The offloading decision variables x_i0, x_ij, x_i(M+1) ∈ {0, 1} indicate whether the task is performed by vehicle-to-vehicle offloading, by vehicle-to-infrastructure offloading to road service unit j, or by the cloud, respectively.
Each task can select only one offloading mode, i.e., x_ij = 1 for exactly one j ∈ {0, 1, ..., M+1}, meaning that the task is processed by the corresponding mode.
Therefore, the following equation holds:
Σ_{j=0}^{M+1} x_ij = 1.
The high mobility of vehicles is prone to cause intermittent connections, which jeopardizes the success of data transmission. Therefore, link connectivity is the key to ensuring that computation offloading succeeds.
further, in the first step, the link connectivity may be represented by a link connection time; the vehicles are between the client vehicles and available surrounding vehicles; the vehicle facilities are between the client vehicles and the infrastructure.
Further, in Step 2, the vehicle-to-vehicle offloading strategy is designed based on link connectivity: the link connection time between vehicles is calculated as in Step 1, link connectivity is represented by this connection time, and the vehicle-to-vehicle offloading strategy is then designed accordingly. In vehicle-to-vehicle offloading, a client vehicle offloads its tasks to available surrounding vehicles for execution.
Further, in Step 3, the vehicle-to-infrastructure offloading strategy is designed analogously: the link connection time between vehicle and infrastructure is calculated as in Step 1, link connectivity is represented by this connection time, and the vehicle-to-infrastructure offloading strategy is designed accordingly. In vehicle-to-infrastructure offloading, a client vehicle offloads its tasks to the infrastructure for execution.
A free-flow traffic model is considered: vehicles move at constant speeds along the x-axis. Assume that vehicle i starts from initial position p_i(0) and moves in the forward direction at speed v_i.
1) Vehicle-to-vehicle connectivity: any two adjacent vehicles i and j can communicate with each other at time t when the inter-vehicle distance d_ij(t) is smaller than their communication range R_v. Their link connection time is therefore:
l_ij = max{ t | d_ij(τ) < R_v for all 0 < τ < t }.
Without loss of generality, assume vehicle j is in front of vehicle i with initial gap d_ij(0) = p_j(0) - p_i(0) ≥ 0. Their link connection times under different conditions are calculated as follows.
When vehicles i and j move in the same direction, if v_j - v_i > 0, the link connection time is:
l_ij = (R_v - d_ij(0)) / (v_j - v_i);
if v_j - v_i < 0, the link connection time is:
l_ij = (R_v + d_ij(0)) / (v_i - v_j).
When vehicles i and j move in different directions, the relative speed is v_i + v_j, and the link connection time is l_ij = (R_v - d_ij(0)) / (v_i + v_j) when they move apart, or l_ij = (R_v + d_ij(0)) / (v_i + v_j) when vehicle i approaches and passes vehicle j.
2) Vehicle-to-infrastructure connectivity: assume that the road service unit k associated with vehicle i is located at coordinate p_k.
The distance that vehicle i can still travel before leaving the communication range of this road service unit is then:
d_ik = p_k + R_r - p_i(0),
where R_r represents the communication range of the road service unit.
The time required for vehicle i to leave the communication range of road service unit k is:
l_ik = d_ik / v_i.
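The dwell time within a road service unit's coverage can be sketched the same way, assuming the vehicle moves forward past the unit; p_i0, p_k, and R_r denote the vehicle's initial position, the unit's position, and its range (names are illustrative):

```python
def v2i_connection_time(p_i0: float, v_i: float, p_k: float, R_r: float) -> float:
    """Time until vehicle i leaves the coverage of the road service unit at p_k."""
    remaining = (p_k + R_r) - p_i0       # distance left before exiting coverage
    return remaining / v_i

# 400 m of coverage remain; at 20 m/s the vehicle stays connected for 20 s.
t = v2i_connection_time(p_i0=100.0, v_i=20.0, p_k=300.0, R_r=200.0)
```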
in addition to local processing, the client vehicle may offload its tasks to available surrounding vehicles for execution; this allows more vehicles to participate in the task processing in a distributed manner, thereby improving the resource utilization rate and speeding up the processing.
The use of available surrounding vehicles includes three key steps:
1) Service vehicle discovery: the client vehicle sends service requests to its single-hop and multi-hop neighbor vehicles; a service request includes the task attributes (e.g., task size) and service requirements (e.g., computing-resource and delay constraints). Upon receiving a service request, each vehicle keeps forwarding it until the maximum hop count is reached. Meanwhile, each vehicle determines whether it satisfies the specified criteria; if so, it accepts the service request and becomes a candidate service vehicle by sending its status information (such as computing power) to the client vehicle. Any vehicle may act as a service vehicle only if the routing-path connectivity between it and the client vehicle remains valid until the computation result is successfully sent back to the client vehicle.
2) Routing path determination: there may be more than one routing path between a client vehicle and a candidate service vehicle; a routing path typically consists of multiple one-hop links. Because of vehicle mobility, frequent link breaks in the vehicular network are unavoidable, so link connectivity is usually regarded as an important metric for determining the optimal routing path. The routing path with the greatest connectivity is preferred, since it greatly increases the likelihood that the computation result is successfully delivered from the service vehicle to the client vehicle.
3) Task allocation policy: the client vehicle collects all information about the candidate service vehicles and makes an optimal computation offloading decision according to the task's service requirements and certain constraints. Based on the offloading decision, the task is sent to the appropriate service vehicle for computation, and finally the computation result is returned from the service vehicle to the client vehicle.
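Step 2) above can be sketched by treating a path's connectivity as the connection time of its weakest one-hop link and preferring the path with the largest such bottleneck; the candidate paths and their per-hop connection times below are illustrative assumptions:

```python
def best_routing_path(paths: list[list[float]]) -> int:
    """Return the index of the path whose bottleneck (minimum per-hop
    connection time) is largest, i.e. the most stable multi-hop route."""
    return max(range(len(paths)), key=lambda p: min(paths[p]))

candidate_paths = [
    [12.0, 30.0, 8.0],   # bottleneck link survives 8 s
    [15.0, 11.0],        # bottleneck link survives 11 s  <- preferred
    [9.0, 40.0],         # bottleneck link survives 9 s
]
chosen = best_routing_path(candidate_paths)
```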
The transmission rate between two adjacent vehicles i and j at time t is expressed as:
r_ij(t) = B log2(1 + P_t · g_V2V(t) / ω),
where B represents the bandwidth, P_t the transmit power, g_V2V(t) the channel gain, which can be expressed as a function of the inter-vehicle distance, and ω the noise power.
Due to vehicle mobility, the inter-vehicle distance varies with time, resulting in a time-varying transmission rate. The average transmission rate over the connection time is:
r_ij^avg = (1 / l_ij) ∫_0^{l_ij} r_ij(t) dt.
Given the input task size, the one-hop data transfer time between vehicles i and j is:
t_ij = s_i / r_ij^avg.
For a routing path consisting of u one-hop links, i.e., i → 1 → 2 → ... → u-1 → k, the time required for the data to travel from client vehicle i to service vehicle k along the multi-hop routing path is:
T_ik^tx = Σ_{(a,b) on the path} s_i / r_ab^avg.
the time required for the service vehicle k to handle the task offloaded by the client vehicle i is calculated as:
wherein ,fk Indicating a computing power of the service vehicle;
the total time required to complete the task offloading is:
all available candidate service vehicles form a set theta, and vehicles with minimum task execution delay are preferentially selected:
the client vehicle may offload its tasks to an edge server for implementation,
the unloading process mainly comprises task uploading, task execution and result feedback, and the following is specific process content:
1) Task uploading: for vehicle-to-infrastructure offloading, each task vehicle first transmits its task to the associated edge server; the transmission delay is related to the bandwidth allocated by that edge server, the distance, and so on.
Let B_j be the bandwidth resource of edge server j and B_ij the bandwidth allocated to vehicle i.
The data transmission rate between them is then:
r_ij(t) = B_ij log2(1 + P_t · g_V2I(t) / ω),
where P_t represents the transmit power, ω the noise power, and g_V2I(t) the channel gain as a function of the vehicle-to-infrastructure distance.
The sum of all bandwidth resources allocated to the client vehicles within the communication range of edge server j cannot exceed B_j:
Σ_i B_ij ≤ B_j.
Due to vehicle mobility, the distance between them varies with time, resulting in a time-varying transmission rate. The average transmission rate is:
r_ij^avg = (1 / l_ij) ∫_0^{l_ij} r_ij(t) dt.
The delay required for vehicle i to transmit a task to the associated edge server j is:
t_ij^up = s_i / r_ij^avg.
2) Task execution: client vehicle i uploads its task to the nearby associated edge server j. Upon receiving the offloaded task, edge server j can either process it locally or migrate it to another available edge server; service migration occurs when the computing power of edge server j is insufficient to provide the service, which enables load balancing but incurs additional overhead. Let k denote the edge server selected to perform the task. Within the service area covered by the same edge server, different client vehicles compete for the limited channel resources, so each edge server should reasonably allocate its limited computing resources among the tasks of different client vehicles. Let F_k denote the computing power of edge server k and f_ik the computing resources allocated to task i. The delay required for edge server k to compute task i is then:
t_ik^comp = c_i / f_ik.
the sum of all computing resources allocated to the user is smaller than the computing power of edge server k:
wherein ,Fj Computing resource denoted edge server j, f ij Representing computing resources allocated to task i for server j;
If k and j are the same edge server, there is no service migration; otherwise, additional service migration time is incurred:
t_jk^mig = h_jk · λ,
where h_jk is the number of hops required for task migration between the two edge servers and λ is the one-hop migration time.
The total delay for processing task i by vehicle-to-infrastructure offloading is expressed as:
T_i^V2I = t_ij^up + t_jk^mig + t_ik^comp.
3) Result feedback: after the computation is completed, the selected edge server is responsible for transmitting the computation result to the client vehicle; since the result size is much smaller than the input data size, the transmission time for returning the result is ignored.
To ensure that the computation result can be successfully sent back to the client vehicle, the total processing delay should not exceed the connection time between the client vehicle and its associated edge server:
T_i^V2I ≤ l_ij.
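The vehicle-to-infrastructure delay model above can be sketched as follows: upload to the associated server j, an optional migration of h hops at λ seconds per hop when the executing server k differs from j, then computation with the allocated resources. All numbers are illustrative:

```python
def v2i_total_delay(s_i: float, c_i: float, rate_ij: float, f_ik: float,
                    hops_jk: int = 0, lam: float = 0.0) -> float:
    """Upload + optional migration + compute delay for V2I offloading."""
    upload = s_i / rate_ij               # task upload to associated server j
    migrate = hops_jk * lam              # zero when k and j are the same server
    compute = c_i / f_ik                 # execution at the selected server k
    return upload + migrate + compute

local = v2i_total_delay(s_i=8.0, c_i=40.0, rate_ij=4.0, f_ik=10.0)
migrated = v2i_total_delay(s_i=8.0, c_i=40.0, rate_ij=4.0, f_ik=20.0,
                           hops_jk=2, lam=0.5)
```

The example also shows why migration can pay off: two migration hops (1 s) are worth it here because the remote server computes twice as fast.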
Depending on the deployment location relative to the user, cloud computing is suited to delay-tolerant applications, while edge computing focuses on providing delay-sensitive services.
When the edge capacity is insufficient to meet the user's needs, the cloud can provide the necessary assistance. The total time for the cloud to process the task from client vehicle i mainly includes three parts: the transmission time between client vehicle i and the associated edge server j, i.e., t_ij^up; the transmission time between edge server j and the cloud, i.e., t_jc^tx; and the cloud computing time, i.e., t_i^cloud:
T_i^C = t_ij^up + t_jc^tx + t_i^cloud.
The utility of a user may be characterized by its task execution delay:
U_i = x_i0 · T_i^V2V + Σ_{j=1}^{M} x_ij · T_i^V2I + x_i(M+1) · T_i^C.
Our goal is to minimize the system-wide user utility, which is defined as the weighted sum of all user utilities.
The computing resource allocation policies of the different edge servers are expressed as F = {f_ik}, and the channel resource scheduling policies of the different edge servers are expressed as B = {B_ij}.
Then, the following optimization problem is formulated:
P: min_{x, F, B} Σ_i δ_i · U_i
s.t. C1: x_i0, x_ij, x_i(M+1) ∈ {0, 1};
C2: Σ_{j=0}^{M+1} x_ij = 1;
C3: Σ_i f_ik ≤ F_k;
C4: Σ_i B_ij ≤ B_j;
C5: T_i^V2I ≤ l_ij.
Here δ_i describes the priority of task i.
C1 represents that the task may be handled using vehicle-to-vehicle offloading, vehicle-to-infrastructure offloading, or cloud computing;
C2 ensures that a task can be handled by only one computing entity;
C3 and C4 represent the constraints on computing resources and bandwidth resources, respectively;
C5 indicates that the task should be completed successfully before the client vehicle leaves the communication range of its associated edge server.
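The objective (the priority-weighted sum of per-task delays) and constraint C2 can be sketched as follows; the delay matrix and priorities are illustrative assumptions, with columns ordered as V2V, V2I, cloud:

```python
def system_utility(x: list[list[int]], delays: list[list[float]],
                   priorities: list[float]) -> float:
    """Priority-weighted sum of the delays at each task's chosen entity."""
    total = 0.0
    for i, row in enumerate(x):
        assert sum(row) == 1             # C2: exactly one offloading mode per task
        j = row.index(1)                 # the chosen execution entity
        total += priorities[i] * delays[i][j]
    return total

x = [[1, 0, 0], [0, 0, 1]]               # task 0 -> V2V, task 1 -> cloud
delays = [[4.0, 6.0, 9.0], [7.0, 5.0, 3.0]]
priorities = [2.0, 1.0]
u = system_utility(x, delays, priorities)  # 2*4 + 1*3 = 11.0
```

A solver for problem P would search over x (and the resource splits that determine the delay entries) to minimize this quantity subject to C3–C5.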
The above optimization problem P is challenging to solve with conventional optimization tools. Deep reinforcement learning is considered an effective way to solve this problem, and the asynchronous advantage actor-critic algorithm is particularly advantageous in terms of computational performance.
To apply this algorithm, the optimization problem P needs to be restated as a Markov decision process, for which a state space, an action space, and a reward function are determined.
The Markov decision process is described by the tuple (S, A, P, R), where S represents the state space, A the action space, P the state transition probability, and R the reward:
1) State space: the state space mainly comprises available computing resources and positions of an edge server, available computing resources and positions of a vehicle, the number and the size of tasks and network bandwidth resources;
2) Action space: the action space comprises an unloading decision and a resource allocation strategy;
3) Reward: given the state space and the action space, the reward is designed according to the objective function so as to minimize the system utility.
For the intelligent algorithm, at each time step t the agent observes a state s_t, generates an action a_t, and then obtains a reward r_t by performing this action.
The goal of the intelligent algorithm is to find a policy, i.e., a mapping from states to actions, that maximizes the cumulative reward. The global network of the intelligent algorithm is a shared neural network model comprising an actor network and a critic network. There are multiple learning agents with the same network structure as the global network; each learning agent interacts with the environment independently and without interference. After each interaction with the environment, a learning agent computes the gradient of its neural network loss function and independently updates the shared parameters of the global network; at intervals, each agent updates its own neural network parameters from the global network parameters.
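A minimal single-worker sketch of the advantage actor-critic update described above, using linear actor and critic networks and toy dimensions that are illustrative assumptions (the full asynchronous method runs several such workers against shared global parameters):

```python
import numpy as np

rng = np.random.default_rng(0)
state_dim, n_actions = 4, 3
theta = np.zeros((n_actions, state_dim))   # actor (policy) parameters
w = np.zeros(state_dim)                    # critic (value) parameters
alpha, gamma = 0.1, 0.9                    # learning rate, discount factor

def policy(s):
    """Softmax policy over actions for state s."""
    logits = theta @ s
    e = np.exp(logits - logits.max())
    return e / e.sum()

def actor_critic_step(s, a, r, s_next):
    """One advantage actor-critic update: the TD error serves as the advantage."""
    global theta, w
    advantage = r + gamma * (w @ s_next) - (w @ s)
    w = w + alpha * advantage * s                    # critic: TD(0) update
    grad_log = -policy(s)[:, None] * s[None, :]      # d log pi(a|s) / d theta
    grad_log[a] += s                                 # add feature for taken action
    theta = theta + alpha * advantage * grad_log     # actor: policy-gradient step

s = rng.standard_normal(state_dim)
s_next = rng.standard_normal(state_dim)
probs = policy(s)                          # uniform before any learning
a = int(rng.choice(n_actions, p=probs))
actor_critic_step(s, a, r=1.0, s_next=s_next)
```

In the asynchronous variant each worker would accumulate such gradients locally and apply them to the shared global theta and w, periodically copying the global parameters back, as the text describes.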
The invention has the beneficial effects that:
(1) The intelligent collaborative task computing and on-demand resource allocation method in a vehicle-mounted environment constructs a multi-resource orchestration architecture for heterogeneous resources widely distributed on the end side, the edge side, and the cloud side; the architecture supports vehicle-to-vehicle offloading and vehicle-to-infrastructure offloading, where vehicle-to-vehicle offloading allows vehicles with idle resources to participate in task execution and vehicle-to-infrastructure offloading enables load balancing;
(2) On the basis of this architecture, task offloading and resource scheduling are jointly optimized to minimize the system-wide user utility (i.e., the weighted task execution delay) and guarantee the personalized service demands of different users;
(3) For the complexity of the vehicular network, an asynchronous advantage actor-critic algorithm is employed to find the optimal scheduling decision.
Drawings
FIG. 1 is a flow chart of a method for intelligent collaborative task computing and on-demand resource allocation in a vehicular environment of the present invention;
FIG. 2 is a graph of delay performance of different algorithms versus vehicle number.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
In the description of the present invention, it should be understood that the terms "center", "longitudinal", "lateral", "length", "width", "thickness", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", "clockwise", "counterclockwise", etc. indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings are merely for convenience in describing the present invention and simplifying the description, and do not indicate or imply that the apparatus or elements referred to must have a specific orientation, be configured and operated in a specific orientation, and thus should not be construed as limiting the present invention.
As shown in fig. 1, the method for intelligent collaborative task computing and on-demand resource allocation in a vehicle-mounted environment comprises the following steps:
Step 1: according to the running states of the vehicles, and combining the inter-vehicle distances and communication ranges, calculate the link connectivity between vehicles and between vehicles and infrastructure;
Step 2: design a vehicle-to-vehicle offloading strategy based on link connectivity;
Step 3: design a vehicle-to-infrastructure offloading strategy based on link connectivity;
Step 4: design a resource management model integrating vehicle-to-vehicle offloading and vehicle-to-infrastructure offloading;
Step 5: design an intelligent task computation and on-demand resource allocation algorithm.
The computation offloading methods supported by the multi-resource orchestration architecture provided by the embodiment of the invention comprise the following three aspects:
1. vehicle-to-vehicle computing offload:
modeling is carried out by adopting a free flow traffic model: initializing a vehicle position vector;
connectivity analysis: according to the running direction of the vehicles, calculating to obtain the link connection time between the vehicles by combining the distance between the vehicles and the communication range;
the using process of the surrounding vehicles comprises the following steps: service vehicle discovery, path determination, and vehicle resource allocation;
Calculate the one-hop and multi-hop data transmission times based on the transmission rate between adjacent vehicles, obtain the total delay by adding the time for the service vehicle to process the task, and preferentially select the vehicle with the minimum task execution delay.
2. Vehicle-to-infrastructure computing offload:
modeling is carried out by adopting a free flow traffic model: initializing a vehicle position vector and an infrastructure position vector;
connectivity analysis: calculating link connection time according to the distance traveled by the vehicle before the link between the vehicle and the infrastructure is disconnected;
the procedure for offloading to the edge server comprises: task uploading, task execution, and result feedback;
wherein, the task uploading includes:
the transmission delay is related to the bandwidth allocated by the edge server, the distance, and so on; B_j is the bandwidth resource of edge server j and b_ij is the bandwidth resource allocated to vehicle i; the data transmission rate between vehicle i and edge server j is then:

r_ij^V2I(t) = b_ij · log2(1 + P_t · g^V2I(t) / ω)

where b_ij represents the allocated bandwidth, P_t the transmit power, ω the noise power, and g^V2I(t) the channel gain as a function of the vehicle-infrastructure distance;
the delay required by vehicle i to transmit a task to the associated edge server j is:

T_ij^up = s_i / r̄_ij^V2I

where s_i is the input task size and r̄_ij^V2I is the average transmission rate over the connection;
wherein the task execution includes:
the delay required by edge server k to compute task i is expressed as:

T_ik^exec = c_i / f_ik

regarding the service migration problem, additional service migration time occurs when k and j are not the same edge server:

T_jk^mig = h_jk · λ

the total delay of the vehicle-to-infrastructure offloading of task i is expressed as:

T_i^V2I = T_ij^up + T_jk^mig + T_ik^exec

where T_ij^up denotes the delay required by vehicle i to transmit the task to the associated edge server j; T_jk^mig denotes the additional service migration time incurred when k and j are not the same edge server; and T_ik^exec denotes the delay required by edge server k to compute task i.
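The three-part delay above (upload, optional migration, execution) can be illustrated numerically; the task size, rate, CPU allocation, and hop count below are hypothetical, with only the 0.2 s one-hop migration time taken from the simulation section:

```python
# Illustrative computation of the vehicle-to-infrastructure delay:
# upload to the associated server j, optional migration to executing
# server k, then execution on k.

def v2i_delay(task_bits, cycles_per_bit, rate_bps, f_alloc_hz,
              hops_j_to_k=0, one_hop_migration_s=0.2):
    upload = task_bits / rate_bps                       # T_up
    migrate = hops_j_to_k * one_hop_migration_s         # T_mig (0 if k == j)
    execute = task_bits * cycles_per_bit / f_alloc_hz   # T_exec
    return upload + migrate + execute

local = v2i_delay(2e6, 297.62, 5e6, 3e9)                  # executed on j itself
migrated = v2i_delay(2e6, 297.62, 5e6, 3e9, hops_j_to_k=1)  # one-hop migration
```

The difference between the two calls is exactly the one-hop migration time, matching the case analysis above.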
3. Cloud computing:
cloud computing can accommodate delay-tolerant applications; when the edge capacity is insufficient to meet user demand, the cloud can provide the necessary assistance;
the total time for the cloud to process a task mainly comprises three parts: the transmission time between the client vehicle i and the associated edge server j; the transmission time between edge server j and the cloud; and the time for the cloud to compute the task.
The user utility is characterized by the task execution delay produced by the computation offloading method:

u_i = x_i0 · T_i^V2V + Σ_{j=1}^{M} x_ij · T_ij^V2I + x_i(M+1) · T_i^cloud

the optimization problem is then formulated as minimizing the weighted sum of all user utilities:

min_{x, f, b} Σ_i δ_i · u_i    s.t. C1-C5

where δ_i describes the priority of the task;
c1 represents that the task may be handled using vehicle-to-vehicle offloading, vehicle-to-infrastructure offloading, or cloud computing;
c2 ensures that a task can only be handled in one computing entity;
C3 and C4 represent the constraints on computing resources and bandwidth resources, respectively, where F_j and B_j represent the thresholds of the computing resource and the bandwidth resource;
C5 indicates that the task should be completed successfully before the client vehicle leaves the communication range of its associated edge server, where ζ_i represents the time threshold.
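The constraints C1-C5 above amount to a feasibility check on an offloading decision; a minimal sketch for a single edge server, in which the decision tuples, allocations, and thresholds are illustrative placeholders:

```python
# Sketch of the constraint checks C1-C5 for one edge server j.

def feasible(x, f, b, F_j, B_j, utilities, deadlines):
    """x[i] = (x_v2v, x_v2i, x_cloud) binary; f[i], b[i] = allocations."""
    for i, xi in enumerate(x):
        if any(v not in (0, 1) for v in xi):   # C1: binary offloading decisions
            return False
        if sum(xi) != 1:                       # C2: exactly one computing entity
            return False
        if utilities[i] > deadlines[i]:        # C5: finish before leaving range
            return False
    if sum(f) > F_j:                           # C3: computing-resource threshold
        return False
    if sum(b) > B_j:                           # C4: bandwidth-resource threshold
        return False
    return True

ok = feasible(x=[(0, 1, 0), (1, 0, 0)], f=[2e9, 0], b=[1e6, 0],
              F_j=5e9, B_j=2e6, utilities=[0.6, 1.5], deadlines=[1.0, 2.0])
```

A decision that assigns a task to two entities at once, or overdraws F_j or B_j, is rejected by the same checks.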
To minimize the user utility, the optimization problem is restated as a Markov decision problem; the state space, action space, and reward function are determined, and the optimization problem is solved using an asynchronous advantage actor-critic (A3C) algorithm.
On the basis of a multi-resource arrangement architecture, the invention performs joint optimization of task offloading and resource scheduling so as to minimize the system utility and satisfy the personalized service demands of different users.
The proposed algorithm is verified through a numerical simulation experiment:
three edge servers are deployed on one straight channel;
the computing power of each road service unit is uniformly distributed in [1,5] gigacycles/second, and the communication range is 1000 meters;
the number of vehicles is selected in [20, 30, 40], with half of the vehicles being customer vehicles and the other vehicles being service vehicles. The computing power of each vehicle is uniformly distributed in [200, 800] megacycles/second, and the communication range is 500 meters;
the task size is uniformly distributed in [1,5] megabits, and the required calculation intensity is 297.62 cycles/bit;
each task has the same priority and channel resources;
the average data transmission rate of the vehicle is 5 megabits/second;
the one-hop transmission time between two edge servers is 0.2 seconds;
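As a quick sanity check of the simulation parameters above, the time to ship a task at the stated average vehicle rate and to execute it on a mid-range road service unit (3 gigacycles/s, the midpoint of [1,5]) can be computed directly:

```python
# Sanity check of the simulation-parameter magnitudes listed above.
task_bits = 5e6            # 5 megabits (upper end of the [1,5] range)
rate_bps = 5e6             # 5 megabits/second average vehicle data rate
intensity = 297.62         # required computation intensity, cycles/bit
rsu_hz = 3e9               # 3 gigacycles/second (midpoint of [1,5])

upload_s = task_bits / rate_bps              # transmission time
compute_s = task_bits * intensity / rsu_hz   # execution time on the RSU
```

Transmission and computation are of the same order (about 1 s and 0.5 s), which is why the joint offloading decision matters in this setting.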
Fig. 2 shows the delay performance of all algorithms as the number of vehicles changes; the selected baseline algorithms are the random offloading algorithm (Random offloading) and the vehicular processing algorithm (Vehicular processing):
(1) Random offloading algorithm: all tasks generated by the client vehicles are randomly processed by the vehicles or the road service units;
(2) Vehicular processing: the client vehicle handles its tasks via vehicle-to-vehicle offloading.
From fig. 2 we can observe that as the number of vehicles increases, the latency exhibited by all three algorithms increases; the proposed algorithm (Our proposed algorithm) fully considers the deep aggregation and efficient collaboration of ubiquitous heterogeneous resources and shows a significant performance improvement over the other algorithms. In addition to edge servers, one-hop vehicle resources are also exploited to provide services; in particular, a deep reinforcement learning algorithm is adopted to obtain the optimal computation offloading strategy and resource management decisions, adapting to the dynamics, randomness, and time variability of the vehicle-mounted network.
Variations and modifications to the above would be obvious to persons skilled in the art to which the invention pertains from the foregoing description and teachings. Therefore, the invention is not limited to the specific embodiments disclosed and described above, but some modifications and changes of the invention should be also included in the scope of the claims of the invention. In addition, although specific terms are used in the present specification, these terms are for convenience of description only and do not limit the present invention in any way.
Claims (10)
1. The intelligent collaborative task computing and on-demand resource allocation method in the vehicle-mounted environment is characterized by comprising the following steps:
step one, according to the running state of the vehicles, calculating the link connectivity between vehicles and between vehicles and facilities by combining the inter-vehicle distance and the communication range;
step two, designing a vehicle-to-vehicle unloading strategy based on the link connectivity;
step three, designing a vehicle-to-facility unloading strategy based on the link connectivity;
step four, designing a resource management model integrating vehicle-to-vehicle unloading and vehicle-to-facility unloading;
step five, designing an intelligent task calculation and on-demand resource allocation algorithm;
in the first step, "between vehicles" refers to between a client vehicle and the available surrounding vehicles, and "between vehicles and facilities" refers to between a client vehicle and the infrastructure; the link connectivity can be represented by the link connection time;
in the second step, the vehicle-to-vehicle unloading strategy is designed based on the link connectivity: the link connection time between vehicles is calculated according to step one, the link connectivity is represented by the link connection time, and the vehicle-to-vehicle unloading strategy is then designed according to the link connectivity; in vehicle-to-vehicle offloading, the client vehicle offloads its tasks to available surrounding vehicles for execution;
considering the free-flow traffic model, vehicles move at constant speed along the x-axis; assume that vehicle i is located at an initial position (p_xi, p_yi) and moves in the positive direction at speed v_i;
the link connectivity calculation between vehicles comprises:
for any two adjacent vehicles i and j, when the inter-vehicle distance d_ij(t) at time t is smaller than their communication range R_v, they can communicate with each other; their link connection time is therefore:

l_ij = max{t | d_ij(τ) < R_v, 0 < τ < t}

where τ represents any time during the communication period between the two adjacent vehicles i and j;
step three, designing a vehicle-to-facility unloading strategy based on the link connectivity: the link connection time between the vehicle and the facility is calculated according to step one, the link connectivity is represented by the link connection time, and the vehicle-to-facility unloading strategy is then designed according to the link connectivity; in vehicle-to-facility offloading, the client vehicle offloads its tasks to the infrastructure for execution.
2. The method for intelligent collaborative task computing and on-demand resource allocation in a vehicular environment according to claim 1, wherein the calculation of the vehicle-to-vehicle link connectivity further comprises calculating the link connection time between vehicle j and vehicle i under different conditions;
without loss of generality, vehicle j is in front of vehicle i, and R_v represents the communication range of the vehicles;
the calculation of the link connection time between vehicles j and i under the different conditions comprises:

when the two vehicles move in the same direction and v_j - v_i > 0, the link connection time between vehicle j and vehicle i is:

l_ij = (R_v - d_ij) / (v_j - v_i)

when the two vehicles move in the same direction and v_j - v_i < 0, the link connection time between vehicle j and vehicle i is:

l_ij = (R_v + d_ij) / (v_i - v_j)

when the two vehicles move in different directions (toward each other), the link connection time between vehicle j and vehicle i is:

l_ij = (R_v + d_ij) / (v_i + v_j)
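A minimal sketch of these three cases, assuming vehicle j is ahead of vehicle i by distance d and already within range, and that the "different directions" branch means the vehicles approach each other (the original image equations are not reproduced in this record):

```python
import math

# Link-connection-time cases: vehicle j ahead of vehicle i by distance d,
# both within communication range R_v.
def link_time(d, v_i, v_j, R_v, same_direction=True):
    if same_direction:
        if v_j > v_i:                    # j pulls away until the gap reaches R_v
            return (R_v - d) / (v_j - v_i)
        if v_j < v_i:                    # i catches up, overtakes, then pulls away
            return (R_v + d) / (v_i - v_j)
        return math.inf                  # equal speeds: the gap never changes
    return (R_v + d) / (v_i + v_j)       # approaching head-on, passing, separating

t_away = link_time(d=100, v_i=20, v_j=30, R_v=500)    # 40.0 s
t_catch = link_time(d=100, v_i=30, v_j=20, R_v=500)   # 60.0 s
```

The equal-speed case yields an unbounded connection time, which is why connectivity-maximizing routing (claim 3) favors such links.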
to achieve vehicle-to-vehicle unloading, it is also necessary to complete the use of available surrounding vehicles.
3. The method for intelligent collaborative task computing and on-demand resource allocation in a vehicular environment according to claim 2, wherein the process of using the available peripheral vehicles includes the steps of:
1) Service vehicle discovery: the client vehicle sends service requests to single-hop and multi-hop vehicles; the service requests include the task attributes and service requirements; upon receiving a service request, each vehicle keeps forwarding it until the maximum hop count is reached; meanwhile, each vehicle judges whether it satisfies the specified criteria, and if so, it accepts the service request and becomes a service candidate vehicle by sending its state information to the client vehicle; any vehicle may act as a service vehicle only if the routing path between it and the client vehicle remains connected until the computing result is successfully sent back to the client vehicle;
2) Routing path determination: there may be more than one routing path between a client vehicle and its candidate serving vehicle, typically a routing path consisting of a plurality of one-hop links; frequent link breaks in the vehicle network are unavoidable due to the mobility of the vehicle; thus, link connectivity is generally considered an important indicator in determining an optimal routing path, with the routing path having the greatest connectivity being preferred, as it can greatly increase the likelihood of successful transmission of the calculation result from the service vehicle to the customer vehicle;
3) Task allocation policy: the client-side vehicle collects all information related to the candidate service vehicle, and makes an optimal calculation unloading decision according to task service requirements and certain constraint conditions; according to the unloading decision, the task is sent to a proper service vehicle for calculation, and finally, the calculation result from the service vehicle is returned to the client vehicle;
the transmission rate between two adjacent vehicles i and j at time t is expressed as:

r_ij^V2V(t) = B · log2(1 + P_t · g^V2V(t) / ω)

where B represents the bandwidth, P_t represents the transmit power, g^V2V(t) represents the channel gain, which can be expressed as a function of the inter-vehicle distance, and ω represents the noise power;
the average value of the transmission rate over the link connection time l_ij is:

r̄_ij^V2V = (1 / l_ij) ∫_0^{l_ij} r_ij^V2V(t) dt

given an input task of size s_i, the one-hop data transmission time between vehicles i and j is:

t_ij^hop = s_i / r̄_ij^V2V

the time required for the data to be sent from client vehicle i to service vehicle k along the multi-hop routing path is the sum of the one-hop transmission times over all links (m, n) on the path:

t_ik^trans = Σ_{(m,n)} s_i / r̄_mn^V2V

the time required for service vehicle k to process the task offloaded by client vehicle i is calculated as:

t_k^exec = c_i / f_k

where c_i represents the CPU cycles required to process the task and f_k represents the computing power of the service vehicle;
the total time required to complete the task offloading is:

T_ik^V2V = t_ik^trans + t_k^exec

all available candidate service vehicles form a set Θ, and the vehicle with the minimum task execution delay is preferentially selected:

k* = argmin_{k ∈ Θ} T_ik^V2V
4. The method for intelligent collaborative task computing and on-demand resource allocation in a vehicular environment according to claim 1, wherein the calculation of the link connection time between the vehicle and the facility comprises:
assume that the coordinates of the road service unit k associated with vehicle i are (p_xk^r, p_yk^r); the distance that vehicle i should travel before leaving the communication range of this road service unit is:

d_ik^r = p_xk^r - p_xi + sqrt(R_r^2 - (p_yk^r - p_yi)^2)

where R_r indicates the communication range of the road service unit;
the time required for vehicle i to leave the communication range of road service unit k is:

t_ik^r = d_ik^r / v_i
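The exit-point geometry above can be sketched directly; the coordinates and speed below are illustrative, with only the 1000 m range taken from the simulation section:

```python
import math

# Time for vehicle i (travelling along +x at speed v_i) to leave the
# circular coverage of road service unit k.
def time_to_leave(p_xi, p_yi, p_xk, p_yk, v_i, R_r):
    lateral = p_yk - p_yi
    # x-coordinate where the vehicle crosses the edge of the coverage circle
    exit_x = p_xk + math.sqrt(R_r ** 2 - lateral ** 2)
    return (exit_x - p_xi) / v_i

# vehicle at the origin, RSU 400 m ahead and 10 m off the road axis,
# vehicle speed 20 m/s, RSU communication range 1000 m
t_leave = time_to_leave(0.0, 0.0, 400.0, 10.0, 20.0, 1000.0)
```

This connection time is the budget against which the total offloading delay is checked in claim 7.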
to achieve vehicle-to-infrastructure offloading, the following offloading procedures also need to be completed:
the process is as follows: uploading a task;
a second flow: executing tasks;
and a process III: and (5) result feedback.
5. The method for intelligent collaborative task computing and on-demand resource allocation in a vehicle-mounted environment according to claim 4, wherein in a first process, the task upload comprises:
for vehicle-to-infrastructure offloading, each task vehicle first transmits its task to the associated edge server, and the transmission delay is related to the bandwidth allocated by that edge server, the distance, and so on;
B_j is the bandwidth resource of edge server j and b_ij is the bandwidth resource allocated to vehicle i;
the data transmission rate between the vehicle and the infrastructure is:

r_ij^V2I(t) = b_ij · log2(1 + P_t · g^V2I(t) / ω)

where b_ij represents the allocated bandwidth, P_t represents the transmit power, ω represents the noise power, and g^V2I(t) represents the channel gain as a function of the vehicle-infrastructure distance;
the sum of all bandwidth resources allocated to client vehicles within the communication range of edge server j cannot exceed B_j:

Σ_i b_ij ≤ B_j

due to the mobility of the vehicles, the distance between them varies with time, which causes a time-varying transmission rate; the average value of the transmission rate is:

r̄_ij^V2I = (1 / t_ij^r) ∫_0^{t_ij^r} r_ij^V2I(t) dt

where t_ij^r is the connection time between vehicle i and edge server j;
the delay required by vehicle i to transmit a task to the associated edge server j is:

T_ij^up = s_i / r̄_ij^V2I

where s_i indicates the input task size.
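The upload delay of this claim can be sketched as a Shannon-style rate averaged over a time-varying channel gain; the gain trace, bandwidth, power, and noise values below are hypothetical assumptions, not parameters from the patent:

```python
import math

# Upload delay sketch: r_ij(t) = b_ij * log2(1 + P_t * g(t) / w),
# averaged over the connection, then delay = s_i / average rate.
def v2i_rate(b_ij_hz, p_t, gain, noise):
    return b_ij_hz * math.log2(1 + p_t * gain / noise)

def upload_delay(task_bits, b_ij_hz, p_t, noise, gains_over_time):
    rates = [v2i_rate(b_ij_hz, p_t, g, noise) for g in gains_over_time]
    return task_bits / (sum(rates) / len(rates))   # s_i / r_avg

# channel gain decays as the vehicle-infrastructure distance grows
gains = [1e-6 / (1 + 0.1 * t) for t in range(10)]
delay = upload_delay(task_bits=2e6, b_ij_hz=1e6, p_t=0.1,
                     noise=1e-10, gains_over_time=gains)
```

Averaging the rate over the whole connection, rather than sampling it once, is what makes the delay estimate robust to the vehicle's motion.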
6. The method for intelligent collaborative task computing and on-demand resource allocation in a vehicular environment according to claim 5, wherein in a second flow, the task is executed, comprising:
the client vehicle i uploads its task to the nearby edge server j associated with it; upon receiving the offloaded task, edge server j can either process it locally or migrate it to another available edge server;
service migration occurs when the computing power of an edge server is insufficient to provide the service; it can achieve load balancing but incurs additional overhead; k denotes the edge server selected to execute the task; different client vehicles in the service area covered by the same edge server compete with each other for the limited channel resources, and each edge server should reasonably allocate computing resources to accomplish the tasks of the different client vehicles;
when serving multiple tasks, the selected edge server should effectively allocate its limited computing resources to the different tasks;
F_k denotes the computing power of edge server k, and f_ik denotes the computing resources allocated by server k to task i;
thus, the delay required by edge server k to compute task i is expressed as:

T_ik^exec = c_i / f_ik

the sum of all computing resources allocated to users must not exceed the computing power of edge server k:

Σ_i f_ik ≤ F_k

if k and j are the same edge server, there is no service migration; otherwise, additional service migration time is incurred:

T_jk^mig = h_jk · λ

where h_jk is the number of hops required for task migration between the two edge servers and λ is the one-hop migration time;
the total delay of the vehicle-to-infrastructure offloading of task i is expressed as:

T_i^V2I = T_ij^up + T_jk^mig + T_ik^exec
7. The method for intelligent collaborative task computing and on-demand resource allocation in a vehicle-mounted environment according to claim 6, wherein in a third process, the result feedback comprises:
after the calculation is completed, the selected edge server is responsible for sending the calculation result to the client vehicle; since the calculation result is much smaller than the input data, the transmission time required for returning it is ignored;
to ensure that the calculation result can be successfully sent back to the client vehicle, the total processing delay should not exceed the connection time between the client vehicle and its associated edge server:

T_i^V2I ≤ t_ij^r

where t_ij^r denotes the connection time between client vehicle i and edge server j;
based on the actual deployment location relative to the user, cloud computing can accommodate delay-tolerant applications, while edge computing focuses on providing delay-sensitive services;
when the edge capacity is insufficient to meet the users' needs, the cloud can provide the necessary assistance; the total time for the cloud to process the task from client vehicle i mainly comprises the following three parts: the transmission time between client vehicle i and the associated edge server j, the transmission time between edge server j and the cloud, and the time for the cloud to compute the task.
8. The method for intelligent collaborative task computing and on-demand resource allocation in a vehicular environment according to claim 3 or claim 7, wherein in step four, the resource management model integrating vehicle-to-vehicle offloading and vehicle-to-facility offloading is designed, including a user utility model built according to the delays generated by tasks at different execution locations;
the user utility can be characterized by the delay of its task execution:

u_i = x_i0 · T_i^V2V + Σ_{j=1}^{M} x_ij · T_ij^V2I + x_i(M+1) · T_i^cloud

where x_i0, x_ij, x_i(M+1) ∈ {0,1} are the offloading decisions indicating whether the task is executed by vehicle-to-vehicle offloading, vehicle-to-infrastructure offloading at edge server j, or the cloud, respectively;
Our goal is to minimize the system-wide user utility, which is defined as the weighted sum of all user utilities;
the computing resource allocation policies of the different edge servers are expressed as f = {f_ij | i ∈ N, j ∈ M}, and the channel resource scheduling policies of the different edge servers are expressed as b = {b_ij | i ∈ N, j ∈ M};
then, the optimization problem is formulated:

min_{x, f, b} Σ_i δ_i · u_i    s.t. C1-C5

where δ_i describes the priority of the task;
c1 represents that the task may be handled using vehicle-to-vehicle offloading, vehicle-to-infrastructure offloading, or cloud computing;
c2 ensures that a task can only be handled in one computing entity;
C3 and C4 represent the constraints on computing resources and bandwidth resources, respectively, where F_j and B_j represent the thresholds of the computing resource and the bandwidth resource;
C5 indicates that the task should be completed successfully before the vehicle leaves the communication range of its associated edge server, where ζ_i represents the time threshold.
9. The method for intelligent collaborative task computing and on-demand resource allocation in a vehicle-mounted environment according to claim 8, wherein in step five, the designing of the intelligent task computing and on-demand resource allocation algorithm comprises:
the optimization problem is restated as a Markov decision problem; the state space, the action space, and the reward function are determined, and the optimization problem is solved using an asynchronous advantage actor-critic (A3C) algorithm;
for the intelligent algorithm, at each time point t, the agent observes a state s_t, generates an action a_t, and then obtains a reward r_t by performing this action; the goal of the intelligent algorithm is to find a policy mapping states to actions that maximizes the cumulative reward; the global network of the intelligent algorithm is a shared neural network model comprising an actor network and a critic network;
there are a plurality of learning agents having the same network structure as the global network; each learning agent interacts with the environment independently and without interference; after interacting with the environment, each learning agent computes the gradient of its neural network loss function and independently updates the shared parameters of the global network; at regular intervals, each agent updates its own neural network parameters from the global network parameters.
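The actor-critic update at the core of each learner can be illustrated with a minimal tabular sketch; this is a single synchronous advantage-actor-critic step on a toy problem (the patent's A3C runs many such learners asynchronously against the shared global network, and all sizes and rates here are hypothetical):

```python
import numpy as np

n_states, n_actions = 4, 3
actor_w = np.zeros((n_states, n_actions))   # policy logits per state
critic_w = np.zeros(n_states)               # state-value estimates

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def a2c_step(s, a, r, s_next, gamma=0.9, lr=0.1):
    """One-step advantage actor-critic update for tabular state s."""
    # advantage A = r + gamma * V(s') - V(s)
    advantage = r + gamma * critic_w[s_next] - critic_w[s]
    # critic: move V(s) toward the TD target
    critic_w[s] += lr * advantage
    # actor: policy gradient, grad of log pi(a|s) is onehot(a) - pi(s)
    pi = softmax(actor_w[s])
    grad = -pi
    grad[a] += 1.0
    actor_w[s] += lr * advantage * grad
    return advantage

# repeatedly reward action 2 in state 0; its probability should rise
p_before = softmax(actor_w[0])[2]
for _ in range(50):
    a2c_step(s=0, a=2, r=1.0, s_next=1)
p_after = softmax(actor_w[0])[2]
```

The critic learns the value of the rewarded state while the actor shifts probability mass toward the rewarded action, which is the mechanism each asynchronous learner uses before pushing its gradients to the global network.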
10. The method for intelligent collaborative task computing and on-demand resource allocation in a vehicular environment according to claim 9, wherein the Markov decision problem is described by a tuple (S, A, P, R), where S represents the state space, A represents the action space, P represents the transition probability, and R represents the reward:
1) State space: the state space mainly comprises available computing resources and positions of an edge server, available computing resources and positions of a vehicle, the number and the size of tasks and network bandwidth resources;
2) Action space: the action space comprises an unloading decision and a resource allocation strategy;
3) Reward: given the state space and the action space, the reward is designed according to the objective function so as to minimize the system utility.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310109857.6A CN116261119A (en) | 2023-02-14 | 2023-02-14 | Intelligent collaborative task calculation and on-demand resource allocation method in vehicle-mounted environment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116261119A true CN116261119A (en) | 2023-06-13 |
Family
ID=86687472
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310109857.6A Pending CN116261119A (en) | 2023-02-14 | 2023-02-14 | Intelligent collaborative task calculation and on-demand resource allocation method in vehicle-mounted environment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116261119A (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116723526A (en) * | 2023-08-08 | 2023-09-08 | 北京航空航天大学 | Unmanned aerial vehicle-assisted network-connected vehicle queue random task allocation decision-making method |
CN116723526B (en) * | 2023-08-08 | 2023-10-24 | 北京航空航天大学 | Unmanned aerial vehicle-assisted network-connected vehicle queue random task allocation decision-making method |
CN117097619A (en) * | 2023-10-20 | 2023-11-21 | 北京航空航天大学 | Method and system for optimizing configuration of general computing memory resources by vehicle-road cloud cooperation |
CN117097619B (en) * | 2023-10-20 | 2023-12-15 | 北京航空航天大学 | Method and system for optimizing configuration of general computing memory resources by vehicle-road cloud cooperation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||