CN113613206A - Wireless heterogeneous Internet of vehicles edge unloading scheme based on reinforcement learning - Google Patents

Wireless heterogeneous Internet of vehicles edge unloading scheme based on reinforcement learning

Info

Publication number
CN113613206A
CN113613206A
Authority
CN
China
Prior art keywords
vehicles
task
vehicle
user
time slot
Prior art date
Legal status
Pending
Application number
CN202010537028.4A
Other languages
Chinese (zh)
Inventor
李帆远
林艳
闫帅
彭诺蘅
张一晋
束锋
Current Assignee
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Priority date
Filing date
Publication date
Application filed by Nanjing University of Science and Technology
Priority to CN202010537028.4A
Publication of CN113613206A
Legal status: Pending

Classifications

    • H: ELECTRICITY
      • H04: ELECTRIC COMMUNICATION TECHNIQUE
        • H04W: WIRELESS COMMUNICATION NETWORKS
          • H04W 4/00: Services specially adapted for wireless communication networks; Facilities therefor
            • H04W 4/30: Services specially adapted for particular environments, situations or purposes
              • H04W 4/40: Services specially adapted for particular environments, situations or purposes for vehicles, e.g. vehicle-to-pedestrians [V2P]
                • H04W 4/44: for communication between vehicles and infrastructures, e.g. vehicle-to-cloud [V2C] or vehicle-to-home [V2H]
                • H04W 4/46: for vehicle-to-vehicle communication [V2V]
          • H04W 28/00: Network traffic management; Network resource management
            • H04W 28/02: Traffic management, e.g. flow control or congestion control
              • H04W 28/0226: Traffic management based on location or mobility
              • H04W 28/0247: Traffic management based on conditions of the access network or the infrastructure network
          • H04W 52/00: Power management, e.g. TPC [Transmission Power Control], power saving or power classes
            • H04W 52/02: Power saving arrangements
              • H04W 52/0203: Power saving arrangements in the radio access network or backbone network of wireless communication networks
                • H04W 52/0206: Power saving arrangements in access points, e.g. base stations
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
      • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
        • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
          • Y02D 30/00: Reducing energy consumption in communication networks
            • Y02D 30/70: Reducing energy consumption in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

The invention discloses a reinforcement-learning-based edge offloading scheme for the wireless heterogeneous Internet of Vehicles. In the scheme, user vehicles support two offloading modes, V2I and V2V. To implement the invention, the AP first observes and collects environmental information in the road segment at the beginning of each time slot, including the locations of all base-station vehicles and user vehicles and the channel gains of all V2I and V2V channels in the segment. Second, based on the collected environment state, the task-offloading selections of all user vehicles in the time slot are determined through the DQN network. The AP then broadcasts the offloading selections to all relevant vehicles, so that each user vehicle offloads its task to the target edge server. Finally, at the end of the time slot, the AP receives feedback from all user vehicles on the computation rate achieved in the slot and uses this feedback as the reward to train the DQN network. Through training, the method obtains the optimal computation-task offloading selection for user vehicles in Internet-of-Vehicles environments with varying vehicle numbers and random dynamics, providing decisions for compute-intensive, delay-sensitive vehicular applications.

Description

Wireless heterogeneous Internet of vehicles edge unloading scheme based on reinforcement learning
Technical Field
The invention relates to the technical field of the Internet of Things, and in particular to mobile edge computing and the Internet of Vehicles.
Background
The concept of Mobile Edge Computing (MEC), originally proposed by the European Telecommunications Standards Institute (ETSI) in 2014, is defined as a new platform that provides IT and cloud-computing capabilities near users within the radio access network. In the MEC-based Internet of Things (IoT), a device may offload all or part of a computing task to an MEC server to speed up the task's computation and save the device's energy. The main technical problem then becomes whether, when, and how much of a computing task should be offloaded. Many studies have designed optimal strategies for this problem, meeting a wide variety of performance requirements. However, in the existing literature, vehicles appear in the MEC network only as served customers, and the edge servers in the MEC network are static. Given the explosive service demand of massive user devices, this can lead to a "service hole": the conventional edge servers cannot cope with bursts of communication and computation demand from all users. Furthermore, in the DQN literature, discretized channel gains are used as input state vectors, so performance is severely impaired as dimensionality grows, and the algorithm converges slowly when the model requires high channel-quantization accuracy.
Disclosure of Invention
In order to overcome the defects in the prior art, the invention provides a reinforcement-learning-based wireless heterogeneous Internet-of-Vehicles edge offloading scheme in which vehicles, as well as the base station of the traditional MEC architecture, can serve as edge servers providing computation-offloading services to users. The technical solution achieving this purpose is as follows; the specific steps of the reinforcement-learning-based wireless heterogeneous Internet-of-Vehicles offloading scheme are:
Step 1: At the beginning of each time slot, the AP observes and collects environmental information in the road segment, including the locations of all base-station vehicles and user vehicles and the channel gains of all V2I and V2V channels in the segment.
Step 2: Based on the collected environment state, determine the task-offloading selections of all user vehicles in the time slot through the DQN network. Each selection is one of the two offloading modes, V2I or V2V.
Step 3: The AP broadcasts the offloading selections to all relevant vehicles, so that each user vehicle offloads its task to the target edge server.
Step 4: At the end of each time slot, the AP receives feedback from all user vehicles on the computation rate of the slot.
Step 5: Train the DQN network using the feedback as the reward.
Step 6: Return to step 1 until no user vehicle remains in the road segment.
Compared with the prior art, the invention has the following notable advantages: the proposed offloading scheme extends the range of computing service and improves the scalability of the MEC network. While meeting the maximum acceptable delay, the scheme improves resource utilization to a certain extent and converges faster.
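The six steps above can be sketched as a per-slot control loop. Everything below is illustrative, not taken from the patent text: the function names are hypothetical, the environment is a random stand-in, and a trained DQN would replace the random policy.

```python
import random

def observe_environment(num_users, num_bases):
    """Step 1: positions of all vehicles plus V2I/V2V channel gains
    (random stand-ins for a 500 m road segment)."""
    return {
        "user_pos": [random.uniform(0, 500) for _ in range(num_users)],
        "base_pos": [random.uniform(0, 500) for _ in range(num_bases)],
        "v2i_gain": [random.random() for _ in range(num_users)],
        "v2v_gain": [[random.random() for _ in range(num_bases)]
                     for _ in range(num_users)],
    }

def choose_offloading(state, num_users, num_bases):
    """Step 2: one selection per user vehicle; -1 = V2I (offload to AP),
    k >= 0 = V2V (offload to base-station vehicle k).  A trained DQN
    would pick these from Q-values; here the choice is random."""
    return [random.randrange(-1, num_bases) for _ in range(num_users)]

def run_slot(num_users, num_bases):
    state = observe_environment(num_users, num_bases)        # step 1
    action = choose_offloading(state, num_users, num_bases)  # step 2
    # step 3: the AP would broadcast `action` to all relevant vehicles;
    # step 4: it then collects per-user computation-rate feedback.
    rates = [random.random() for _ in range(num_users)]
    reward = sum(rates)  # step 5: used as the return to train the DQN
    return action, reward

action, reward = run_slot(num_users=4, num_bases=4)
```

Step 6 simply wraps `run_slot` in a loop that stops once no user vehicle remains in the segment.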
Drawings
FIG. 1 is a flow chart of the present invention.
Fig. 2 is a schematic diagram of the V2I offloading scheme's infrastructure and offloading steps.
Fig. 3 is a typical example of the V2I offloading process, consisting of 3 time slots, illustrating new tasks entering the queue and completed tasks leaving it.
Fig. 4 is a schematic diagram of the V2V offloading scheme's infrastructure and offloading steps.
Fig. 5 shows the convergence performance of the DQN algorithm in Example 1. The abscissa is the training episode number; the ordinate is the total computation rate of the offloading scheme.
Fig. 6 compares the performance of the offloading scheme proposed by the invention in Example 1 against two reference schemes, pure V2V and pure V2I, for different numbers of vehicles.
Detailed Description
The present invention is further illustrated by the following figures and specific examples, which are merely illustrative and not limiting; all changes and modifications that would be obvious to those skilled in the art are intended to fall within the scope of the present invention and the appended claims.
Assume that every user vehicle has its own compute-intensive task and that its on-board computing unit cannot complete the task within the specified delay, so the task must be offloaded to a base-station vehicle or to the AP. Time is divided into time slots, and in each slot a user vehicle may choose to offload its computing task to any edge server in the road segment. If a user vehicle offloads its task to a base-station vehicle in a slot, it is said to select the Vehicle-to-Vehicle (V2V) offloading mode; if it offloads to the AP, it is said to select the Vehicle-to-Infrastructure (V2I) offloading mode. Consider the problem of which offloading target each user vehicle selects in a given time slot n. Let

c_{i,n} ∈ {-1, 0, 1, ..., N_U - 1}

be the selection variable of user vehicle i ∈ U in time slot n. When c_{i,n} = -1, user vehicle i selects the V2I offloading mode, i.e., offloads its task to the AP; when c_{i,n} ≥ 0, user vehicle i selects the V2V offloading mode, and the value of c_{i,n} is the serial number of the target base-station vehicle.
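A minimal sketch of how such a selection variable could be interpreted follows; the function name is hypothetical and not from the patent text.

```python
def decode_selection(c):
    """Interpret one user vehicle's selection variable c_i:
    -1 means V2I offloading to the AP; k >= 0 means V2V offloading
    to the base-station vehicle with serial number k."""
    if c == -1:
        return ("V2I", "AP")
    return ("V2V", c)

# Example: user vehicle 0 offloads to the AP, user vehicle 1 to
# base-station vehicle 2.
assert decode_selection(-1) == ("V2I", "AP")
assert decode_selection(2) == ("V2V", 2)
```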
The invention adopts the DQN algorithm: by observing the Internet-of-Vehicles environment and receiving computation-rate feedback from the user vehicles, it trains the DQN network so that all user vehicles in the network ultimately make the best computation-task offloading selections. The flow chart of the scheme is shown in Fig. 1.
To implement the V2I scheme, a task queue is introduced. The queue can be implemented with any memory, such as SRAM or DDR. A task uploaded from a user vehicle to the AP enters the tail of the queue, while the MEC server takes tasks from the head of the queue for computation. The architecture of the model is shown in Fig. 2. Completing each task requires the following four phases:
1) Phase 1: a user vehicle uploads its task file to the AP;
2) Phase 2: the task enters the queue to wait for computation;
3) Phase 3: the task is computed on the MEC server;
4) Phase 4: the task's computation result is returned to the user vehicle.
To further illustrate the steps of the V2I offloading mode, consider a V2I parallel offloading process consisting of 3 time slots, as in Fig. 3. In slot 0, the queue is initially empty and the MEC server is idle with no task executing. User vehicles 0 and 1 choose to upload their task files in V2I mode, and task0 and task1 join the queue at the end of slot 0 to wait for computation. At the beginning of slot 1, the MEC server fetches task0 from the head of the queue and computes it. Meanwhile, user vehicles 2 and 3 upload their task files in V2I mode, and task2 and task3 join the queue at the end of slot 1. At the beginning of slot 2, the MEC server fetches task1 and task2 from the head of the queue for computation, and no user vehicle selects V2I offloading in this slot. Finally, task3 remains in the queue at the end of slot 2, waiting for computation in a future slot.
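The three-slot trace above can be reproduced with a plain FIFO queue. The per-slot service counts below abstract how many tasks the MEC server happens to finish in each slot of the example; they are illustrative, not quantities defined in the patent.

```python
from collections import deque

# Arrival pattern from the Fig. 3 example: tasks join the tail at the
# end of the slot in which they were uploaded.
arrivals = {0: ["task0", "task1"], 1: ["task2", "task3"], 2: []}
# How many queued tasks the MEC server completes in each slot
# (0 in slot 0 since the queue starts empty).
served_per_slot = {0: 0, 1: 1, 2: 2}

queue = deque()
for slot in range(3):
    # Start of slot: the server fetches tasks from the head of the queue.
    done = [queue.popleft()
            for _ in range(min(served_per_slot[slot], len(queue)))]
    # End of slot: newly uploaded tasks join the tail.
    queue.extend(arrivals[slot])

remaining = list(queue)  # task3 is the only task still waiting
```

After slot 2 only `task3` remains, matching the description of Fig. 3.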
In a manner similar to V2I, a base-station vehicle selected by a user vehicle in V2V mode must complete three phases of work within one time slot:
1) Phase 1: first, receive the input task file (ITF) transmitted by the user vehicle;
2) Phase 2: then, compute the ITF on the on-board computing unit to obtain the corresponding output task file (OTF);
3) Phase 3: finally, transmit the OTF back to the user vehicle.
Let b be the time allocation factor, with 0 < b < 1; the time spent in phase 1 is set to bT, and the time spent in phase 2 is (1 - b)T. Since the size of the OTF is typically much smaller than that of the ITF, the time consumed by phase 3 is neglected. The case of a user vehicle and a base-station vehicle paired in V2V offloading mode is shown in Fig. 4.
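The slot split can be sketched as follows; the helper name is illustrative, not from the patent text.

```python
def v2v_phase_times(T, b):
    """Split one slot of length T between phase 1 (ITF upload, b*T)
    and phase 2 (on-board computation, (1-b)*T).  Phase 3, returning
    the much smaller OTF, is neglected as in the text."""
    assert 0 < b < 1, "time allocation factor must satisfy 0 < b < 1"
    return b * T, (1 - b) * T

upload_time, compute_time = v2v_phase_times(T=1.0, b=0.25)
```

With b = 0.25 and a unit slot, a quarter of the slot is spent uploading and the rest computing.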
Example 1
In this example, the algorithm is implemented in Python, with the deep neural network built on TensorFlow and Keras. Consider a two-lane, one-way road segment 500 meters long, with 4 base-station vehicles and 4 user vehicles distributed on it. The neural network adopts a 5-layer structure: the three hidden layers each have 128 nodes with a relu activation function, and the output layer uses a linear activation function. The learning rate of the DQN agent is 0.001, and ε = 0.1. For the serial model and the parallel model, γ = 0.1 and γ = 0.8, respectively.
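A plain-NumPy sketch of the forward pass of such a 5-layer network follows. The input and output dimensions are illustrative stand-ins (the actual state and action dimensions depend on the scenario), and the patent builds the network with TensorFlow/Keras rather than by hand.

```python
import numpy as np

rng = np.random.default_rng(0)
state_dim, action_dim = 20, 16  # illustrative, not from the patent

# 5 layers: input, three 128-node hidden layers (relu), linear output.
sizes = [state_dim, 128, 128, 128, action_dim]
weights = [rng.standard_normal((m, n)) * 0.1
           for m, n in zip(sizes, sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def q_values(state):
    """Map a state vector to one Q-value per offloading action."""
    h = state
    for W, b in zip(weights[:-1], biases[:-1]):
        h = np.maximum(h @ W + b, 0.0)   # relu hidden layers
    return h @ weights[-1] + biases[-1]  # linear output layer

q = q_values(rng.standard_normal(state_dim))
```

The greedy action is then `np.argmax(q)`, taken with probability 1 - ε under the ε-greedy policy.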
Fig. 5 shows the convergence performance of the reinforcement-learning-based wireless heterogeneous edge offloading scheme proposed by the invention. As the number of episodes increases, the computation rate of the model increases significantly until it reaches a relatively stable value, so the proposed scheme converges well. Since both vehicle movement and the communication channel have a degree of randomness, large fluctuations in the computation rate are still observed after convergence.
Fig. 6 depicts the computation rates achieved by the three offloading schemes for different numbers of base-station and user vehicles in the road segment. It is assumed here that the number of base-station vehicles equals the number of user vehicles, and all other parameters are identical. To reduce random fluctuations of the model, the data were averaged over 50 runs. The pure V2V (V2V-only) and pure V2I (V2I-only) schemes serve as baselines: in the pure V2V scheme, all user vehicles offload in V2V mode in every time slot; in the pure V2I scheme, all user vehicles offload in V2I mode in every time slot.
Among the three schemes, the scheme proposed by the invention achieves the highest computation rate, for the following reason: unlike the pure V2V and pure V2I schemes, the DQN in the proposed scheme undergoes sufficiently long reinforcement-learning training. It can therefore select the optimal offloading choice under different vehicle positions and channel states, and make fuller use of the computing resources of the AP and the base-station vehicles.

Claims (8)

1. A reinforcement-learning-based wireless heterogeneous Internet-of-Vehicles edge offloading scheme, characterized in that the specific steps are:
step 1: at the beginning of each time slot, the AP observes and collects environmental information in the road segment, including the locations of all base-station vehicles and user vehicles in the segment and the channel gains of all V2I and V2V channels;
step 2: based on the collected environment state, determine the task-offloading selections of all user vehicles in the time slot through the DQN network, each selection being one of the two offloading modes V2I and V2V;
step 3: the AP broadcasts the offloading selections to all relevant vehicles, so that each user vehicle offloads its task to the target edge server;
step 4: at the end of each time slot, the AP receives feedback from all user vehicles on the computation rate of the slot;
step 5: train the DQN network using the feedback as the reward;
step 6: return to step 1 until no user vehicle remains in the road segment.
2. The reinforcement-learning-based wireless heterogeneous Internet-of-Vehicles edge offloading scheme of claim 1, wherein the infrastructure comprises:
a two-lane, one-way road in an urban environment with a base station (Access Point, AP) on one side; the AP is connected to a mobile edge computing server (MEC server), the two being responsible for the communication function and the task-computing function, respectively; several vehicles move on the road, divided into user vehicles and base-station vehicles, each equipped with a communication module so that they form a vehicular network together with the AP; the user vehicles are represented as a set
U = {0, 1, ..., N_U - 1},
where N_U is the number of user vehicles on the road; the base-station vehicles are represented as a set
B = {0, 1, ..., N_B - 1},
where N_B is the number of base-station vehicles on the road;
to quantify the positions of the vehicles and the AP, a three-dimensional coordinate system (x, y, z) ∈ R^3 is established, where the x-axis runs along the road with its positive direction pointing in the direction of travel of the one-way road, the y-axis is perpendicular to the road and orthogonal to the x-axis, and the z-axis points vertically upward; the coordinates of the AP are set to (L/2, 0, H), where L is the length of the road segment within the coverage of the AP's signal; the coordinates of all vehicles on the first lane satisfy y_1 = W_lane / 2, and those of all vehicles on the second lane satisfy y_2 = 3 W_lane / 2, where W_lane is the lane width; vehicle height is neglected, i.e., the z-coordinate of every vehicle is set to 0;
each user vehicle has its own compute-intensive task, and since its computing unit cannot complete the task within the specified delay, the task must be offloaded to a base-station vehicle or the AP; the computing unit of a base-station vehicle is otherwise idle and can compute the task offloaded by a single user vehicle within one time slot; the MEC server connected to the AP has strong computing capability and can serve the offloaded computing tasks of one or multiple user vehicles within one time slot.
3. The reinforcement-learning-based wireless heterogeneous Internet-of-Vehicles edge offloading scheme of claim 1, wherein time is divided into several time slots, and in each slot a user vehicle may choose to offload its computing task to any edge server (the AP or a base-station vehicle) in the road segment; if a user vehicle offloads its task to a base-station vehicle in a slot, it is said to select the Vehicle-to-Vehicle (V2V) offloading mode; if it offloads to the AP, it is said to select the Vehicle-to-Infrastructure (V2I) offloading mode.
4. The reinforcement-learning-based wireless heterogeneous Internet-of-Vehicles edge offloading scheme of claim 1, wherein computing a task in the scheme is the process of converting an input task file (ITF) into an output task file (OTF); the cost of this process is proportional to the size of the ITF; the ITF size of vehicle i is denoted U_i, in bits; the number of processor cycles executed per second by the edge server is denoted f_i, in cycle/s; the time complexity of a task depends on the task's characteristics and the algorithm adopted and is expressed as the number of processor cycles needed to compute 1 bit of ITF, assumed to be a constant denoted φ, in cycle/bit; in summary, the task computation time is
t_compute = φ U_i / f_i.
5. The reinforcement-learning-based wireless heterogeneous Internet-of-Vehicles edge offloading scheme of claim 3, wherein in the V2I offloading mode there is a task queue; the queue can be any memory, such as SRAM or DDR; the AP must complete four phases of work within one time slot:
1) Phase 1: receive the task files uploaded by user vehicles;
2) Phase 2: put the tasks into the queue to wait for computation;
3) Phase 3: take tasks from the head of the queue in order and compute them on the MEC server;
4) Phase 4: return the tasks' computation results to the user vehicles.
6. The reinforcement-learning-based wireless heterogeneous Internet-of-Vehicles edge offloading scheme of claim 5, wherein in phase 3, consider a certain task t_i belonging to user vehicle i ∈ U, uploaded in time slot t_e with a maximum tolerable delay t_delay (t_delay > T); the size of the ITF uploaded in phase 1 is
U_i = T C_{i,AP};
in phase 2, t_i and its accompanying information enter the queue as a quadruple (t_i, i, t_e, t_delay); in phase 3, the time taken to execute t_i is
t_execute = φ U_i / f_AP,
where f_AP is the processing rate of the MEC server; the MEC server extracts the next task from the head of the queue immediately after the current task finishes, first analyzing its execution time: if the current time t_current satisfies
t_current + t_execute - t_e < t_delay,
the task can be completed within the acceptable maximum delay, so the MEC server executes it and removes it from the head of the queue; otherwise, if the inequality is not satisfied, the task is judged unable to finish within the acceptable maximum delay, so the MEC server abandons it, removes it from the head of the queue, and then analyzes the execution time of the next task at the head of the queue; if the queue is empty after a task finishes, the MEC server enters an idle state until a new task joins the queue.
7. The reinforcement-learning-based wireless heterogeneous Internet-of-Vehicles edge offloading scheme of claim 3, wherein in the V2V offloading mode one base-station vehicle can serve the task offloading of only one user vehicle; the base-station vehicle selected by a user vehicle in V2V mode must complete three phases of work within one time slot:
1) Phase 1: first, receive the ITF transmitted by the user vehicle;
2) Phase 2: then, compute the ITF on the on-board computing unit to obtain the corresponding OTF;
3) Phase 3: finally, transmit the OTF back to the user vehicle.
8. The reinforcement-learning-based wireless heterogeneous Internet-of-Vehicles edge offloading scheme of claim 1, wherein the Markov decision process in the DQN algorithm is structured as follows:
Agent: the base station that issues the offloading scheme; it interacts with the stochastic environment, i.e., the traffic conditions and channel conditions, with the goal of maximizing the return.
State: the locations of all base-station vehicles and user vehicles within the time slot, the gains of all V2I and V2V channels, and the length of the task queue.
Action: the action of a given time slot is the offloading selection of all user vehicles,
c = (c_0, c_1, ..., c_{N_U - 1}).
Return: the computation rate Q of the time slot;
to improve the training effect, a linear transformation is applied to Q so that the return values take both signs and lie approximately in [-1, 1]:
r = (Q - Q_b) / Q_b,
where Q_b is an offset value satisfying Q_b > 0; its specific value can be estimated by experiment.
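The return transformation of claim 8 can be sketched as follows. The exact form of the transform is not spelled out in the text (the original equation is an image placeholder), so the shift-and-rescale below is an assumption consistent with the stated properties: an offset Q_b > 0, returns of both signs, magnitudes roughly within [-1, 1].

```python
def normalized_return(Q, Q_b):
    """Assumed linear transform of the slot computation rate Q:
    shift by the offset Q_b > 0 and rescale, so that Q above the
    offset gives a positive return and Q below it a negative one."""
    assert Q_b > 0, "offset Q_b must be positive"
    return (Q - Q_b) / Q_b

# A slot that exactly matches the offset yields a return of 0;
# slower slots yield negative returns.
r_even = normalized_return(Q=2.0, Q_b=2.0)
r_slow = normalized_return(Q=1.0, Q_b=2.0)
```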
CN202010537028.4A 2020-06-12 2020-06-12 Wireless heterogeneous Internet of vehicles edge unloading scheme based on reinforcement learning Pending CN113613206A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010537028.4A CN113613206A (en) 2020-06-12 2020-06-12 Wireless heterogeneous Internet of vehicles edge unloading scheme based on reinforcement learning


Publications (1)

Publication Number Publication Date
CN113613206A true CN113613206A (en) 2021-11-05

Family

ID=78336320

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010537028.4A Pending CN113613206A (en) 2020-06-12 2020-06-12 Wireless heterogeneous Internet of vehicles edge unloading scheme based on reinforcement learning

Country Status (1)

Country Link
CN (1) CN113613206A (en)



Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107145387A * 2017-05-23 2017-09-08 A task-scheduling method based on deep reinforcement learning in vehicular network environments
CN109391681A * 2018-09-14 2019-02-26 MEC-based V2X mobility prediction and content-caching offloading scheme
CN109756378A * 2019-01-12 2019-05-14 An intelligent computation-offloading method for in-vehicle networks

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHANG Haibo et al.: "V2X offloading and resource allocation under SDN and MEC architectures", Journal on Communications (《通信学报》), page 2 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114531669A (en) * 2022-01-14 2022-05-24 山东师范大学 Task unloading method and system based on vehicle edge calculation
CN114363857A (en) * 2022-03-21 2022-04-15 山东科技大学 Method for unloading edge calculation tasks in Internet of vehicles
CN114363857B (en) * 2022-03-21 2022-06-24 山东科技大学 Method for unloading edge calculation tasks in Internet of vehicles


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination