CN114598702A - VR (virtual reality) service unmanned aerial vehicle edge calculation method based on deep learning - Google Patents


Info

Publication number
CN114598702A
Authority
CN
China
Prior art keywords
rendering
unmanned aerial vehicle
representing
drone
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210172797.8A
Other languages
Chinese (zh)
Inventor
丁晟杰
刘娟
谢玲富
屈龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ningbo University
Original Assignee
Ningbo University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ningbo University filed Critical Ningbo University
Priority to CN202210172797.8A priority Critical patent/CN114598702A/en
Publication of CN114598702A publication Critical patent/CN114598702A/en
Pending legal-status Critical Current

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00: Network arrangements or protocols for supporting network services or applications
    • H04L 67/01: Protocols
    • H04L 67/10: Protocols in which an application is distributed across nodes in the network
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/08: Learning methods
    • G06N 7/00: Computing arrangements based on specific mathematical models
    • G06N 7/01: Probabilistic graphical models, e.g. probabilistic networks
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT]
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management


Abstract

The invention provides a deep-learning-based edge computing method for VR (virtual reality) service drones, relating to the field of drone virtual-reality technology, and comprising the following steps. S1: rendering is performed in a drone MEC system through a preset VR rendering mode, where the drone MEC system comprises one drone and a plurality of VR devices. S2: the delay and energy consumption of each VR device during VR service rendering are obtained from step S1, and the VR service requested by a device is deemed complete when its rendering delay does not exceed a set value; the VR service rendering completion rate over T time slots is then optimized through a preset optimization procedure, under the constraint that the total energy consumption of each VR device does not exceed a given threshold. With this method, the drone flight trajectory and the VR service rendering mode can be jointly optimized under the constraints of VR service characteristics and device energy, maximizing the rendering completion rate of VR services and improving the robustness of the system.

Description

VR service unmanned aerial vehicle edge calculation method based on deep learning
Technical Field
The invention relates to the technical field of drone virtualization, and in particular to a deep-learning-based edge computing method for VR (virtual reality) service drones.
Background
With the development and commercialization of 5G technology, the VR (Virtual Reality) applications it supports bring people brand-new technological life experiences. At present, simulation scenes are one of the key applications of VR technology; their implementation involves diverse foreground interaction information and rich background environment rendering information, which requires the VR device to have sufficient energy, memory, and computing resources, and to guarantee real-time processing so as to improve the user's immersive quality of experience. However, as VR service data traffic grows significantly, the limited computing power of portable VR devices cannot complete the processing within the specified delay threshold, making it difficult to meet users' quality-of-experience requirements. To address this challenge, edge computing servers are deployed at the edge of the wireless network, bringing computing resources closer to the VR device and providing an offloaded computing service that assists the VR device in completing the rendering process in real time.
The unmanned aerial vehicle is widely applied by virtue of the characteristics of high flexibility, low deployment cost and the like, and the unmanned aerial vehicle carrying the edge computing platform, namely the Mobile Edge Computing (MEC) unmanned aerial vehicle is deployed in a wireless network, so that the deployment cost of network fixed infrastructure can be saved, and the movable high-performance computing power is provided for VR users as required.
In existing research, it has been proposed to use a drone as a mobile computing server to help users complete computing tasks, jointly optimizing resource scheduling and the drone trajectory to minimize the total weighted energy consumption of the drone and the users; however, the delay sensitivity of the computing tasks is not considered, so such schemes cannot be applied to VR services. For the ultra-large computation volume and ultra-low delay of VR services, other work studies the content delivery problem of VR users, using caching to improve VR system performance and trading off communication, computation, and caching. However, because of the diversity of foreground interaction information, caching part of the rendered background cannot fundamentally solve the real-time requirement of rendering processing.
Disclosure of Invention
The problem to be solved by the invention is how to perform joint optimization on the flight path of the unmanned aerial vehicle and the VR service rendering mode under the constraint of VR service characteristics and equipment energy, so that the rendering completion rate of VR service is maximized and the robustness of the system is improved.
In order to solve the problems, the invention provides a VR service unmanned aerial vehicle edge calculation method based on deep learning, which comprises the following steps:
s1: rendering in an unmanned aerial vehicle MEC system through a preset VR rendering mode, wherein the unmanned aerial vehicle MEC system comprises an unmanned aerial vehicle and a plurality of VR devices;
s2: obtaining delay and energy consumption of the VR device in VR service rendering according to the step S1, and determining that the VR service requested by the VR device is finished when the rendering delay does not exceed a set value; optimizing VR service rendering completion rate in T time slots through a preset optimization process, wherein the constraint condition is that the total energy consumption of each VR device is less than or equal to a given threshold;
s3: modeling the preset optimization flow through a Markov decision process, wherein the state of the drone MEC system comprises the energy of each user's VR device and the position of the drone, the actions taken by the drone MEC system comprise selecting the drone flight trajectory and the rendering mode, and an expected optimal policy is obtained through the MDP optimization objective.
In the method, the drone MEC system for VR service consists of one drone and a plurality of VR devices. Assume every time slot in the system has the same length T_max, the drone's horizontal position coordinates are l(t) = [x(t), y(t)], its flying height is H, and the position of each VR device is c_n = [x_n, y_n]. When the distance from a VR device to the drone does not exceed the coverage radius R, i.e. ||l(t) - c_n|| <= R, the drone can serve that device. Here b_n(t) ∈ {0,1} denotes the association state of device n with the drone: 1 if associated, 0 otherwise. Three VR rendering modes are provided to the user (a local rendering mode, a remote rendering mode, and a joint local-and-remote rendering mode), plus a non-rendering mode in which the VR device does not render the requested VR task. In the non-rendering mode the rendering delay is set to

T_n(t) = 10·T_max,

i.e. 10 times the delay threshold of the VR task, and the energy consumption of the VR device is zero, E_n(t) = 0.
Under the constraints of VR service characteristics and VR device energy, the drone flight trajectory and the VR rendering mode are jointly optimized to maximize the rendering completion rate of VR tasks. The problem is modeled as a Markov decision process; within the deep reinforcement learning framework, a twin delayed deep deterministic policy gradient (TD3) algorithm schedules the drone and selects the VR rendering mode, finding an optimal policy that meets the demands of randomly arriving VR services.
Further, the preset VR rendering mode in step S1 includes a local rendering mode, a remote rendering mode, and a local and remote joint rendering mode.
Further, the local rendering mode renders both foreground interaction information and background environment information on each VR device, and the time to complete rendering in a time slot is expressed as:

T_n^loc(t) = μ_n (d_n^fore(t) + d_n^back(t)) / γ_n,

where t denotes the time slot, n denotes the VR device, γ_n is the computing power of VR device n, d_n^fore(t) is the rendering data (in bits) of the foreground interaction information required by device n to generate the service in slot t, d_n^back(t) is the rendering data (in bits) of the background environment information required by device n in slot t, and μ_n is the number of CPU cycles device n needs to render one bit of data;
the energy consumed to complete rendering in a slot is expressed as:

E_n^loc(t) = κ_n (γ_n)^2 μ_n (d_n^fore(t) + d_n^back(t)),

where κ_n is a constant (the effective switched-capacitance coefficient of the device's chip).
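As an illustrative sketch (not part of the claimed method), the local rendering model above can be written directly in Python. All numeric values here are hypothetical placeholders; the energy expression assumes the standard MEC model of energy = constant × frequency² × CPU cycles.

```python
# Local rendering sketch: delay is (foreground + background bits) times the
# cycles-per-bit requirement, divided by the device CPU frequency; energy
# follows the usual kappa * f^2 * cycles model. All parameters hypothetical.

def local_rendering(d_fore_bits, d_back_bits, mu_cycles_per_bit,
                    gamma_hz, kappa=1e-28):
    cycles = (d_fore_bits + d_back_bits) * mu_cycles_per_bit
    delay_s = cycles / gamma_hz                  # T_loc, seconds
    energy_j = kappa * gamma_hz ** 2 * cycles    # E_loc, joules
    return delay_s, energy_j

# Example: 2 Mb foreground + 8 Mb background, 1000 cycles/bit, 2 GHz device
t_loc, e_loc = local_rendering(2e6, 8e6, 1000, 2e9)
# t_loc = 5.0 s, e_loc = 4.0 J
```

The same helper also shows why local rendering strains portable devices: delay scales linearly with the data volume while energy scales with the square of the CPU frequency.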
Further, in the remote rendering mode the drone performs a process comprising the following steps:
s11: the VR device acquires the foreground interaction information and background environment information and transmits them to the drone;
s12: the drone MEC system renders the foreground interaction information and background environment information;
s13: the drone compresses and encodes the rendered information and transmits it to the user's VR device;
s14: the VR device receives, decodes, and applies the information.
Further, the time to complete rendering in the remote rendering mode is expressed as:

T_n^rem(t) = t_n^up(t) + t_n^mec(t) + t_n^com(t) + t_n^down(t) + t_n^dec(t),

where t_n^up(t) is the time required for the VR device to upload the foreground interaction information to the drone, t_n^mec(t) is the rendering processing time of the drone MEC system, t_n^com(t) is the drone's encoding and compression time, t_n^down(t) is the time required for the drone to transmit the rendered information to the user's VR device, t_n^dec(t) is the time the VR device requires for decoding, and n denotes the VR device;
the energy consumed to complete rendering in the remote rendering mode is expressed as:

E_n^rem(t) = e_n^up(t) + e_n^dec(t),

where e_n^up(t) is the energy the user's VR device consumes uploading the foreground interaction information to the drone, and e_n^dec(t) is the energy the VR device consumes in decoding;
in uplink transmission, the foreground interaction information is transmitted in the Sub-6 GHz band, so the received signal-to-noise ratio Γ_n^up(t) of the VR device at the drone is:

Γ_n^up(t) = P_n^up g_n(t) h_n^up(t) / (B_n(t) N_0),

where P_n^up is the transmission power of the VR device, B_n(t) is the bandwidth of the VR device in the slot, g_n(t) is the small-scale fading channel gain between the VR device and the drone, and h_n^up(t) is the large-scale fading effect between the VR device and the drone. The large-scale fading effect is a function of distance,

h_n^up(t) = β_up d_n(t)^(-α_up),

where β_up is a constant related to the VR device's carrier frequency, α_up is the path loss exponent, d_n(t) is the distance between the drone and the VR device in the slot, and N_0 is the white noise power;
the VR devices share the uplink bandwidth by frequency division multiplexing; according to the Shannon capacity formula, the upload data rate r_n^up(t) of the VR device in slot t is:

r_n^up(t) = B_n(t) log2(1 + Γ_n^up(t)),

where B_up is the channel bandwidth and N_ass(t) = Σ_n b_n(t) is the total number of VR devices associated with the drone in slot t;
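The uplink link budget above can be sketched numerically. This is an illustration under assumed parameter values (transmit power, path-loss constant, noise density, and an equal split of the shared bandwidth among associated devices are all hypothetical choices, not values from the patent):

```python
import math

# Uplink sketch: large-scale path loss beta * d^(-alpha), received SNR,
# and the Shannon rate over the per-device share of the uplink bandwidth.
# All parameter values below are illustrative assumptions.

def uplink_rate(p_tx_w, g_small, beta_up, alpha_up, dist_m,
                b_total_hz, n_assoc, n0_w_per_hz):
    b_n = b_total_hz / n_assoc                   # per-device bandwidth (equal split assumed)
    path_loss = beta_up * dist_m ** (-alpha_up)  # large-scale fading
    snr = p_tx_w * g_small * path_loss / (b_n * n0_w_per_hz)
    return b_n * math.log2(1.0 + snr)            # bits per second

# 100 mW device, 10 MHz shared by 2 devices, 100 m from the drone
r_up = uplink_rate(0.1, 1.0, 1e-4, 2.0, 100.0, 10e6, 2, 1e-17)
```

As expected from the model, moving the drone closer to a device raises its path-loss term and hence its achievable upload rate, which is exactly the coupling the trajectory optimization exploits.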
the uplink transmission delay is:

t_n^up(t) = d_n^fore(t) / r_n^up(t),

where d_n^fore(t) is the size of the foreground interaction information the user's VR device transmits to the drone;

the uplink transmission energy consumption is:

e_n^up(t) = P_n^up · t_n^up(t);
the drone rendering processing delay is

t_n^mec(t) = μ_uav d_n^ren(t) / γ_n^uav(t),

where γ_uav is the computing power of the drone, μ_uav is the number of CPU cycles the drone needs to render one bit of data, d_n^ren(t) is the rendering content, and γ_n^uav(t) is the computing resource allocated to each associated VR device;
in encoding compression and downlink transmission, the delay the drone MEC system requires to compress the rendered information is denoted t_n^com(t), where d_n^com(t) is the size of the compressed data information;
in data decoding, the decoding delay of the VR device receiving the encoded rendering information transmitted by the drone is denoted t_n^dec(t); the data decoded by the VR device is the data information obtained over the downlink, and the energy the VR device consumes in decoding is denoted e_n^dec(t).
In the above method, it is assumed that the downlink wireless channel is a line-of-sight link, small-scale fading is ignored, and the downlink transmission rate and the downlink transmission delay are respectively expressed as:
Figure BDA0003519087250000058
the received signal-to-noise ratio of the VR device is:
Figure BDA0003519087250000059
Puavrepresenting the transmission power of the drone, hn(t) represents the antenna gain corresponding to beamforming,
Figure BDA00035190872500000510
represents the path loss, betadownRepresents a frequency dependent constant, adownRepresenting the path loss exponent.
Further, the joint local-and-remote rendering mode renders the foreground interaction information on the VR device and renders the background environment on the drone; its rendering completion time is expressed as:

T_n^joint(t) = max{ t_n^fore(t), t_n^back(t) } + t_n^int(t),

where t_n^fore(t) is the time required for rendering locally, obtained by substituting the foreground interaction data d_n^fore(t) into the local rendering calculation; t_n^back(t) is the time required for remote rendering, obtained by substituting the background environment data d_n^back(t) into the remote rendering calculation; and t_n^int(t) is the delay for the VR device to render and integrate the foreground interaction information and background environment information. The total energy consumption E_n^joint(t) sums the corresponding local and remote energy terms together with e_n^int(t), the energy the VR device consumes integrating the foreground and background rendering.
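A minimal sketch of the joint mode's timing, assuming (as the description implies) that the local foreground branch and the remote background branch run in parallel, so total delay is the slower branch plus the on-device integration step. The function name and sample delays are illustrative:

```python
# Joint rendering sketch: foreground rendered locally and background rendered
# remotely in parallel; total delay = slower branch + integration time.
# Parallel execution of the two branches is an assumption of this sketch.

def joint_rendering_delay(t_fore_local, t_back_remote, t_integrate):
    return max(t_fore_local, t_back_remote) + t_integrate

t_joint = joint_rendering_delay(0.8, 1.2, 0.1)
# slower branch (1.2 s) plus 0.1 s integration -> 1.3 s
```

The max term is why the mode helps: each branch only carries part of the workload, so the slower branch is still shorter than rendering everything in one place.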
Further, the delay and energy consumption in step S2 are those of the rendering mode selected for device n, denoted T_n(t) and E_n(t), taking the local, remote, or joint expressions above according to the chosen mode o_n(t).

A binary parameter δ_n(t) indicates whether the VR service rendering of the VR device is completed:

δ_n(t) = I{T_n(t) ≤ T_max}.

η_n(t) ∈ {0,1} indicates whether the VR device issues a VR service request in slot t: η_n(t) = 1 denotes a request, η_n(t) = 0 no request. The rendering completion rate of all VR devices in each slot is then expressed as:

φ(t) = Σ_n η_n(t) δ_n(t) / Σ_n η_n(t).

The preset optimization flow is expressed as:

max_{L,O} (1/T) Σ_{t=1}^{T} φ(t)  s.t. Σ_{t=1}^{T} E_n(t) ≤ E_th for every n,

where the drone trajectory L = [l(1), …, l(T)] and the selection of user rendering modes O = [o_1(1), …, o_N(1), …, o_1(T), …, o_N(T)] are the optimization variables, the optimization objective is the VR service rendering completion rate over T time slots, and the constraint is that the total energy consumption of each VR device does not exceed a given threshold E_th.

In the above flow, I{x} denotes an indicator function that equals 1 when x is true and 0 otherwise. A VR rendering task requested by a VR device is deemed complete when its rendering delay does not exceed T_max.
Further, the Markov decision process in step S3 models the preset optimization flow. The users' VR device energy state is expressed as:

e(t) = [e_1(t), …, e_n(t), …, e_N(t)] ∈ [0, E_max]^N,

and the drone position is expressed as:

l(t) = [x(t), y(t)],

where E_max is the initial energy of each VR device.

The flight action of the drone is expressed as:

d(t) = (k_1(t), k_2(t)), with k_1(t) ∈ [0, 2π] and k_2(t) ∈ [0, D_max],

where k_1(t) is the drone's flight direction, k_2(t) its flight distance, and D_max the maximum distance the drone can fly in each slot. The rendering-mode action is expressed as:

O(t) = [o_1(t), …, o_N(t)].

The state of the drone MEC system updates as:

l(t+1) = l(t) + [k_2(t)·cos(k_1(t)), k_2(t)·sin(k_1(t))],

and, according to the selected preset VR rendering mode, the energy of each user's VR device updates as:

e_n(t+1) = e_n(t) - E_n(t).
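The state transition above can be sketched as a short Python step function. The clipping of the flight distance to D_max and the specific numbers are illustrative assumptions:

```python
import math

# State-transition sketch: the drone moves distance k2 (clipped to d_max)
# in direction k1, and each device's remaining energy drops by the energy
# spent that slot. Function and parameter names are illustrative.

def step(pos, k1, k2, d_max, energies, spent):
    k2 = min(k2, d_max)
    new_pos = (pos[0] + k2 * math.cos(k1), pos[1] + k2 * math.sin(k1))
    new_energies = [e - s for e, s in zip(energies, spent)]
    return new_pos, new_energies

pos, en = step((0.0, 0.0), k1=math.pi / 2, k2=10.0, d_max=20.0,
               energies=[100.0], spent=[3.0])
# flying 10 m due "north": pos ≈ (0.0, 10.0); energy 100 J -> 97 J
```

Each RL step therefore couples the trajectory decision (k_1, k_2) with the rendering-mode decision through the energy bookkeeping.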
In the method, the Markov decision process modeling the preset optimization flow further comprises a reward function, used to learn the expected optimal policy through interaction with the environment. Given the state s(t) and action a(t) in slot t, the reward is defined as the VR service rendering completion degree plus a penalty term:

r(t) = φ(t) + g(t),

where g(t) guides the drone MEC system toward a better policy and a larger long-term return: a penalty is added when an action makes the drone MEC system violate the constraints, i.e. g(t) < 0, and g(t) = 0 if the constraints are satisfied. The long-term average reward the system obtains under policy π can be expressed as:

J(π) = E_π [ Σ_t γ^t r_t(s_t, a_t) ],

where E_π denotes the expectation under policy π, γ ∈ [0,1] is the discount factor, and r_t(s_t, a_t) is the instantaneous reward. The MDP optimization goal is to find the long-term optimal policy π* that maximizes the long-term return.
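The discounted long-term return can be sketched for a single sample trajectory of per-slot rewards (completion rate plus penalty). The reward values below are illustrative placeholders:

```python
# Discounted-return sketch: G = sum over t of gamma^t * r_t, computed by
# backward recursion g = r + gamma * g. Reward values are illustrative.

def discounted_return(rewards, gamma):
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g

j = discounted_return([1.0, 0.5, 0.25], gamma=0.5)
# 1.0 + 0.5*0.5 + 0.25*0.25 = 1.3125
```

In training, this quantity is estimated by the critic networks rather than computed from full trajectories.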
The technical scheme adopted by the invention has the following beneficial effects:
through Markov decision process modeling and the deep reinforcement learning TD3 (twin delayed deep deterministic policy gradient) algorithm, the invention jointly optimizes the drone flight trajectory and the preset VR service rendering mode, configures the rendering workload borne by the VR devices and the drone, and reasonably allocates the drone's computing resources to the covered VR users over the long term, maximizing the utilization of computing resources and therefore the VR service rendering completion rate.
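As an illustrative sketch of the TD3 idea named above (not the patent's trained networks), the defining trick is the clipped double-Q critic target: the bootstrap value uses the minimum of two target-critic estimates, which curbs the overestimation bias of DDPG. All numbers here are placeholders:

```python
# TD3 critic-target sketch: y = r + gamma * (1 - done) * min(Q1', Q2').
# q1_next / q2_next stand in for the two target critics' estimates of the
# next state-action value; all values are illustrative placeholders.

def td3_critic_target(reward, gamma, q1_next, q2_next, done):
    return reward + gamma * (0.0 if done else 1.0) * min(q1_next, q2_next)

y = td3_critic_target(reward=1.0, gamma=0.9, q1_next=5.0, q2_next=4.0,
                      done=False)
# 1.0 + 0.9 * min(5.0, 4.0) = 4.6
```

Together with delayed actor updates and target-policy smoothing, this is what makes TD3 converge more stably than DDPG, consistent with the comparison shown in fig. 3.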
Drawings
Fig. 1 is a first flowchart of a VR service unmanned aerial vehicle edge calculation method based on deep learning according to an embodiment of the present invention;
fig. 2 is a second flowchart of the VR service unmanned aerial vehicle edge calculation method based on deep learning according to an embodiment of the present invention;
fig. 3 is a schematic diagram comparing convergence of the TD3 algorithm and the DDPG algorithm in the VR service unmanned aerial vehicle edge calculation method based on deep learning according to the embodiment of the present invention;
fig. 4 is a schematic diagram of the VR service unmanned aerial vehicle edge calculation method based on deep learning according to the embodiment of the present invention, based on the TD3 algorithm, in different rendering modes;
fig. 5 is a schematic diagram illustrating a relationship between VR service completion rate and unmanned aerial vehicle computing power in a VR service unmanned aerial vehicle edge computing method based on deep learning according to an embodiment of the present invention;
fig. 6 is a schematic diagram illustrating a relationship between VR service completion rate and user number in the VR service unmanned aerial vehicle edge calculation method based on deep learning according to the embodiment of the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in detail below.
The following are specific embodiments of the present invention and are further described with reference to the drawings, but the present invention is not limited to these embodiments.
Examples
The embodiment provides a VR service unmanned aerial vehicle edge calculation method based on deep learning, as shown in fig. 1 and 2, the method includes the steps of:
s1: rendering in an unmanned aerial vehicle MEC system through a preset VR rendering mode, wherein the unmanned aerial vehicle MEC system comprises an unmanned aerial vehicle and a plurality of VR devices;
s2: obtaining delay and energy consumption of the VR device in VR service rendering according to the step S1, and determining that the VR service requested by the VR device is finished when the rendering delay does not exceed a set value; optimizing VR service rendering completion rate in T time slots through a preset optimization process, wherein the constraint condition is that the total energy consumption of each VR device is less than or equal to a given threshold;
s3: the method comprises the steps of modeling a preset optimization flow through a Markov decision process, wherein the state of an unmanned aerial vehicle MEC system comprises the energy consumption of VR equipment of a user and the position of the unmanned aerial vehicle, taking actions by the unmanned aerial vehicle MEC system comprise selecting the flight track and the rendering mode of the unmanned aerial vehicle, and obtaining an expected optimal strategy through an MDP optimization target.
Specifically, the drone MEC system for VR service consists of one drone and a plurality of VR devices. Assume every time slot in the system has the same length T_max, the drone's horizontal position coordinates are l(t) = [x(t), y(t)], its flying height is H, and the position of each VR device is c_n = [x_n, y_n]. When the distance from a VR device to the drone does not exceed the coverage radius R, i.e. ||l(t) - c_n|| <= R, the drone can serve that device. Here b_n(t) ∈ {0,1} denotes the association state of device n with the drone: 1 if associated, 0 otherwise. Three VR rendering modes are provided to the user (a local rendering mode, a remote rendering mode, and a joint local-and-remote rendering mode), plus a non-rendering mode in which the VR device does not render the requested VR task. In the non-rendering mode the rendering delay is set to

T_n(t) = 10·T_max,

i.e. 10 times the delay threshold of the VR task, and the energy consumption of the VR device is zero, E_n(t) = 0. Under the constraints of VR service characteristics and VR device energy, the drone flight trajectory and the VR rendering mode are jointly optimized to maximize the rendering completion rate of VR tasks. The problem is modeled as a Markov decision process; within the deep reinforcement learning framework, a twin delayed deep deterministic policy gradient (TD3) algorithm schedules the drone and selects the VR rendering mode, finding an optimal policy that meets the demands of randomly arriving VR services.
The preset VR rendering mode in step S1 includes a local rendering mode, a remote rendering mode, and a local and remote joint rendering mode.
Wherein, the local rendering mode renders both foreground interaction information and background environment information on each VR device, and the time to complete rendering in a time slot is expressed as:

T_n^loc(t) = μ_n (d_n^fore(t) + d_n^back(t)) / γ_n,

where t denotes the time slot, n denotes the VR device, γ_n is the computing power of VR device n, d_n^fore(t) is the rendering data (in bits) of the foreground interaction information required by device n to generate the service in slot t, d_n^back(t) is the rendering data (in bits) of the background environment information required by device n in slot t, and μ_n is the number of CPU cycles device n needs to render one bit of data;

the energy consumed to complete rendering in a slot is expressed as:

E_n^loc(t) = κ_n (γ_n)^2 μ_n (d_n^fore(t) + d_n^back(t)),

where κ_n is a constant (the effective switched-capacitance coefficient of the device's chip).
Referring to fig. 2, in the remote rendering mode the drone performs the following steps:
s11: the VR device acquires the foreground interaction information and background environment information and transmits them to the drone;
s12: the drone MEC system renders the foreground interaction information and background environment information;
s13: the drone compresses and encodes the rendered information and transmits it to the user's VR device;
s14: the VR device receives, decodes, and applies the information.
Wherein, the time to complete rendering in the remote rendering mode is expressed as:

T_n^rem(t) = t_n^up(t) + t_n^mec(t) + t_n^com(t) + t_n^down(t) + t_n^dec(t),

where t_n^up(t) is the time required for the VR device to upload the foreground interaction information to the drone, t_n^mec(t) is the rendering processing time of the drone MEC system, t_n^com(t) is the drone's encoding and compression time, t_n^down(t) is the time required for the drone to transmit the rendered information to the user's VR device, t_n^dec(t) is the time the VR device requires for decoding, and n denotes the VR device;
the energy consumed to complete rendering in the remote rendering mode is expressed as:

E_n^rem(t) = e_n^up(t) + e_n^dec(t),

where e_n^up(t) is the energy the user's VR device consumes uploading the foreground interaction information to the drone, and e_n^dec(t) is the energy the VR device consumes in decoding;
In uplink transmission, the foreground interaction information is transmitted in the Sub-6 GHz band, so the received signal-to-noise ratio Γ_n^up(t) of VR device n at the drone is:

Γ_n^up(t) = P_n · g_n(t) · h_n^up(t) / (B_n(t) · N_0)

where P_n represents the transmission power of the VR device, B_n(t) the bandwidth of the VR device in the time slot, g_n(t) the small-scale fading channel gain between the VR device and the drone, and h_n^up(t) the large-scale fading effect between the VR device and the drone.

The large-scale fading effect is a function of distance:

h_n^up(t) = β_up · d_n(t)^(−α_up)

where β_up represents a constant related to the VR device's frequency, α_up the path loss exponent, d_n(t) the distance between the drone and the VR device in the time slot, and N_0 the white noise power.
The VR devices share the uplink bandwidth by frequency division multiplexing; by the Shannon capacity formula, the upload data rate r_n^up(t) of the VR device in time slot t is:

r_n^up(t) = (B_up / |N(t)|) · log₂(1 + Γ_n^up(t))

where B_up represents the channel bandwidth and |N(t)| the number of VR devices associated with the drone in time slot t.
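The uplink model above can be sketched in a few lines. This is a sketch under the assumed closed forms Γ = P·g·β·d^(−α)/(B·N₀) and r = (B_up/|N(t)|)·log₂(1 + Γ); function and parameter names are illustrative, not the patent's.

```python
import math

def uplink_snr(p_n, g_n, d_n, beta_up, alpha_up, bandwidth_hz, n0):
    """Received SNR at the drone: transmit power times small-scale gain g_n and
    large-scale (distance) gain beta*d^-alpha, over the noise power in the band."""
    h_large = beta_up * d_n ** (-alpha_up)      # large-scale fading / path loss
    return p_n * g_n * h_large / (bandwidth_hz * n0)

def uplink_rate(b_up_hz, n_devices, snr):
    """Shannon rate with the uplink band shared equally among n_devices via FDM."""
    return (b_up_hz / n_devices) * math.log2(1.0 + snr)
```

For instance, two devices sharing a 10 Hz band at SNR 3 each get 5·log₂(4) = 10 bit/s under this model.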
The uplink transmission delay is:

t_n^up(t) = D_n^fore(t) / r_n^up(t)

where D_n^fore(t) represents the size of the foreground interaction information transmitted by the user's VR device to the drone.

The uplink transmission energy consumption is:

e_n^up(t) = P_n · t_n^up(t)
The drone's rendering processing delay is:

t_n^mec(t) = μ_uav · D_n^mec(t) / f_n(t)

where γ_uav represents the computing capability of the drone, μ_uav the CPU cycles required for the drone to render one bit of data, D_n^mec(t) the rendered content, and f_n(t) the computing resource allocated to each associated VR device.
In encoding/compression and downlink transmission, the drone MEC system compresses the rendered information with delay t_n^com(t), producing the compressed data D_n^com(t).

In data decoding, the decoding delay of the VR device receiving the encoded rendering information transmitted by the drone is t_n^dec(t); the data decoded by the VR device is the data information received on the downlink, and the energy consumed by the VR device in decoding is e_n^dec(t).
Specifically, assuming the downlink wireless channel is a line-of-sight link and ignoring small-scale fading, the downlink transmission rate and downlink transmission delay are respectively expressed as:

r_n^down(t) = B_down · log₂(1 + Γ_n^down(t)),  t_n^down(t) = D_n^com(t) / r_n^down(t)

The received signal-to-noise ratio of the VR device is:

Γ_n^down(t) = P_uav · h_n(t) · ℓ_n(t) / (B_down · N_0)

where P_uav represents the transmission power of the drone, h_n(t) the antenna gain corresponding to beamforming, and ℓ_n(t) = β_down · d_n(t)^(−α_down) the path loss, with β_down a frequency-related constant and α_down the path loss exponent.
The local and remote joint rendering mode renders the foreground interaction information on the VR device and the background environment on the drone. Its rendering completion time is expressed as:

T_n^joint(t) = max(t_n^loc(t), t_n^rem(t)) + t_n^int(t)

where t_n^loc(t) represents the time required for local rendering, obtained by substituting the foreground interaction data D_n^fore(t) into the local rendering formula; t_n^rem(t) represents the time required for remote rendering, obtained by substituting the background environment data D_n^back(t) for D_n^fore(t) in the remote rendering formula; and t_n^int(t) represents the delay for the VR device to integrate the rendered foreground interaction information and background environment information. The total energy consumption is expressed as:

E_n^joint(t) = e_n^loc(t) + e_n^rem(t) + e_n^int(t)

where e_n^int(t) represents the energy consumed by the VR device to integrate the foreground interaction information and background environment information.
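The joint mode's max-plus-integration structure can be sketched directly (illustrative names; the parallel local/remote branches follow the description above):

```python
def joint_rendering_delay(t_local_fore, t_remote_back, t_integrate):
    """Foreground is rendered locally while the background is rendered on the
    drone in parallel; completion waits for the slower branch, then the
    device integrates the two streams."""
    return max(t_local_fore, t_remote_back) + t_integrate

def joint_rendering_energy(e_local_fore, e_remote_back, e_integrate):
    """Total energy: both branches plus the integration step on the device."""
    return e_local_fore + e_remote_back + e_integrate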
The delay and energy consumption in step S2 are expressed as:

T_n(t) ∈ {T_n^loc(t), T_n^rem(t), T_n^joint(t)},  E_n(t) ∈ {E_n^loc(t), E_n^rem(t), E_n^joint(t)}

according to the rendering mode o_n(t) selected for VR device n.

A binary parameter δ_n(t) indicates whether the VR service rendering of the VR device is completed: δ_n(t) = 1 if the rendering delay T_n(t) does not exceed the set value, and δ_n(t) = 0 otherwise.

η_n(t) ∈ {0, 1} indicates whether the VR device has a VR service request in time slot t: η_n(t) = 1 indicates a request, η_n(t) = 0 indicates no request. The rendering completion rate of all VR devices in each time slot is then:

c(t) = Σ_n δ_n(t) / Σ_n η_n(t)

The preset optimization flow is expressed as:

max_{L,O} (1/T) Σ_{t=1}^{T} c(t)  s.t. Σ_{t=1}^{T} E_n(t) ≤ E_th, ∀n

where the drone trajectory L = [l(1), …, l(T)] and the user rendering mode selection O = [o_1(1), …, o_N(1), …, o_1(T), …, o_N(T)] are the optimization variables, the optimization objective is the VR service rendering completion rate over T time slots, and the constraint is that the total energy consumption of each VR device is less than or equal to a given threshold E_th.
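The completion-rate objective and the per-device energy constraint can be sketched as follows (names are illustrative; δ and η are the binary indicators defined in the text):

```python
def completion_rate(delta, eta):
    """Fraction of requesting devices (eta[n] == 1) whose rendering finished
    in the slot (delta[n] == 1); defined as 1.0 when nothing was requested."""
    requests = sum(eta)
    if requests == 0:
        return 1.0
    return sum(d for d, e in zip(delta, eta) if e) / requests

def energy_feasible(per_device_energy, e_th):
    """Constraint check: each device's total energy over T slots stays within E_th."""
    return all(sum(device_slots) <= e_th for device_slots in per_device_energy)
```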
In step S3, the Markov decision process models the preset optimization flow. The energy of the users' VR devices is represented as:

e(t) = [e_1(t), …, e_n(t), …, e_N(t)] ∈ [0, E_max]^N

and the drone position as:

l(t) = [x(t), y(t)]

where E_max is the starting energy of each VR device.

The flight action of the drone is represented as:

d(t) = (k_1(t), k_2(t)),  k_1(t) ∈ [0, 2π],  k_2(t) ∈ [0, D_max]

where k_1(t) indicates the drone's flight direction, k_2(t) its flight distance, and D_max the maximum distance the drone can fly in each time slot. The rendering mode is represented as:

O(t) = [o_1(t), …, o_N(t)]

The state update of the drone MEC system is represented as:

l(t+1) = l(t) + [k_2(t)·cos(k_1(t)), k_2(t)·sin(k_1(t))]

According to the selected preset VR rendering mode, the energy update of the user's VR device is represented as:

e_n(t+1) = e_n(t) − E_n(t).
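The MDP transition above can be sketched as a minimal environment step (the clipping of k_2 to [0, D_max] is an assumption consistent with the stated action range; names are illustrative):

```python
import math

def drone_step(x, y, k1, k2, d_max=50.0):
    """Update the drone position from the flight action:
    l(t+1) = l(t) + [k2*cos(k1), k2*sin(k1)], with k2 clipped to [0, d_max]."""
    k2 = min(max(k2, 0.0), d_max)
    return x + k2 * math.cos(k1), y + k2 * math.sin(k1)

def device_energy_step(e_n, consumed):
    """Deduct the slot's rendering/transmission energy E_n(t) from device n's budget."""
    return e_n - consumed
```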
Specifically, the Markov decision process modeling the preset optimization flow further includes a reward function, which is used to obtain the desired optimal policy through interaction with the environment. Given the state s(t) and action a(t) in time slot t, the reward function is defined as the VR service rendering completion degree plus a penalty:

r(t) = c(t) + g(t)

where g(t) guides the drone MEC system toward a better policy and increases the long-term return: when an action causes the drone MEC system to violate the constraint, a penalty g(t) < 0 is added; if the constraint is satisfied, g(t) = 0. The long-term average reward obtained by the system under policy π can be expressed as:

J(π) = E_π[ Σ_{t=1}^{T} γ^(t−1) · r_t(s_t, a_t) ]

where E_π denotes the expectation under policy π, γ ∈ [0, 1] is the discount factor, and r_t(s_t, a_t) is the immediate reward. The MDP optimization objective is to find the long-term optimal policy π* that maximizes the long-term return.
Specifically, the MDP can be solved by policy iteration or value iteration, whose computational complexity depends on the scale of the problem, i.e., the size of the state and action spaces. Since the rendering problem of the drone MEC system studied in the present invention is large-scale, the deep reinforcement learning TD3 (Twin Delayed Deep Deterministic Policy Gradient) algorithm is adopted. The algorithm proceeds as follows.

First, initialize the parameters φ_1, φ_2 of the Critic networks Q_{φ1}, Q_{φ2} and the parameter θ of the Actor network π_θ; initialize the target networks φ'_1 ← φ_1, φ'_2 ← φ_2, θ' ← θ; initialize the experience replay pool B. Then, for each episode:

Initialize the drone location l(t) = [x(t), y(t)], the device locations c_n = [x_n, y_n], and the device energies e_n(t), n = 1, …, N. For each time slot t = 1, …, T:

Select the action a_t = π_θ(s_t) + ε with exploration noise ε ∼ N(0, ξ), obtaining the drone's flight direction k_1(t), flight distance k_2(t), and rendering mode O(t). Execute action a_t and interact with the environment to obtain the reward r_t; store the experience sample (s_t, a_t, r_t, s_{t+1}) in the replay pool B. Randomly sample a mini-batch of M experience samples from the replay pool. Compute the expected action value with the Critic target networks:

y = r + γ · min_{i=1,2} Q_{φ'_i}(s', ã)

where ã ← π_{θ'}(s') + oa, with oa ∼ clip(N(0, ξ), −c, c) a clipped normal noise. Update the Critic network parameters:

φ_i ← argmin_{φ_i} (1/M) Σ (y − Q_{φ_i}(s, a))²

Every d steps, update the Actor network parameter θ with the policy gradient:

∇_θ J(θ) = (1/M) Σ ∇_a Q_{φ1}(s, a)|_{a=π_θ(s)} · ∇_θ π_θ(s)

and soft-update the target networks:

φ'_i ← τφ_i + (1 − τ)φ'_i,  θ' ← τθ + (1 − τ)θ'

where τ is the soft update rate factor: the larger τ is, the faster the Critic parameters φ_i and policy parameter θ are transferred to the target parameters φ'_i and θ'. These steps are repeated until the episode loop ends.
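The three TD3-specific ingredients above (clipped target-policy noise, the min over twin critics, and soft target updates) can be sketched with scalar placeholders. This is an illustrative sketch, not the patent's implementation: in the full algorithm the critics and actor are neural networks, and the replay pool and delayed actor updates drive these pieces.

```python
import random

def clipped_noise(xi, c):
    """Target-policy smoothing noise: N(0, xi) clipped to [-c, c]."""
    return max(-c, min(c, random.gauss(0.0, xi)))

def td3_target(r, gamma, q1_next, q2_next, done=False):
    """Clipped double-Q target: y = r + gamma * min(Q1'(s', a~), Q2'(s', a~))."""
    return r + (0.0 if done else gamma * min(q1_next, q2_next))

def soft_update(online, target, tau):
    """theta' <- tau * theta + (1 - tau) * theta', applied elementwise."""
    return [tau * w + (1.0 - tau) * wt for w, wt in zip(online, target)]
```

Taking the min of the twin critics counteracts the overestimation bias of a single Q-network, which is the main reason TD3 converges more stably than DDPG in the comparison reported below.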
Specifically, because the energy and computing power of portable VR devices are extremely limited, rendering tasks cannot always be completed locally in real time. The method therefore adopts local rendering, remote rendering, and collaborative rendering modes to satisfy the requirements of different VR services, optimally configures the rendering tasks borne by the VR devices and the drone, and reasonably allocates the drone's computing resources to the covered VR users over the long term, maximizing computing resource utilization. The method studies the mobile computing resource management problem of the drone MEC platform, jointly optimizes the drone's movement trajectory and the service rendering mode, accounts for the network computing resource and device energy consumption constraints, and establishes a Markov decision process with the maximization of the VR user service rendering completion rate as the optimization objective.
Preferably, the target area is 300 m × 300 m, with 20 VR devices randomly distributed and 1 drone deployed as an aerial computing platform. Each VR device may request one of four VR rendering tasks, following a Bernoulli distribution with parameter p = 0.95.
As shown in fig. 3 to fig. 6, to verify the feasibility and effectiveness of the method of the present invention, simulation tests were performed; the results are as follows:
Referring to fig. 3, the convergence performance of two deep reinforcement learning algorithms is compared: the TD3 (Twin Delayed Deep Deterministic Policy Gradient)-based algorithm and the DDPG-based algorithm. The TD3-based algorithm converges faster than the DDPG algorithm and achieves a higher VR rendering completion rate.
Referring to fig. 4, the convergence speed of the present invention under different VR rendering mode selections is shown. When more VR rendering modes are available, convergence slows because the dimension of the action space increases significantly, but the VR service rendering completion rate improves. After convergence, the completion rates of the local-rendering and remote-rendering algorithms are almost the same as that of the algorithm proposed by the method.
Referring to fig. 5, VR rendering completion rate curves under different computing capabilities are presented. The results show that the proposed algorithm performs best among the three rendering options: it effectively schedules the overall computing resources of the drone and all VR devices to render VR tasks over the long term.
Referring to fig. 6, VR service rendering completion rates under different network scales are considered. The results show that the VR rendering completion rates of the three rendering schemes all decline as the number of VR devices increases. This is because, with limited communication and computing resources, the computing service available to each VR device shrinks significantly as more VR devices and their rendering tasks join. Nevertheless, the algorithm proposed by the present invention outperforms the other two rendering schemes.
Specifically, under the deep reinforcement learning framework, the method provides a solution based on the TD3 algorithm, which adapts and scales well to different user service characteristics and network sizes. Simulation results show that the method performs well in terms of VR user service rendering success rate, convergence time, and related metrics.
According to the method, under the constraints of VR service characteristics and device energy, the flight trajectory of the drone and the VR rendering mode are jointly optimized to maximize the rendering completion rate of VR tasks. The problem is modeled as a Markov decision process; under the deep reinforcement learning framework, the TD3 (Twin Delayed Deep Deterministic Policy Gradient) algorithm schedules the drone and selects the VR rendering mode, finding an optimal policy that satisfies randomly arriving VR service requests.
According to the method, the flight trajectory of the drone and the preset VR service rendering mode are optimized through Markov decision process modeling and the deep reinforcement learning twin delayed deep deterministic policy gradient algorithm. The rendering tasks borne by the VR devices and the drone are optimally configured, and the drone's computing resources are reasonably allocated to the covered VR users over the long term, maximizing computing resource utilization and thereby maximizing the VR service rendering completion rate.
Although the present disclosure has been described above, the scope of the present disclosure is not limited thereto. Those skilled in the art can make various changes and modifications without departing from the spirit and scope of the present disclosure, and such changes and modifications will fall within the scope of the present invention.

Claims (8)

1. A VR (virtual reality) service unmanned aerial vehicle edge calculation method based on deep learning is characterized by comprising the following steps:
s1: rendering in an unmanned aerial vehicle MEC system through a preset VR rendering mode, wherein the unmanned aerial vehicle MEC system comprises an unmanned aerial vehicle and a plurality of VR devices;
s2: obtaining delay and energy consumption of the VR device in VR service rendering according to the step S1, and determining that the VR service requested by the VR device is finished when the rendering delay does not exceed a set value; optimizing VR service rendering completion rate in T time slots through a preset optimization process, wherein the constraint condition is that the total energy consumption of each VR device is less than or equal to a given threshold;
S3: modeling the preset optimization flow through a Markov decision process, wherein the state of the drone MEC system comprises the energy of the users' VR devices and the position of the drone, the actions taken by the drone MEC system comprise selecting the flight trajectory and the rendering mode of the drone, and the desired optimal policy is obtained through the MDP optimization objective.
2. The deep learning based VR business drone edge computing method of claim 1, wherein the preset VR rendering modes in step S1 include a local rendering mode, a remote rendering mode, and a local and remote joint rendering mode.
3. The deep learning based VR service drone edge computation method of claim 2, wherein the local rendering mode simultaneously renders foreground interaction information and background environment information for each VR device, and the time to complete rendering in a time slot is expressed as:

T_n^loc(t) = μ_n · (D_n^fore(t) + D_n^back(t)) / γ_n

wherein t denotes a time slot, n denotes a VR device, γ_n denotes the computing capability of VR device n, D_n^fore(t) denotes the rendering data (in bits) of the foreground interaction information required by VR device n to generate the service in time slot t, D_n^back(t) denotes the rendering data (in bits) of the background environment information required by VR device n to generate the service in time slot t, and μ_n denotes the CPU cycles required by VR device n to render one bit of data;

the energy consumed to complete rendering in a time slot is expressed as:

E_n^loc(t) = κ_n · μ_n · (D_n^fore(t) + D_n^back(t)) · γ_n²

wherein κ_n is a constant.
4. The deep learning based VR service drone edge computation method of claim 2, wherein in the remote rendering mode the drone executes the following steps:
S11: the VR device acquires the foreground interaction information and background environment information and transmits them to the drone;
S12: the drone MEC system renders the foreground interaction information and background environment information;
S13: the drone compresses and encodes the rendered information and transmits it to the user's VR device;
S14: the VR device receives, decodes, and applies the information.
5. The deep learning based VR service drone edge computation method of claim 4, wherein the rendering completion time of the remote rendering mode is expressed as:

T_n^rem(t) = t_n^up(t) + t_n^mec(t) + t_n^com(t) + t_n^down(t) + t_n^dec(t)

wherein t_n^up(t) represents the time required for the VR device to upload foreground interaction information to the drone, t_n^mec(t) the rendering processing time of the drone MEC system, t_n^com(t) the drone's encoding and compression time, t_n^down(t) the time required for the drone to transmit the rendered information to the user's VR device, t_n^dec(t) the time required for decoding by the VR device, and n the VR device;

the energy consumed to complete rendering in the remote rendering mode is expressed as:

E_n^rem(t) = e_n^up(t) + e_n^dec(t)

wherein e_n^up(t) represents the energy consumed by the user's VR device to upload foreground interaction information to the drone, and e_n^dec(t) the energy consumed by the VR device in decoding;

in uplink transmission, the foreground interaction information is transmitted in the Sub-6 GHz band, so the received signal-to-noise ratio Γ_n^up(t) of the VR device at the drone is:

Γ_n^up(t) = P_n · g_n(t) · h_n^up(t) / (B_n(t) · N_0)

wherein P_n represents the transmission power of the VR device, B_n(t) the bandwidth of the VR device in the time slot, g_n(t) the small-scale fading channel gain between the VR device and the drone, and h_n^up(t) the large-scale fading effect between the VR device and the drone;

the large-scale fading effect is a function of distance:

h_n^up(t) = β_up · d_n(t)^(−α_up)

wherein β_up represents a constant related to the VR device's frequency, α_up the path loss exponent, d_n(t) the distance between the drone and the VR device in the time slot, and N_0 the white noise power;

the VR devices share the uplink bandwidth by frequency division multiplexing, and by the Shannon capacity formula the upload data rate r_n^up(t) of the VR device in time slot t is:

r_n^up(t) = (B_up / |N(t)|) · log₂(1 + Γ_n^up(t))

wherein B_up represents the channel bandwidth and |N(t)| the number of VR devices associated with the drone in time slot t;

the uplink transmission delay is:

t_n^up(t) = D_n^fore(t) / r_n^up(t)

wherein D_n^fore(t) represents the size of the foreground interaction information transmitted by the user's VR device to the drone;

the uplink transmission energy consumption is:

e_n^up(t) = P_n · t_n^up(t)

the drone's rendering processing delay is:

t_n^mec(t) = μ_uav · D_n^mec(t) / f_n(t)

wherein γ_uav represents the computing capability of the drone, μ_uav the CPU cycles required for the drone to render one bit of data, D_n^mec(t) the rendered content, and f_n(t) the computing resource allocated to each associated VR device;

in encoding/compression and downlink transmission, the drone MEC system compresses the rendered information with delay t_n^com(t), producing the compressed data D_n^com(t);

in data decoding, the decoding delay of the VR device receiving the encoded rendering information transmitted by the drone is t_n^dec(t); the data decoded by the VR device is the data information received on the downlink, and the energy consumed by the VR device in decoding is e_n^dec(t).
6. The method of claim 5, wherein the local and remote joint rendering mode renders the foreground interaction information on the VR device and the background environment on the drone, and the rendering completion time of the local and remote joint rendering mode is expressed as:

T_n^joint(t) = max(t_n^loc(t), t_n^rem(t)) + t_n^int(t)

wherein t_n^loc(t) represents the time required for local rendering, obtained by substituting the foreground interaction data D_n^fore(t) into the local rendering formula; t_n^rem(t) represents the time required for remote rendering, obtained by substituting the background environment data D_n^back(t) for D_n^fore(t) in the remote rendering formula; and t_n^int(t) represents the delay for the VR device to integrate the rendered foreground interaction information and background environment information; the total energy consumption is expressed as:

E_n^joint(t) = e_n^loc(t) + e_n^rem(t) + e_n^int(t)

wherein e_n^int(t) represents the energy consumed by the VR device to integrate the foreground interaction information and background environment information.
7. The deep learning based VR service drone edge computation method of claim 1, wherein the delay and energy consumption in step S2 are expressed as:

T_n(t) ∈ {T_n^loc(t), T_n^rem(t), T_n^joint(t)},  E_n(t) ∈ {E_n^loc(t), E_n^rem(t), E_n^joint(t)}

according to the rendering mode o_n(t) selected for VR device n;

a binary parameter δ_n(t) indicates whether the VR service rendering of the VR device is completed: δ_n(t) = 1 if the rendering delay T_n(t) does not exceed the set value, and δ_n(t) = 0 otherwise;

η_n(t) ∈ {0, 1} indicates whether the VR device has a VR service request in time slot t, η_n(t) = 1 indicating a request and η_n(t) = 0 no request; the rendering completion rate of all VR devices in each time slot is then:

c(t) = Σ_n δ_n(t) / Σ_n η_n(t)

the preset optimization flow is expressed as:

max_{L,O} (1/T) Σ_{t=1}^{T} c(t)  s.t. Σ_{t=1}^{T} E_n(t) ≤ E_th, ∀n

wherein the drone trajectory L = [l(1), …, l(T)] and the user rendering mode selection O = [o_1(1), …, o_N(1), …, o_1(T), …, o_N(T)] are the optimization variables, the optimization objective is the VR service rendering completion rate over T time slots, and the constraint is that the total energy consumption of each VR device is less than or equal to a given threshold E_th.
8. The deep learning based VR service drone edge computing method of claim 1, wherein the Markov decision process in step S3 models the preset optimization flow, the energy of the users' VR devices being represented as:

e(t) = [e_1(t), …, e_n(t), …, e_N(t)] ∈ [0, E_max]^N

and the drone position as:

l(t) = [x(t), y(t)]

wherein E_max is the starting energy of each VR device;

the flight action of the drone is represented as:

d(t) = (k_1(t), k_2(t)), k_1(t) ∈ [0, 2π], k_2(t) ∈ [0, D_max]

wherein k_1(t) indicates the drone's flight direction, k_2(t) its flight distance, and D_max the maximum flight distance of the drone in each time slot; the rendering mode is represented as:

O(t) = [o_1(t), …, o_N(t)];

the state update of the drone MEC system is represented as:

l(t+1) = l(t) + [k_2(t)·cos(k_1(t)), k_2(t)·sin(k_1(t))];

and, according to the selected preset VR rendering mode, the energy update of the user's VR device is represented as:

e_n(t+1) = e_n(t) − E_n(t).
CN202210172797.8A 2022-02-24 2022-02-24 VR (virtual reality) service unmanned aerial vehicle edge calculation method based on deep learning Pending CN114598702A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210172797.8A CN114598702A (en) 2022-02-24 2022-02-24 VR (virtual reality) service unmanned aerial vehicle edge calculation method based on deep learning

Publications (1)

Publication Number Publication Date
CN114598702A true CN114598702A (en) 2022-06-07

Family

ID=81804677

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210172797.8A Pending CN114598702A (en) 2022-02-24 2022-02-24 VR (virtual reality) service unmanned aerial vehicle edge calculation method based on deep learning

Country Status (1)

Country Link
CN (1) CN114598702A (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114065963A (en) * 2021-11-04 2022-02-18 湖北工业大学 Computing task unloading method based on deep reinforcement learning in power Internet of things

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114065963A (en) * 2021-11-04 2022-02-18 湖北工业大学 Computing task unloading method based on deep reinforcement learning in power Internet of things

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SHENGJIE DING, et al.: "UAV-enabled Edge Computing for Virtual Reality", ACM Digital Library, pages 1-8 *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination