CN114189521B - Method for collaborative computing offloading in F-RAN architecture - Google Patents
- Publication number
- CN114189521B (application CN202111531158.8A)
- Authority
- CN
- China
- Legal status: Active (the legal status is an assumption and is not a legal conclusion)
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/1001—Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
- H04L67/1004—Server selection for load balancing
- H04L67/1012—Server selection for load balancing based on compliance of requirements or conditions with available server resources
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G06F17/10—Complex mathematical operations
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/1001—Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
- H04L67/1004—Server selection for load balancing
- H04L67/1014—Server selection for load balancing based on the content of a request
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D30/00—Reducing energy consumption in communication networks
- Y02D30/70—Reducing energy consumption in communication networks in wireless communication networks
Abstract
The invention provides a method for cooperative computing offloading in an F-RAN architecture: an F-RAN offloading scheme based on NOMA, together with an offloading method based on successive convex approximation (SCA), the interior point method, and coalition games, so as to efficiently exploit the computing resources of edge nodes in the network. In the offloading scheme, a task user offloads its computing task via NOMA both to its associated primary F-AP and to an idle user with spare computing resources, and the primary F-AP further offloads part of the task to other secondary F-APs through the cooperative communication function among F-APs. Under the users' tolerable-delay constraint, a hierarchical iterative algorithm is proposed: the inner layer combines SCA with the interior point method to obtain the offloading decision once the user association is fixed, and the outer layer optimizes the user association based on coalition game theory, so that the total energy consumption of the system is minimized. Compared with offloading schemes and algorithms commonly used in the prior art, the system performance is significantly improved.
Description
Technical Field
The invention belongs to the technical field of communication, and particularly relates to a method for cooperative computing offloading in an F-RAN architecture, specifically an F-RAN (fog radio access network) offloading scheme based on NOMA (non-orthogonal multiple access) and an offloading method based on SCA (successive convex approximation), the interior point method, and coalition games.
Background
To meet critical latency requirements and relieve pressure on the fronthaul links, the F-RAN system architecture was proposed. The concept of the F-RAN developed from the fog computing paradigm proposed by Cisco, which moves part of the computing, storage, and networking functions down from the cloud center to the network edge: infrastructure in the radio access network (RAN), such as small base stations and routers, is adapted into fog nodes possessing storage, communication, radio-signal-processing, and resource-allocation and management capabilities, so as to respond quickly to low-latency requests from terminal devices. The F-RAN can therefore respond to user requests more promptly while occupying less fronthaul capacity. The proposal of the F-RAN architecture spurred research on cooperative computation offloading, which allows a group of neighboring terminal devices and access nodes to jointly complete the offloading of computation-intensive, delay-sensitive tasks; however, decentralized computing at the network edge also challenges the currently scarce spectrum resources.
Under limited spectrum resources, task offloading based on orthogonal multiple access (OMA) is difficult to complete within the time threshold in delay-sensitive scenarios, which reduces offloading efficiency. To solve this problem, non-orthogonal multiple access was proposed as one of the key technologies of 5G. Unlike OMA, NOMA allows multiple users to share the same time-frequency resource block: using the superposition coding (SC) principle, the signals of multiple users are transmitted jointly at the transmitter, and successive interference cancellation (SIC) is performed at the receiver to suppress the multi-user interference (MUI) caused by the superimposed signals, so that the desired user signals are decoded correctly. Compared with OMA, NOMA can achieve larger performance gains in spectral efficiency, energy efficiency, and delay performance.
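As a concrete illustration of the SC/SIC mechanism just described, the sketch below computes achievable rates when one transmitter serves two receivers on the same channel, assuming the stronger (near) receiver performs SIC. This is a textbook two-user model; the bandwidth, noise, and channel-gain values are illustrative stand-ins, not the patent's expressions.

```python
import math

def noma_rates(P, theta, g_near, g_far, B=1e6, N0=1e-13):
    """Achievable NOMA rates (bit/s) for two receivers sharing one channel.

    The far (weaker-channel) receiver decodes its own signal while treating
    the near receiver's signal as interference; the near receiver first
    removes the far receiver's signal via SIC, then decodes interference-free.
    Illustrative textbook model, not the patent's exact formulas.
    """
    p_near, p_far = theta * P, (1 - theta) * P
    r_far = B * math.log2(1 + p_far * g_far / (p_near * g_far + N0))
    r_near = B * math.log2(1 + p_near * g_near / N0)  # after SIC
    return r_near, r_far
```

Raising theta shifts power (and hence rate) toward the near receiver at the cost of extra interference seen by the far one, which is exactly the trade-off the power-allocation factor controls later in the document.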
Based on this, the prior art has proposed some improvements.
Scheme one: the prior-art work "POST: Parallel Offloading of Splittable Tasks in Heterogeneous Fog Networks" proposes a general task-offloading scheme for a heterogeneous fog wireless network comprising multiple task nodes and multiple offloading nodes, focusing on splittable tasks, i.e., how to effectively map the task nodes onto the offloading nodes so as to minimize the service delay of each task in a distributed manner. That work proves the existence of a generalized Nash equilibrium for the problem and provides a solution using the Gauss-Seidel iteration method.
Scheme two: the prior-art work "Latency-Driven Fog Cooperation Approach in Fog Radio Access Networks". The task user first sends its task data to its associated primary F-AP node; the primary F-AP coordinates with the other secondary F-AP nodes within its coverage, choosing whether to offload computation to a secondary F-AP and how large a subtask to assign it. After the tasks on all secondary F-AP nodes are completed, each primary F-AP node collects and consolidates the results and returns them to the target user. Based on the idea of a form-filling method, that scheme provides a dynamic delay-planning algorithm that meets users' low-delay requests through cooperative F-AP offloading under the F-RAN system.
The prior relevant literature available for reference also includes:
[1] T. Chiu, A. Pang, W. Chung, and J. Zhang, "Latency-Driven Fog Cooperation Approach in Fog Radio Access Networks," IEEE Trans. Serv. Comput., vol. 12, no. 5, pp. 698-711, 2019.
[2] Y. Liu, "Exploiting NOMA for Cooperative Edge Computing," IEEE Wirel. Commun., vol. 26, no. 5, pp. 99-103, 2019.
[3] J. Du et al., "When Mobile-Edge Computing (MEC) Meets Non-orthogonal Multiple Access (NOMA) for the Internet of Things (IoT): System Design and Optimization," IEEE I.
Disclosure of Invention
To fill the gaps and deficiencies of the prior art, the invention provides a method for cooperative computing offloading in an F-RAN architecture. It aims to fully utilize idle computing resources in the F-RAN edge network: an offloading scheme based on NOMA is proposed so that computing tasks can be completed in time within their tolerable delays, and, with the further goal of minimizing task-offloading energy consumption, an offloading method combining SCA, the interior point method, and coalition games is proposed, enriching the application scenarios of F-RAN cooperative systems and providing an alternative for handling low-delay computing tasks.
Specifically, an F-RAN offloading scheme based on NOMA and an offloading method based on SCA, the interior point method, and coalition games are provided to efficiently exploit the computing resources of edge nodes in the network. In the offloading scheme, a task user offloads its computing task via NOMA both to its associated primary F-AP and to an idle user with spare computing resources, and the primary F-AP further offloads part of the task to other secondary F-APs through the cooperative communication function among F-APs. Under the users' tolerable-delay constraint, a hierarchical iterative algorithm is proposed: the inner layer combines SCA with the interior point method to obtain the offloading decision once the user association is fixed, and the outer layer optimizes the user association based on coalition game theory, so that the total energy consumption of the system is minimized; compared with offloading schemes and algorithms commonly used in the prior art, system performance is significantly improved.
Prior-art scheme two selects one or more F-AP nodes to cooperatively complete user tasks according to the users' task sizes, the F-AP nodes' computing capacities, and the transmission delays. Although it effectively exploits the cooperative-communication characteristic of the F-RAN, it uses only the computing resources of the F-APs and ignores the computing capacity of the user equipment, leaving considerable room for improvement. With the growing number of mobile terminals, data traffic grows exponentially; applications such as augmented reality, face recognition, and ultra-high-definition video are increasingly popular, and many computing tasks are difficult to process in time within tolerable delays. The present proposal creatively provides a brand-new NOMA-based F-RAN cooperative offloading scheme and an offloading method based on SCA, the interior point method, and coalition games: it jointly considers the available computing resources of the user layer and the cooperative offloading capacity of the fog layer, and determines in detail the task user's optimal associated node, task-offloading proportions, and transmit power, thereby minimizing the system's offloading energy consumption. The scheme realizing cooperative computing offloading in the F-RAN architecture can serve as an alternative for F-RAN cooperative computing offloading.
The invention adopts the following technical scheme:
a method of cooperative computing offload in an F-RAN architecture, characterized by: the task user unloads the calculation task to the idle user and the main F-AP associated with the task user in a certain proportion through NOMA, the main F-AP further distributes the task to other auxiliary F-APs assisting in unloading in different proportions, and links between the auxiliary F-APs adopt TDMA transmission.
The scattered task user equipment can thus offload computing tasks to other nearby user equipment and to multiple fog nodes, enabling the tasks to be executed in parallel.
Further, consider a quasi-static scenario with multiple terminal devices and multiple F-APs, assuming that user locations and communication conditions are fixed. Terminal-device users are divided into TUEs and IUEs according to whether they currently have a computing task: a TUE currently has a computing task to process, while an IUE currently has no computing task and has idle computing resources to help other TUEs with computation offloading. TUEs have computing tasks of different sizes owing to different demands, and the F-APs and IUEs have different computing capacities owing to their different CPUs.
representing TUE sets asThe IUE set is denoted +.>The F-AP set is denoted +.>And assuming that the task possessed by the TUE is partitionable; TUE n offloads part of the task to the adjacent IUEm and a main +.>The main F-AP further divides the divided tasks, uninstalls the divided tasks to nearby auxiliary F-APs, and returns the calculation result to the main (U) after IUEm and F-APs finish the calculation task>Sorting, and returning the total calculation result to TUEn; thus, task n, i.e. TUE n, can be performed in TUEn, IUEm, mainParallel processing is carried out on other auxiliary F-APs so as to improve the processing speed of the task;
introduce array { D n ,C n ,t n Used to describe tasks n, D n Representing the input data size of task n, C n Representing the computation density of task n, i.e., the CPU cycles required to process 1bit of input data for task n, i.e., the cost D required to complete task n's computation n C n CPU cycles t n Representing the tolerant time delay of the task n;
introducing array a n ={a nn ,a nm ,a nf ∈[0,1]' to represent the division of task n, where a nn ,a nm ,a nf The task ratio calculated locally at TUEn, the task ratio of task n to IUEm and the task ratio of task n to F-APf are respectively expressed, so that the task ratio is satisfied
Further, the communication model is designed as follows: task n is divided into several subtasks and offloaded, according to the division proportions, to IUE m, the primary F-AP, and the other secondary F-APs; after the computation is completed, the secondary F-APs and the IUE return their results to the primary F-AP for collation.
link 1 represents the TUE to IUE communication Link, link 2 represents the TUE to primary F-AP communication Link, and Link 3 represents the primary F-AP to secondary F-AP communication Link; wherein Link 1 and 2 are transmitted using a NOMA scheme; assuming that each F-AP and TUE occupies orthogonal radio channels, the allocation of the channels is predetermined; the wireless channels of the TUE are multiplexed by the cooperatively calculated IUE and the main F-AP through NOMA technology, and the wireless channels distributed by the F-AP are used for unloading transmission among the F-APs; in the TUE transmission phase, i.e. NOMA transmission procedure, TUEn to IUEm and masterThe transmission time of (a) is respectively:
(2)
wherein the method comprises the steps ofRepresenting from TUE n to IUEm and main +.>Data rate, θ n P n And (1-theta) n )P n Indicating TUEn allocation to IUEm and main +.>Is set to the transmission power of (a);in this transmission phase, the transmission power consumption of TUE n is:
at the main partThe transmission stage to the auxiliary F-AP, namely the TDMA transmission process, is the same as the prior art scheme I mentioned in the background art, and the invention assumes that the auxiliary F-AP starts to perform calculation tasks after receiving all tasks; main->The transmission time to the secondary F-APf is:
wherein the method comprises the steps ofRepresenting the slave +.>Data rate to secondary F-APf; in this transmission phase, mainly->The transmission power consumption of (a) is:
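The two transmission phases described above, NOMA from the TUE and then TDMA among the F-APs, can be sketched as follows. The function names, and the convention that transmission energy equals power times transmission time, are ours; the link rates are supplied externally rather than derived from channel models.

```python
def tue_phase(D, a_nm, a_F, theta, P, r_iue, r_fap):
    """NOMA phase: TUE n sends a_nm*D bits to the IUE at rate r_iue and
    a_F*D bits (the total F-AP share, 1 - a_nn - a_nm) to the primary
    F-AP at rate r_fap, simultaneously, with powers theta*P and
    (1-theta)*P.  Sketch of the model above; symbol names are ours."""
    t_iue = a_nm * D / r_iue
    t_fap = a_F * D / r_fap
    energy = theta * P * t_iue + (1 - theta) * P * t_fap
    return t_iue, t_fap, energy

def fap_phase(D, a_nf_list, r_ff_list, P_F):
    """TDMA phase: the primary F-AP forwards a_nf*D bits to each
    secondary F-AP f in turn; a secondary F-AP starts computing only
    after its whole subtask has arrived."""
    times = [a * D / r for a, r in zip(a_nf_list, r_ff_list)]
    energy = P_F * sum(times)
    return times, energy
```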
further, the calculation model is designed as follows: each task may be divided into multiple sub-tasks for parallel processing on TUE local, IUE, and F-AP; the local calculation time of the task n and the calculation time in IUEm are as follows:
wherein delta n ,δ m The computing power of TUEn and IUEm, respectively, i.e., CPU clock frequency; the power consumption spent on the calculation of each CPU cycle is kdelta 2 Where κ represents a calculation constant factor, typically related to the chip architecture of the CPU, and δ is the calculation power; thus, task n consumes the energy of local and IUEm calculations:
unlike the computation on the user equipment, there is competition for the computing power of the F-AP among the subtasks for a plurality of subtasks offloaded to the same F-AP; assuming that the load balance is realized by the offloading task on each F-AP, the F-AP distributes own idle computing resources proportionally according to the computing task size of each subtask, so that each subtask can obtain the same computing time delay; the time consumed by the F-APf calculation task n is:
wherein delta f Computing power for F-APf; the energy consumed by the F-APf calculation task n is as follows:
thus, for a single computing task n: (1) The time spent executing task n on TUE n isTUEn energy consumption includes calculating the energy consumed by the subtasks and performing NOMA power transfer energy consumption, i.e. +.>(2) The time consumed for executing task n on IUEm is +.>Energy consumption is->(3) Main +.>The time delay model is the same as IUE, and the consumed time is +.>Since it also needs to send subtasks to other auxiliary F-APs, the energy consumption is +.>(4) For the auxiliary F-APf, the time delay also comprises the transmission time from the main F-AP to the auxiliary F-AP, and the consumed time is +.>The energy consumption only comprises the calculation energy consumption:
further, under the time delay constraint, F-APs participating in cooperative unloading are selected by determining an associated main F-AP of the TUE, and the unloading task proportion and NOMA power are reasonably distributed to jointly optimize the energy consumption of the whole system; the optimization problem is expressed as:
the optimization target (12 a) is the sum of power consumption consumed by N computing tasks in the whole scene to complete tasks at TUE, IUE, associated main F-AP and other auxiliary F-APs; constraint (12 b) ensures that the result of the computational task is complete; constraint (12 c) ensures the rationality of the task segmentation scale values; constraint (12 d) indicates that the secondary F-APf participating in collaborative offloading should be at the primary user-associatedIs within the communication coverage of (a); constraint (12 e) ensures that tasks can be completed within a tolerable delay; constraint (12 f) ensures NOMA power allocation factor θ n Within a reasonable range; constraint(12g) And (12 h) indicates that the TUE has and has only one associated primary F-AP.
Further, the NOMA power-allocation part of the TUE is a non-convex nonlinear function of the power-allocation value θ, which makes the optimization problem non-convex. The NOMA power-allocation optimization therefore reduces to the transmission-power-consumption minimization problem of a single TUE n:
min_p E_n^t(p)  s.t. 0 ≤ p ≤ P_n,    (12k)
where p = θ_n·P_n. A monotonicity analysis of this problem yields the optimal power-allocation value θ*, subject to feasibility conditions on the split ratio a_nm.
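The monotonicity argument above pins down the optimal NOMA split θ* in closed form. As an illustration, the following sketch recovers a near-optimal split numerically instead: it scans θ over a grid, discards values whose NOMA-phase delays violate t_n, and keeps the feasible θ with the lowest transmission energy. The Shannon-rate expressions, the SIC ordering (the F-AP cancels the IUE-bound signal), and all channel parameters are textbook stand-ins of ours, not the patent's exact formulas.

```python
import math

def best_theta(D, a_nm, a_F, t_n, P, B, g_iue, g_fap, N0=1e-13, grid=2000):
    """Numerical stand-in for the closed-form theta* obtained by
    monotonicity analysis: grid-search theta in (0, 1), keep values whose
    NOMA-phase delays both meet the tolerable delay t_n, and return the
    (theta, energy) pair with the lowest transmit energy, or None if no
    theta is feasible.  Channel parameters are illustrative."""
    best = None
    for i in range(1, grid):
        th = i / grid
        # IUE treats the F-AP-bound signal as interference; F-AP uses SIC.
        r_iue = B * math.log2(1 + th * P * g_iue / ((1 - th) * P * g_iue + N0))
        r_fap = B * math.log2(1 + (1 - th) * P * g_fap / N0)
        t1, t2 = a_nm * D / r_iue, a_F * D / r_fap
        if max(t1, t2) > t_n:
            continue  # violates the delay constraint
        e = th * P * t1 + (1 - th) * P * t2
        if best is None or e < best[1]:
            best = (th, e)
    return best
```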
Substituting θ* into (12a) does not change the structure of the problem. Since the transmission energy of the TUE accounts for only a very small part of the total energy consumption, the energy caused by the TUE transmission part is neglected to simplify the solution. Under a known user association, the optimization problem then simplifies to problem (13), subject to (12b)-(12e) and (12g).
will be theta * Is substituted into the problem (13), constraint T nm ≤t n Constant is true, butIs still non-convex, resulting in T nf ≤t n Not a convex constraint; by combining iterative algorithms of successive convex approximation and interior point method, sub-optimal solutions of the above problems are obtained:
constraint T by SCA method nf ≤t n Non-convex items in (a)Converting into a proper convex approximation function; the current optimal solution +.>According to the SCA principle, the i+1th iteration +.>The convex approximation upper bound of (2) is:
wherein,thus, by T nf Middle non-convex item->To obtain a convex constraint +.>The optimization problem (13) becomes a convex optimization problem, and the constraints (13 b) and (13 c) are hidden in the optimization target in a logarithmic form by using an interior point method with logarithmic barriers, and the optimization target is rewritten as follows:
the unloading optimization steps of the iterative algorithm combining the continuous convex approximation and the interior point method are as follows:
step one: initializing task partition vectorsPenalty coefficient ζ (0) And decrementing the coefficient μ, let k=1, to define a sufficiently small positive real number ε;
step two: calling equation (14) atThe place will be->Approximating a convex function, updating a penalty function; solving extreme points of penalty function>
Step three: when (when)When established, iterating the fourth step;
Step four: let xi (k+1) =μξ (k) K=k+1, call equation (14) atThe place will be->Approximating a convex function, updating a penalty function, solving extreme points of the penalty function>
Step five: when the condition of the third step is not met, the iteration is ended, and the current unloading decision is output
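The five steps above can be exercised on a toy instance. The sketch below applies the same pattern, SCA linearization of a non-convex constraint plus a shrinking logarithmic barrier, to minimize x² subject to the non-convex constraint c − x² ≤ 0, whose true optimum is x = sqrt(c). The problem data, step sizes, and inner solver (plain gradient descent) are our illustrative choices, not the patent's.

```python
def sca_log_barrier(c=4.0, x0=5.0, xi0=1.0, mu=0.2, eps=1e-6, inner_iters=300):
    """Toy run of Steps 1-5: minimize x^2 s.t. c - x^2 <= 0.
    SCA replaces the concave term -x^2 by its tangent at the current
    iterate xk, giving the convex surrogate c + xk^2 - 2*xk*x <= 0; the
    surrogate is folded into the objective as the log barrier
    -xi*log(-g(x)), whose weight xi shrinks by mu each outer round."""
    x, xi = x0, xi0              # Step 1: initial point and penalty coefficient
    while xi > eps:              # Step 3: stop once the barrier weight is tiny
        xk = x                   # linearization point for this SCA round
        g = lambda z: c + xk * xk - 2 * xk * z   # convex surrogate constraint
        for _ in range(inner_iters):             # Steps 2/4: minimize the penalty
            # gradient of x^2 - xi*log(-g(x)); g(x) < 0 on the feasible side
            grad = 2 * x + 2 * xi * xk / g(x)
            xn = x - 0.01 * grad
            if g(xn) < 0:        # keep the iterate strictly feasible
                x = xn
        xi *= mu                 # Step 4: shrink the barrier coefficient
    return x                     # Step 5: output the current decision
```

With c = 4 the iterates descend toward x = 2; each SCA round in fact reproduces a Newton-like update for sqrt(c), which is why the scheme converges quickly on this toy problem.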
Further, the coalition game algorithm is used to find the best user-association pairing:
Define a triple (N, Φ, E(Φ)), where N is the TUE set, Φ is the set of TUE-to-F-AP association pairs, and E(Φ) is the total system energy consumption under association Φ. Given an association Φ, if exchanging the association of user n from F-AP m to F-AP m' (m ≠ m') forms a new association pairing Φ' satisfying E(Φ') < E(Φ), the exchange is accepted; this defines the switch operation of the coalition game.
Further, within the offloading-optimization loop, all possible user associations are enumerated, and the lowest system energy achievable under each association is obtained by executing the SCA-based interior point method; if a new user association achieves lower system energy than the current one, the current association is updated. At the end of each iteration, if no better user association can be found, the loop is exited and the result is output.
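The switch-based search just described can be sketched as follows; `energy_of` stands in for the SCA-based inner solver of the text, and all names are ours.

```python
def coalition_association(tues, faps, energy_of, init=None):
    """Swap-based coalition search: starting from an arbitrary
    association (TUE -> primary F-AP), repeatedly try moving one TUE to
    another F-AP and keep the move whenever the total system energy
    energy_of(assoc) drops; stop at the first full pass with no
    improving swap (a stable coalition structure)."""
    assoc = dict(init) if init else {n: faps[0] for n in tues}
    best = energy_of(assoc)
    improved = True
    while improved:
        improved = False
        for n in tues:
            for f in faps:
                if f == assoc[n]:
                    continue
                trial = dict(assoc)
                trial[n] = f
                e = energy_of(trial)
                if e < best:      # the switch rule E(phi') < E(phi)
                    assoc, best, improved = trial, e, True
    return assoc, best
```

Because each accepted swap strictly decreases a bounded-below energy, the loop terminates; it finds a swap-stable association, not necessarily the global optimum.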
Further, when the SCA-based interior point method is invoked to compute the system energy, the initialization stage changes only the task-partition and power-allocation vectors of the switched TUE; the offloading vectors of the other TUEs remain consistent with the optimal offloading under the original user association.
Compared with the prior art, the technical contributions and improvements provided by the invention and its preferred embodiments mainly comprise:
1. A NOMA-based F-RAN cooperative offloading scheme. The task user offloads its computing task in given proportions via NOMA to an idle user and to its associated primary F-AP; the primary F-AP further distributes the task in different proportions to other secondary F-APs that assist in offloading, and the links between F-APs use TDMA transmission. In this scheme, scattered task user equipment can offload computing tasks to other adjacent user equipment and to multiple fog nodes, so that tasks execute in parallel and the offloading delay is reduced.
2. An offloading method based on SCA, the interior point method, and coalition games. Because the system offloading-energy-minimization problem is non-convex, it is decoupled: the optimal NOMA power-allocation value is obtained through monotonicity analysis, an SCA-based interior point method is proposed to solve the task-partition problem, and, combining this algorithm with coalition game theory, a hierarchical iterative user-association algorithm is proposed.
3. Experiments are conducted under preset conditions for verification. The proposed offloading scheme is compared with a NOMA offloading scheme, a D2D offloading scheme, and a cooperative F-AP offloading scheme, showing that, while meeting task processing within the tolerable delay, the proposed scheme better reduces system energy consumption. Meanwhile, the proposed offloading method is compared with average offloading, selfish task offloading, and single-F-AP offloading, demonstrating its feasibility in reducing system energy consumption.
One key point of the proposal is an offloading scheme that fully utilizes idle computing resources in the edge network: exploiting the divisibility of tasks, it improves the parallel-processing capability for user tasks by combining NOMA communication transmission with the cooperative transmission capability of the F-APs. It can serve as an alternative offloading scheme in F-RAN edge-network cooperative offloading scenarios.
The second key point is a hierarchical cooperative offloading optimization method: for the non-convex problem, an optimization method combining SCA and the interior point method is proposed to solve the task-partition and offloading-node-selection problems, and a coalition game method is used to solve the user association, thereby meeting users' tolerable-delay requirements and reducing system energy consumption.
Compared with the prior art, the method fully accounts for the available computing resources of various devices, jointly considering task users, idle users, and F-APs in computation offloading, and provides a brand-new offloading scheme that improves the utilization of computing resources in the edge network. In addition, it considers the offloading energy consumption of the whole system and applies the combination of SCA, the interior point method, and coalition games in the offloading-optimization method, thereby solving a complex non-convex mixed-integer nonlinear programming problem. Finally, the proposed offloading scheme and method are verified and compared in performance against common offloading schemes and methods, demonstrating that the invention is a viable alternative for edge cooperative offloading in the F-RAN.
Drawings
Fig. 1 is a schematic diagram of the NOMA-based F-RAN cooperative offloading scheme of the invention;
Fig. 2 is a schematic flow diagram of the offloading method of the invention based on SCA, the interior point method, and coalition games;
Fig. 3 is a schematic diagram comparing the total energy consumption and task-user transmission energy consumption of the proposed offloading method with average offloading, selfish task offloading, and single-F-AP offloading, according to an embodiment of the invention;
Fig. 4 is a schematic diagram comparing the offloading delay of the proposed offloading scheme with a NOMA offloading scheme, a D2D offloading scheme, and a cooperative F-AP offloading scheme, according to an embodiment of the invention;
Fig. 5 is a schematic diagram comparing the offloading energy consumption of the proposed offloading scheme with a NOMA offloading scheme, a D2D offloading scheme, and a cooperative F-AP offloading scheme, according to an embodiment of the invention.
Detailed Description
In order to make the features and advantages of the present patent more comprehensible, embodiments accompanied with figures are described in detail below.
1. NOMA-based F-RAN cooperative offloading scheme
Consider a quasi-static scenario with multiple terminal devices and multiple F-APs, assuming that user locations and communication conditions are fixed. As shown in Fig. 1, terminal-device users are divided into TUEs and IUEs according to whether they currently have a computing task: a TUE currently has a computing task to process, while an IUE currently has no computing task and has idle computing resources to assist other TUEs with computation offloading. TUEs have computing tasks of different sizes owing to different demands, and the F-APs and IUEs have different computing capacities owing to their different CPUs.
The TUE set is denoted 𝒩, the IUE set is denoted ℳ, and the F-AP set is denoted ℱ, and the tasks possessed by the TUEs are assumed to be divisible. In FIG. 1, TUE n uses NOMA to offload part of its task to an adjacent IUE m and to a main F-AP; the main F-AP further divides its share of the task and offloads the pieces to nearby auxiliary F-APs. After IUE m and the F-APs finish their computation tasks, they return the results to the main F-AP for aggregation, which then returns the total computation result to TUE n. Thus, task n (for convenience of explanation, the task of TUE n is hereinafter denoted task n) can be processed in parallel on TUE n, IUE m, the main F-AP and the other auxiliary F-APs, improving the processing speed of the task.
A triple {D_n, C_n, t_n} is introduced to describe task n: D_n denotes the input data size of task n; C_n denotes the computation density of task n, i.e., the CPU cycles required to process 1 bit of input data, so that completing the computation of task n costs D_n·C_n CPU cycles; t_n denotes the tolerable delay of task n. In addition, since this embodiment considers divisible tasks, a vector a_n = {a_nn, a_nm, a_nf} ∈ [0,1] is introduced to represent the division of task n, where a_nn, a_nm and a_nf respectively denote the fraction of task n computed locally at TUE n, the fraction offloaded to IUE m, and the fraction offloaded to F-AP f, satisfying a_nn + a_nm + Σ_f a_nf = 1.
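As a concrete illustration, the task model above can be sketched in code. This is a hedged sketch: only D_n, C_n, t_n and the division vector a_n come from the text; the class and function names are illustrative.

```python
from dataclasses import dataclass

@dataclass
class Task:
    D: float  # D_n: input data size (bits)
    C: float  # C_n: computation density (CPU cycles per bit)
    t: float  # t_n: tolerable delay (seconds)

    @property
    def total_cycles(self) -> float:
        # completing task n costs D_n * C_n CPU cycles
        return self.D * self.C

def valid_split(a_nn: float, a_nm: float, a_nf: list, eps: float = 1e-9) -> bool:
    # each ratio must lie in [0, 1] and all ratios together must sum to 1
    parts = [a_nn, a_nm] + list(a_nf)
    return all(0.0 <= a <= 1.0 for a in parts) and abs(sum(parts) - 1.0) < eps
```

For example, a 1 Mbit task with density 100 cycles/bit costs 10^8 cycles, and the division (0.4, 0.3, [0.3]) is feasible while (0.5, 0.3, [0.3]) is not.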
(1) Communication model: task n is divided into multiple subtasks and offloaded, according to the division proportions, to IUE m, the main F-AP and the other secondary F-APs. After the computation is completed, the secondary F-APs and the IUE return the computation results to the main F-AP for aggregation. This embodiment assumes that the overhead incurred by the result-return process is negligible.
As shown in fig. 1, there are three types of communication links in this scenario: Link 1 denotes the TUE-to-IUE communication link, Link 2 the TUE-to-primary-F-AP link, and Link 3 the primary-F-AP-to-secondary-F-AP link. Links 1 and 2 use NOMA transmission. It is assumed that each F-AP and TUE occupies orthogonal radio channels and that the channel allocation is predetermined. The wireless channel of a TUE is multiplexed by the cooperatively computing IUE and the primary F-AP via NOMA, while the wireless channels allocated to the F-APs are used for offload transmissions between F-APs.
Specifically, in the transmission phase of the TUE, i.e., the NOMA transmission process, the transmission times from TUE n to IUE m and to the main F-AP are respectively:

T_nm^trans = a_nm·D_n / R_nm,  T_nF^trans = a_nF·D_n / R_nF,

where R_nm and R_nF denote the data rates from TUE n to IUE m and to the main F-AP, and θ_n·P_n and (1−θ_n)·P_n denote the transmission powers that TUE n allocates to IUE m and to the main F-AP. In this transmission phase, the transmission energy consumption of TUE n is

E_n^trans = θ_n·P_n·T_nm^trans + (1−θ_n)·P_n·T_nF^trans.
In the transmission phase from the main F-AP to the secondary F-APs, i.e., the TDMA transmission process (the same as the prior-art scheme mentioned in the background), the invention assumes that a secondary F-AP starts performing its computation task only after receiving all of its subtasks. The transmission time from the main F-AP to secondary F-AP f is

T_f^trans = a_nf·D_n / R_f,

where R_f denotes the data rate from the main F-AP to secondary F-AP f. In this transmission phase, the transmission energy consumption of the main F-AP is its transmission power multiplied by the transmission time.
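Both link models above amount to volume-over-rate delays plus power-times-time energies. A minimal hedged sketch follows; the Shannon-style rate helper is an added assumption, not stated in the text.

```python
import math

def rate(bandwidth_hz: float, sinr: float) -> float:
    # assumed Shannon-style achievable rate of a link (bits/second)
    return bandwidth_hz * math.log2(1.0 + sinr)

def tx_time(ratio: float, D: float, r: float) -> float:
    # time to push a subtask's share (ratio * D_n bits) over a link of rate r
    return ratio * D / r

def tue_tx_energy(theta: float, P: float, t_iue: float, t_fap: float) -> float:
    # TUE n splits its power P_n via NOMA: theta*P_n toward the IUE and
    # (1 - theta)*P_n toward the main F-AP; energy is power x time per branch
    return theta * P * t_iue + (1.0 - theta) * P * t_fap
```

With a 1 MHz channel at SINR 3 the rate is 2 Mbit/s, so half of a 1 Mbit task takes 0.25 s to transmit.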
(2) Computation model: what is considered in this study is a divisible task, i.e., each task can be divided into multiple subtasks for parallel processing on the TUE locally, the IUE and the F-APs. The local computation time of task n and its computation time on IUE m are

T_n^comp = a_nn·D_n·C_n / δ_n,  T_nm^comp = a_nm·D_n·C_n / δ_m,

where δ_n and δ_m are the computing capacities of TUE n and IUE m respectively, i.e., their CPU clock frequencies (CPU cycles/second). The energy spent per CPU cycle is κδ², where κ denotes a computation constant factor, typically related to the chip architecture of the CPU, and δ is the computing capacity (CPU cycles/second). Thus, the energy consumed by task n in local and IUE m computation is

E_n^comp = κ·δ_n²·a_nn·D_n·C_n,  E_nm^comp = κ·δ_m²·a_nm·D_n·C_n.
Unlike computation on the user equipment, subtasks offloaded to the same F-AP compete for that F-AP's computing capacity. Assuming load balancing of the offloaded tasks on each F-AP, the F-AP allocates its idle computing resources proportionally to the size of each subtask, so that every subtask obtains the same computing delay. The time consumed by F-AP f to compute task n is

T_nf^comp = (Σ_i a_if·D_i·C_i) / δ_f,

where δ_f is the computing capacity of F-AP f and the sum runs over the subtasks offloaded to F-AP f. The energy consumed by F-AP f to compute task n is

E_nf^comp = κ·δ_f²·a_nf·D_n·C_n.
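The proportional-sharing rule above can be sketched as follows (a hedged illustration; the helper names and the value of κ are placeholders). Because the F-AP splits its frequency δ_f in proportion to each subtask's cycle demand, every subtask sees the same delay, equal to the total offloaded cycles divided by δ_f.

```python
def fap_compute_delays(subtask_cycles: list, delta_f: float) -> list:
    # the F-AP shares delta_f (cycles/s) in proportion to each subtask's cycles,
    # so every subtask's delay equals total cycles / delta_f
    total = sum(subtask_cycles)
    delays = []
    for c in subtask_cycles:
        share = delta_f * c / total  # proportional slice of the F-AP capacity
        delays.append(c / share)     # identical for all subtasks
    return delays

def compute_energy(cycles: float, kappa: float, delta: float) -> float:
    # energy per CPU cycle is kappa * delta^2, as in the text
    return kappa * delta ** 2 * cycles
```

For instance, subtasks of 10^8 and 3·10^8 cycles on a 2 GHz F-AP both finish in 0.2 s, confirming the equal-delay property.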
Thus, for a single computing task n: (1) the time spent executing task n on TUE n is its local computation time, and the energy consumption of TUE n comprises the energy to compute its subtask plus the NOMA transmission energy, i.e., E_n^comp + E_n^trans; (2) the time consumed executing task n on IUE m is the transmission time plus the computation time, with energy consumption E_nm^comp; (3) for the main F-AP of TUE n, the delay model is the same as for the IUE, and since it must also forward subtasks to the other auxiliary F-APs, its energy consumption comprises both computation and transmission energy; (4) for auxiliary F-AP f, the delay additionally includes the transmission time from the main F-AP to the auxiliary F-AP, while its energy consumption comprises only computation energy.
2. Offloading method based on SCA, the interior-point method and coalition games
In this embodiment, the study focuses on jointly optimizing the energy consumption of the whole system under delay constraints by determining the associated main F-AP of each TUE, selecting the F-APs participating in cooperative offloading, and reasonably allocating the offloading task proportions and the NOMA power. The optimization problem can be expressed as:
The optimization objective (12a) is the sum of the energy consumed by the N computing tasks in the overall scenario to complete the tasks at the TUEs, IUEs, associated primary F-APs and other secondary F-APs. Constraint (12b) ensures that the result of each computation task is complete. Constraint (12c) ensures the validity of the task-division ratio values. Constraint (12d) indicates that a secondary F-AP f participating in collaborative offloading must be within the communication coverage of the user-associated primary F-AP. Constraint (12e) ensures that tasks can be completed within the tolerable delay. Constraint (12f) keeps the NOMA power allocation factor θ_n within a valid range. Constraints (12g) and (12h) indicate that each TUE has one and only one associated primary F-AP.
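Since the branches of a task run in parallel, the delay constraint binds on the slowest branch, while the objective sums energies across all participating nodes. A hedged sketch of this aggregation (function names are illustrative):

```python
def task_delay(t_local: float, t_iue: float, t_fap_list: list) -> float:
    # with parallel execution, task n finishes when its slowest branch finishes
    return max([t_local, t_iue] + list(t_fap_list))

def system_energy(per_task_energies: list) -> float:
    # objective (12a): total energy summed over all N tasks and their nodes
    return sum(sum(branches) for branches in per_task_energies)
```

Checking constraint (12e) then amounts to `task_delay(...) <= t_n` for every task n.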
(1) Interior point method unloading optimization based on SCA
Since E_n^trans is a non-convex nonlinear function of the power allocation value θ, an attempt is first made to optimize the NOMA power allocation of the TUE to remove this source of non-convexity from the optimization problem. The NOMA power allocation optimization can be reduced to the transmission-energy minimization problem of a single TUE n:
s.t. 0 ≤ p ≤ P_n (12j)
where p = θ_n·P_n. Monotonicity analysis of this problem yields the optimal power allocation value θ*, in which θ* and a_nm must satisfy the corresponding feasibility condition obtained from the analysis.
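The closed-form expression for θ* is not legible in the translation. As a hedged numeric stand-in, a one-dimensional search can exploit the unimodality implied by the monotonicity analysis to minimize the per-TUE transmission energy over θ ∈ [0, 1]:

```python
def golden_min(f, lo: float = 0.0, hi: float = 1.0, tol: float = 1e-7) -> float:
    # golden-section search for the minimizer of a unimodal function f on [lo, hi]
    phi = (5 ** 0.5 - 1) / 2
    a, b = lo, hi
    c, d = b - phi * (b - a), a + phi * (b - a)
    while b - a > tol:
        if f(c) < f(d):
            b, d = d, c           # minimum lies in [a, d]
            c = b - phi * (b - a)
        else:
            a, c = c, d           # minimum lies in [c, b]
            d = a + phi * (b - a)
    return (a + b) / 2
```

Here `f` would be the transmission-energy function E_n^trans(θ) of a single TUE; any unimodal surrogate works the same way.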
Substituting θ* into (12a) does not remove the remaining non-convexity. Since the transmission energy of the TUEs occupies only a very small part of the total energy consumption, to simplify the solution this embodiment ignores the energy consumed by the TUE transmission; then, under the assumption of known user association, the optimization problem can be simplified to:
s.t. (12b)-(12e), (12g) (13c)
Substituting the value of θ* into problem (13), the constraint T_nm ≤ t_n always holds, but T_nf remains non-convex, so T_nf ≤ t_n is not a convex constraint. To solve this problem, this embodiment proposes an iterative algorithm combining successive convex approximation (SCA) with the interior-point method to obtain a suboptimal solution of the above problem.
The non-convex term in constraint T_nf ≤ t_n is converted by the SCA method into a suitable convex approximation. Given the current optimal solution a^(i) at the i-th iteration, by the SCA principle the convex approximate upper bound at iteration i+1 is obtained by linearizing the non-convex term around a^(i). Replacing the non-convex term in T_nf with this convex upper bound yields a convex constraint, and optimization problem (13) becomes a convex optimization problem. This embodiment then uses an interior-point method with logarithmic barriers, hiding constraints (13b) and (13c) in logarithmic form inside the optimization objective and rewriting the objective as:
the unloading optimization method based on the SCA interior point method comprises the following steps:
step one: initializing task partition vectorsPenalty coefficient ζ (0) And decrementing the coefficient μ, let k=1, to define a sufficiently small positive real number ε.
Step two: calling equation (14) atThe place will be->The penalty function is updated approximately as a convex function. Solving extreme points of penalty function>
Step three: when (when)And when the method is established, performing iteration of the step four.
Step four: let xi (k+1) =μξ (k) K=k+1, call equation (14) atThe place will be->Approximating a convex function, updating a penalty function, solving extreme points of the penalty function>
Step five: when the condition of the third step is not met, the iteration is ended, and the current unloading decision is output
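Steps one to five form a standard shrinking-barrier loop. A hedged skeleton follows; the `solve_penalized` callback stands in for convexifying T_nf at the current point and minimizing the log-barrier penalty function, which the text does with an interior-point solve.

```python
def sca_interior_point(a0, xi0, mu, eps, solve_penalized, max_iter=100):
    # a0: initial task-division vector; xi0: penalty coefficient; mu: decrement
    a, xi = list(a0), xi0
    for _ in range(max_iter):
        # steps two/four: convexify at the current point, minimize the penalty fn
        a_new = solve_penalized(a, xi)
        # steps three/five: stop when successive solutions change by less than eps
        if max(abs(x - y) for x, y in zip(a_new, a)) < eps:
            return a_new
        a, xi = a_new, mu * xi  # shrink the barrier weight and iterate
    return a
```

Any contraction-type inner solve converges under this loop; the quality of the result depends entirely on the inner penalized minimization.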
(2) User association method based on alliance game
To find the best user-association pairing, a coalition game algorithm is utilized. The coalition game used in this embodiment is defined by a triple (𝒩, Π, U), where 𝒩 is the TUE set, Π is the set of association pairings with the F-APs, and U(Π) is the total system energy consumption under association Π. Given an association pairing Π, if exchanging the association of user n from F-AP m to F-AP m′ (m ≠ m′) forms a new association pairing Π′ satisfying U(Π′) < U(Π), the exchange is accepted. The switching operation can be expressed as Π → Π′.
The combined use of coalition game theory and the SCA-based interior-point method is illustrated in fig. 2: all possible user associations are enumerated in a loop, the lowest system energy consumption attainable under each user association is obtained by executing the SCA-based interior-point method, and if a new user association achieves lower system energy consumption than the current one, the current user association is updated. If at the end of an iteration no better user association can be found, the loop is exited and the result is output.
Notably, each invocation of the SCA-based interior-point method takes time, so the initialization stage only changes the task-division and power-allocation vectors of TUE n, keeping the offloading vectors of the other TUEs consistent with the optimal offloading under the original user association. The reason is that an initial point closer to the solution effectively reduces the number of iterations.
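The loop of fig. 2 can be condensed into a swap-based search. This is a hedged sketch: `energy_of` abstracts the SCA-based interior-point evaluation of one association case, an assumption of this illustration.

```python
def coalition_game(assoc, faps, energy_of):
    # assoc: dict TUE -> associated main F-AP; faps: candidate F-AP list
    best = dict(assoc)
    best_e = energy_of(best)
    improved = True
    while improved:
        improved = False
        for n in list(best):
            for f in faps:
                if f == best[n]:
                    continue
                cand = dict(best)
                cand[n] = f  # swap operation: re-associate user n from m to m'
                e = energy_of(cand)
                if e < best_e:  # accept only strictly energy-reducing swaps
                    best, best_e = cand, e
                    improved = True
    return best
```

Because every accepted swap strictly lowers U(Π) and the association space is finite, the loop terminates at an association where no single swap improves the total energy.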
3. Experimental results and analysis of collaborative offloading schemes and methods
To verify the effectiveness of this embodiment, the proposed offloading scheme is compared with the NOMA offloading scheme, the D2D offloading scheme and the cooperative F-AP offloading scheme under the same user profile, while the proposed offloading method is compared with average offloading, selfish task offloading and single F-AP offloading.
Fig. 3 shows the total system energy consumption and the TUE transmission energy consumption under different offloading algorithms as the number of TUEs increases. The figure shows that the transmission energy of the TUEs accounts for only a very small part of the total system energy consumption, 0.34% on average, so its influence is almost negligible. Compared with the other algorithms, the algorithm of this embodiment achieves lower system energy consumption, and its gain grows as the number of task users increases.
Figs. 4 and 5 illustrate the per-TUE delay and total system energy consumption of the proposed scheme versus schemes commonly used in other computation-offloading studies. As seen in fig. 4, the embodiment of the present invention satisfies the users' tolerable delay; although the other algorithms achieve lower delay for some users, they exceed the tolerable-delay bound for others. As seen in fig. 5, the D2D offloading scheme achieves lower system energy consumption than the present scheme (leftmost), but fig. 4 shows that this performance comes at the expense of user delay. In summary, compared with the schemes adopted in other current studies, the offloading scheme of this embodiment better achieves low system energy consumption while meeting the users' delay-tolerance requirements.
The present invention is not limited to the above-mentioned preferred embodiments, and any person who can obtain other methods of collaborative computing offloading in the F-RAN architecture under the teaching of the present invention shall fall within the scope of the present invention.
Claims (7)
1. A method of cooperative computing offload in an F-RAN architecture, characterized by: the task user unloads the calculation task to the idle user and the main F-AP associated with the task user in a certain proportion through NOMA, the main F-AP further distributes the task to other auxiliary F-APs assisting in unloading in different proportions, and links between the auxiliary F-APs adopt TDMA transmission;
the distributed task user equipment can offload computing tasks to other adjacent user equipment and a plurality of fog nodes, so that the tasks are executed in parallel;
considering quasi-static scenarios of multiple terminal devices and multiple F-APs, assuming that user location and communication conditions are fixed; the end device user is divided into TUE and IUE according to whether there is a calculation task at this time: the TUE indicates that the UE currently has a computing task to process, and the IUE indicates that the UE currently has no computing task and has idle computing resources to help other TUEs to perform computing offloading; the TUE has different calculation tasks with different sizes due to different demands, and the F-AP and the IUE have different calculation capacities due to different equipped CPUs;
the TUE set is denoted 𝒩, the IUE set is denoted ℳ, and the F-AP set is denoted ℱ, and the tasks possessed by the TUEs are assumed to be divisible; TUE n offloads part of its task to an adjacent IUE m and to a main F-AP; the main F-AP further divides its share of the task and offloads the pieces to nearby auxiliary F-APs; after IUE m and the F-APs finish their computation tasks, they return the results to the main F-AP for aggregation, which then returns the total computation result to TUE n; thus, task n, i.e., the task of TUE n, can be processed in parallel on TUE n, IUE m, the main F-AP and the other auxiliary F-APs, improving the processing speed of the task;
a triple {D_n, C_n, t_n} is introduced to describe task n: D_n denotes the input data size of task n; C_n denotes the computation density of task n, i.e., the CPU cycles required to process 1 bit of input data, so that completing the computation of task n costs D_n·C_n CPU cycles; t_n denotes the tolerable delay of task n;
a vector a_n = {a_nn, a_nm, a_nf} ∈ [0,1] is introduced to represent the division of task n, where a_nn, a_nm and a_nf respectively denote the fraction of task n computed locally at TUE n, the fraction offloaded to IUE m, and the fraction offloaded to F-AP f, satisfying a_nn + a_nm + Σ_f a_nf = 1;
the communication model is designed as follows: task n is split into multiple subtasks and offloaded, according to the split proportions, to IUE m, the main F-AP and other secondary F-APs; after the computation is completed, the secondary F-APs and the IUE return the computation results to the main F-AP for aggregation;
Link 1 denotes the TUE-to-IUE communication link, Link 2 the TUE-to-primary-F-AP link, and Link 3 the primary-F-AP-to-secondary-F-AP link; Links 1 and 2 use NOMA transmission; it is assumed that each F-AP and TUE occupies orthogonal radio channels and that the channel allocation is predetermined; the wireless channel of a TUE is multiplexed by the cooperatively computing IUE and the main F-AP via NOMA, while the wireless channels allocated to the F-APs are used for offload transmissions between F-APs;
in the TUE transmission phase, i.e., the NOMA transmission process, the transmission times from TUE n to IUE m and to the main F-AP are respectively:

T_nm^trans = a_nm·D_n / R_nm,  T_nF^trans = a_nF·D_n / R_nF,

where R_nm and R_nF denote the data rates from TUE n to IUE m and to the main F-AP, and θ_n·P_n and (1−θ_n)·P_n denote the transmission powers that TUE n allocates to IUE m and to the main F-AP; in this transmission phase, the transmission energy consumption of TUE n is:

E_n^trans = θ_n·P_n·T_nm^trans + (1−θ_n)·P_n·T_nF^trans;
in the transmission phase from the main F-AP to the secondary F-APs, i.e., the TDMA transmission process, it is assumed that a secondary F-AP starts performing its computation task only after receiving all of its subtasks; the transmission time from the main F-AP to secondary F-AP f is:

T_f^trans = a_nf·D_n / R_f,

where R_f denotes the data rate from the main F-AP to secondary F-AP f; in this transmission phase, the transmission energy consumption of the main F-AP is its transmission power multiplied by the transmission time.
2. the method of collaborative computing offloading in an F-RAN architecture according to claim 1, wherein:
the computation model is designed as follows: each task may be divided into multiple subtasks for parallel processing on the TUE locally, the IUE and the F-APs; the local computation time of task n and its computation time on IUE m are:

T_n^comp = a_nn·D_n·C_n / δ_n,  T_nm^comp = a_nm·D_n·C_n / δ_m,

where δ_n and δ_m are the computing capacities of TUE n and IUE m respectively, i.e., their CPU clock frequencies; the energy spent per CPU cycle is κδ², where κ denotes a computation constant factor, typically related to the chip architecture of the CPU, and δ is the computing capacity; thus, the energy consumed by task n in local and IUE m computation is:

E_n^comp = κ·δ_n²·a_nn·D_n·C_n,  E_nm^comp = κ·δ_m²·a_nm·D_n·C_n;
unlike computation on the user equipment, subtasks offloaded to the same F-AP compete for that F-AP's computing capacity; assuming load balancing of the offloaded tasks on each F-AP, the F-AP allocates its idle computing resources proportionally to the size of each subtask, so that every subtask obtains the same computing delay; the time consumed by F-AP f to compute task n is:

T_nf^comp = (Σ_i a_if·D_i·C_i) / δ_f,

where δ_f is the computing capacity of F-AP f and the sum runs over the subtasks offloaded to F-AP f; the energy consumed by F-AP f to compute task n is:

E_nf^comp = κ·δ_f²·a_nf·D_n·C_n;
thus, for a single computing task n: (1) the time spent executing task n on TUE n is its local computation time, and the energy consumption of TUE n comprises the energy to compute its subtask plus the NOMA transmission energy, i.e., E_n^comp + E_n^trans; (2) the time consumed executing task n on IUE m is the transmission time plus the computation time, with energy consumption E_nm^comp; (3) for the main F-AP of TUE n, the delay model is the same as for the IUE, and since it must also forward subtasks to the other auxiliary F-APs, its energy consumption comprises both computation and transmission energy; (4) for auxiliary F-AP f, the delay additionally includes the transmission time from the main F-AP to the auxiliary F-AP, while its energy consumption comprises only computation energy.
3. the method of collaborative computing offloading in an F-RAN architecture according to claim 2, wherein:
under the delay constraint, the energy consumption of the whole system is jointly optimized by determining the associated main F-AP of the TUE, selecting the F-APs participating in cooperative offloading, and reasonably allocating the offloading task proportions and the NOMA power; the optimization problem is expressed as:
the optimization objective, equation (12a), is the sum of the energy consumed by the N computing tasks in the overall scenario to complete the tasks at the TUEs, IUEs, associated primary F-APs and other secondary F-APs; constraint (12b) ensures that the result of each computation task is complete; constraint (12c) ensures the validity of the task-division ratio values; constraint (12d) indicates that a secondary F-AP f participating in collaborative offloading must be within the communication coverage of the user-associated primary F-AP; constraint (12e) ensures that tasks can be completed within the tolerable delay; constraint (12f) keeps the NOMA power allocation factor θ_n within a valid range; constraints (12g) and (12h) indicate that each TUE has one and only one associated primary F-AP.
4. A method of collaborative computing offloading in an F-RAN architecture according to claim 3, wherein:
since E_n^trans is a non-convex nonlinear function of the power allocation value θ, the NOMA power allocation of the TUE is optimized first to remove this source of non-convexity from the optimization problem; the NOMA power allocation optimization reduces to the transmission-energy minimization problem of a single TUE n:
s.t. 0 ≤ p ≤ P_n (12j)
where p = θ_n·P_n; monotonicity analysis of this problem yields the optimal power allocation value θ*, in which θ* and a_nm must satisfy the corresponding feasibility condition obtained from the analysis;
substituting θ* into equation (12a) does not remove the remaining non-convexity; since the transmission energy of the TUEs occupies only a very small part of the total energy consumption, to simplify the solution the energy consumed by the TUE transmission is ignored; then, under the assumption of known user association, the optimization problem can be simplified to:
s.t. (12b)-(12e) (13c)
substituting the value of θ* into problem (13), the constraint T_nm ≤ t_n always holds, but T_nf remains non-convex, so T_nf ≤ t_n is not a convex constraint; a suboptimal solution of the above problem is obtained by an iterative algorithm combining successive convex approximation with the interior-point method:
the non-convex term in constraint T_nf ≤ t_n is converted by the SCA method into a suitable convex approximation; given the current optimal solution a^(i) at the i-th iteration, by the SCA principle the convex approximate upper bound at iteration i+1 is obtained by linearizing the non-convex term around a^(i); replacing the non-convex term in T_nf with this convex upper bound yields a convex constraint, and optimization problem (13) becomes a convex optimization problem; an interior-point method with logarithmic barriers is used, hiding constraints (13b) and (13c) in logarithmic form inside the optimization objective and rewriting the objective as:
the offloading optimization steps of the iterative algorithm combining successive convex approximation and the interior-point method are as follows:
step one: initialize the task-division vector a^(0), the penalty coefficient ξ^(0) and the decrement coefficient μ; let k = 1 and define a sufficiently small positive real number ε;
step two: call equation (14) at a^(k−1) to approximate the non-convex term as a convex function and update the penalty function; solve for the extreme point a^(k) of the penalty function;
step three: while the change between successive solutions is still at least ε, perform the iteration of step four;
step four: let ξ^(k+1) = μ·ξ^(k) and k = k + 1; call equation (14) at a^(k−1) to approximate the non-convex term as a convex function, update the penalty function, and solve for its extreme point a^(k);
step five: when the condition of step three is no longer met, end the iteration and output the current offloading decision a^(k).
5. The method of collaborative computing offloading in an F-RAN architecture according to claim 4, wherein: the optimal user-association pairing is sought using a coalition game algorithm:
a triple (𝒩, Π, U) is defined, where 𝒩 is the TUE set, Π is the set of association pairings with the F-APs, and U(Π) is the total system energy consumption under association Π; given an association pairing Π, if exchanging the association of user n from F-AP m to F-AP m′ (m ≠ m′) forms a new association pairing Π′ satisfying U(Π′) < U(Π), the exchange is accepted; the switching operation is expressed as Π → Π′.
6. The method of collaborative computing offloading in an F-RAN architecture according to claim 5, wherein:
in the offloading optimization loop, all possible user-association cases are enumerated; the lowest system energy consumption attainable under each user association is obtained by executing the SCA-based interior-point method, and if a new user association achieves lower system energy consumption than the original one, the current user association is updated; if at the end of an iteration no better user association can be found, the loop is exited and the result is output.
7. The method of collaborative computing offloading in an F-RAN architecture according to claim 6, wherein: when the SCA-based interior-point method is invoked, the initialization stage only changes the task-division and power-allocation vectors of TUE n, keeping the offloading vectors of the other TUEs consistent with the optimal offloading under the original user association.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111531158.8A CN114189521B (en) | 2021-12-15 | 2021-12-15 | Method for collaborative computing offloading in F-RAN architecture |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114189521A CN114189521A (en) | 2022-03-15 |
CN114189521B true CN114189521B (en) | 2024-01-26 |
Family
ID=80543863
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111531158.8A Active CN114189521B (en) | 2021-12-15 | 2021-12-15 | Method for collaborative computing offloading in F-RAN architecture |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114189521B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114973673B (en) * | 2022-05-24 | 2023-07-18 | 华南理工大学 | A task offloading method combining NOMA and content caching in vehicle-road coordination system |
CN116016502A (en) * | 2022-12-01 | 2023-04-25 | 厦门大学 | A Joint Allocation Method of Communication and Computing Resources for Minimizing Offload Cost of VR System |
Citations (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109413724A (en) * | 2018-10-11 | 2019-03-01 | 重庆邮电大学 | A kind of task unloading and Resource Allocation Formula based on MEC |
CN109684075A (en) * | 2018-11-28 | 2019-04-26 | 深圳供电局有限公司 | Method for unloading computing tasks based on edge computing and cloud computing cooperation |
CN110392079A (en) * | 2018-04-20 | 2019-10-29 | 上海无线通信研究中心 | Fog Computing Oriented Node Computing Task Scheduling Method and Device |
CN110543336A (en) * | 2019-08-30 | 2019-12-06 | 北京邮电大学 | Method and device for edge computing task offloading based on non-orthogonal multiple access technology |
CN110719641A (en) * | 2019-10-15 | 2020-01-21 | 南京邮电大学 | A joint optimization method for user offloading and resource allocation in edge computing |
EP3605329A1 (en) * | 2018-07-31 | 2020-02-05 | Commissariat à l'énergie atomique et aux énergies alternatives | Connected cache empowered edge cloud computing offloading |
CN111263401A (en) * | 2020-01-15 | 2020-06-09 | 天津大学 | Multi-user cooperative computing unloading method based on mobile edge computing |
CN111641973A (en) * | 2020-05-29 | 2020-09-08 | 重庆邮电大学 | Load balancing method based on fog node cooperation in fog computing network |
CN111800812A (en) * | 2019-10-10 | 2020-10-20 | 华北电力大学 | Mobile edge computing user access scheme based on non-orthogonal multiple access |
CN111818168A (en) * | 2020-06-19 | 2020-10-23 | 重庆邮电大学 | An adaptive joint computing offloading and resource allocation method in the Internet of Vehicles |
WO2020216135A1 (en) * | 2019-04-25 | 2020-10-29 | 南京邮电大学 | Multi-user multi-mec task unloading resource scheduling method based on edge-end collaboration |
CN111901374A (en) * | 2020-06-19 | 2020-11-06 | 东南大学 | Task unloading method based on alliance game in power Internet of things system |
CN112512063A (en) * | 2020-11-25 | 2021-03-16 | 福州大学 | Resource allocation method for unmanned aerial vehicle assisted edge computing based on radio frequency energy collection |
CN112888002A (en) * | 2021-01-26 | 2021-06-01 | 重庆邮电大学 | Game theory-based mobile edge computing task unloading and resource allocation method |
CN113726858A (en) * | 2021-08-12 | 2021-11-30 | 西安交通大学 | Self-adaptive AR task unloading and resource allocation method based on reinforcement learning |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2017099548A1 (en) * | 2015-12-11 | 2017-06-15 | Lg Electronics Inc. | Method and apparatus for indicating an offloading data size and time duration in a wireless communication system |
TWI628969B (en) * | 2017-02-14 | 2018-07-01 | 國立清華大學 | Joint user clustering and power allocation method and base station using the same |
Non-Patent Citations (5)
Title |
---|
Latency-Driven Fog Cooperation Approach in Fog Radio Access Networks; Te-Chuan Chiu et al.; IEEE; Vol. 12, No. 5; pp. 698-711 * |
Heuristic joint task offloading and resource allocation strategy for multi-server MEC; Lu Ya; Computer Applications and Software (10); pp. 83-90 * |
POST: Parallel Offloading of Splittable Tasks in Heterogeneous Fog Networks; Zening Liu et al.; IEEE; Vol. 7, No. 4; pp. 3170-3183 * |
Joint optimization scheme for task offloading and resource allocation based on MEC; Huang Xiaoge; Cui Yifan; Zhang Dongyu; Chen Qianbin; Systems Engineering and Electronics (06); pp. 1386-1394 * |
Optimized D2D content caching placement scheme under multiple cache-capacity scenarios; Long Yanshan; Wu Dan; Cai Yueming; Wang Meng; Guo Jibin; Journal of Computer Applications (05); pp. 237-241, 246 * |
Also Published As
Publication number | Publication date |
---|---|
CN114189521A (en) | 2022-03-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112492626B (en) | A method for offloading computing tasks for mobile users | |
Zhang et al. | Dynamic task offloading and resource allocation for mobile-edge computing in dense cloud RAN | |
Guo et al. | Mobile-edge computation offloading for ultradense IoT networks | |
Zhou et al. | Computation resource allocation and task assignment optimization in vehicular fog computing: A contract-matching approach | |
CN111447619B (en) | A method for joint task offloading and resource allocation in mobile edge computing networks | |
Zhan et al. | Mobility-aware multi-user offloading optimization for mobile edge computing | |
Wang et al. | HetMEC: Latency-optimal task assignment and resource allocation for heterogeneous multi-layer mobile edge computing | |
Zhang et al. | Energy-efficient offloading for mobile edge computing in 5G heterogeneous networks | |
CN110087318B (en) | Joint task offloading and resource allocation optimization method based on 5G mobile edge computing |
Xia et al. | Data, user and power allocations for caching in multi-access edge computing | |
Bozorgchenani et al. | Centralized and distributed architectures for energy and delay efficient fog network-based edge computing services | |
Deng et al. | Throughput maximization for multiedge multiuser edge computing systems | |
Wang et al. | A high reliable computing offloading strategy using deep reinforcement learning for iovs in edge computing | |
CN112015545B (en) | Task offloading method and system in vehicular edge computing network |
CN114189521B (en) | Method for collaborative computing offloading in F-RAN architecture | |
CN114885418A (en) | Joint optimization method, device and medium for task offloading and resource allocation in 5G ultra-dense network |
CN109756912A (en) | A multi-user multi-base station joint task offloading and resource allocation method | |
CN113364630A (en) | Quality of service (QoS) differentiation optimization method and device | |
Wei et al. | Optimal offloading in fog computing systems with non-orthogonal multiple access | |
Wu et al. | A mobile edge computing-based applications execution framework for Internet of Vehicles | |
CN114885422A (en) | Dynamic edge computing offloading method based on hybrid access mode in ultra-dense network |
Mazza et al. | A cluster based computation offloading technique for mobile cloud computing in smart cities | |
Zhang et al. | Partial Computation Offloading in Satellite-Based Three-Tier Cloud-Edge Integration Networks | |
Lin et al. | Joint offloading decision and resource allocation for multiuser NOMA-MEC systems | |
Xia et al. | Location-aware and delay-minimizing task offloading in vehicular edge computing networks |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||