CN111182570B - User association and edge computing unloading method for improving utility of operator - Google Patents

User association and edge computing unloading method for improving utility of operator Download PDF

Info

Publication number
CN111182570B
CN111182570B CN202010019094.2A CN202010019094A CN111182570B CN 111182570 B CN111182570 B CN 111182570B CN 202010019094 A CN202010019094 A CN 202010019094A CN 111182570 B CN111182570 B CN 111182570B
Authority
CN
China
Prior art keywords
resource allocation
base station
computing
operator
decision
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010019094.2A
Other languages
Chinese (zh)
Other versions
CN111182570A (en
Inventor
景文鹏
张慧雯
路兆铭
温向明
张晶壹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Posts and Telecommunications
Original Assignee
Beijing University of Posts and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Posts and Telecommunications filed Critical Beijing University of Posts and Telecommunications
Priority to CN202010019094.2A priority Critical patent/CN111182570B/en
Publication of CN111182570A publication Critical patent/CN111182570A/en
Application granted granted Critical
Publication of CN111182570B publication Critical patent/CN111182570B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W24/00Supervisory, monitoring or testing arrangements
    • H04W24/02Arrangements for optimising operational condition
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W28/00Network traffic management; Network resource management
    • H04W28/16Central resource management; Negotiation of resources or communication parameters, e.g. negotiating bandwidth or QoS [Quality of Service]
    • H04W28/18Negotiating wireless communication parameters
    • H04W28/20Negotiating bandwidth

Abstract

The embodiment of the disclosure discloses a user association and edge computing unloading method for improving utility of an operator, which comprises the following steps: obtaining mobile node and surrounding multipleCalculating a matching pair set formed by nodes; determining a bandwidth resource allocation strategy b and a computing resource allocation strategy f based on the matching pair set, the bandwidth resources of the computing nodes and the computing resources°Unloading decision lambda, respectively constructing a user preference list and a base station preference list; and when the mobile node establishes communication connection, selecting a computing node from the user preference list to send a task unloading request, and after receiving the task unloading request, if the connection number reaches an upper limit, and the mobile node is matched with the base station preference list, disconnecting the mobile node which is not in the list, and establishing communication connection with the mobile node. The technical scheme not only guarantees the service quality of the mobile node, but also maximizes the potential benefits of operators in the edge computing network.

Description

User association and edge computing unloading method for improving utility of operator
Technical Field
The disclosure relates to the technical field of communication networks, in particular to a user association and edge computing offloading method for improving utility of an operator.
Background
With the popularization of intelligent mobile devices, computation-intensive and delay-sensitive applications such as augmented reality, virtual reality, online games and the like are emerging and are rapidly favored by users. However, the limited computing resources and battery capacity of mobile devices have difficulty meeting the performance requirements of such applications. Meanwhile, the high transmission delay caused by the traditional mobile cloud computing technology is not friendly to the delay sensitive tasks. In order to solve this problem, Mobile Edge Computing (MEC) server has come to work with wireless access points or small base stations to pull cloud Computing resources to the user side, which can reduce transmission delay. User Equipment (UE) can transmit all or part of the computation-intensive tasks to the MEC server for execution by computation offloading, thereby achieving the purpose of alleviating computation and prolonging the battery life of the device. However, the MEC server does not have as rich resources as the cloud server, and how to make effective offloading decisions and resource configuration is crucial.
In order to improve spectral efficiency and quality of service to edge users, network deployment tends to be dense, and the density of MEC servers integrated with access units also increases dramatically. The user selects a proper MEC server to perform calculation unloading, namely user association, which is important for improving the service satisfaction of the user and the benefit of an operator. In conventional communication networks, user association decisions are made based on transmission bandwidth, transmission power and inter-cell interference, which directly affect the communication rate. In the edge computing network, since the limited computing power of the MEC cannot support excessive computing tasks, the user association decision should also take into account the influence of factors such as server computing resources, data volume of the offloaded tasks, and task delay requirements.
Generally, the service requested by the user from the edge server is charged by the operator. From the operator's perspective, higher potential revenue will encourage them to provide better service to the user. However, different tasks have different resource requirements on the edge server, and if the task types are not distinguished, the service satisfaction of the user is greatly reduced by adopting a fixed charging mode. It is therefore necessary to charge different fees for different types of tasks in combination with the actual need.
Most of the current mobile edge computing offloading schemes mostly use the energy consumption or the time delay of the user equipment as a performance optimization index, and neglect the optimization of potential benefits of operators. In addition, most of the existing schemes focus on unloading decision or computing resource management, and less schemes for performing joint optimization on user association are available. In particular, a gap exists in the method of computation offload for task diversity.
Disclosure of Invention
In order to solve the problems in the related art, the embodiments of the present disclosure provide a user association and edge computing offloading method for improving the utility of an operator.
Specifically, the method comprises the following steps:
acquiring a matching pair set formed by a mobile node and a plurality of surrounding computing nodes;
determining a bandwidth resource allocation strategy b, a calculation resource allocation strategy f DEG and an unloading decision lambda based on the matching pair set, the bandwidth resources of the calculation nodes and the calculation resources;
respectively constructing a user preference list and a base station preference list based on a bandwidth resource allocation strategy b, a computing resource allocation strategy f DEG and an unloading decision lambda, wherein the user preference list stores the corresponding relation of the mobile node connected with the computing node, and the base station preference list stores the corresponding relation of the computing node connected with the mobile node;
and when the mobile node establishes communication connection, selecting a computing node from the user preference list to send a task unloading request, and after receiving the task unloading request, if the connection number reaches an upper limit, and the mobile node is matched with the base station preference list, disconnecting the mobile node which is not in the list, and establishing communication connection with the mobile node.
Optionally, the method further comprises: and if the base station preference list is not matched with the mobile node, updating a matching pair set formed by the mobile node and a plurality of surrounding computing nodes, and repeating the steps of constructing the user preference list and the base station preference list to establish communication connection with the mobile node.
Optionally, the determining an offload decision λ based on the set of matching pairs, bandwidth resources of the compute node, and computational resources includes:
constructing an operator revenue model based on the matched pair set, the bandwidth resources of the computing nodes and the computing resources;
determining an offloading decision constraint satisfied by the operator revenue model;
and under the condition of meeting the unloading decision constraint condition, solving the optimal solution of the operator profit model by taking the maximization of the operator profit as a target, and determining an unloading decision lambda according to the optimal solution.
Optionally, the operator revenue model is expressed as:
Figure GDA0003054384980000031
the offload decision constraint is expressed as:
(1)
Figure GDA0003054384980000032
(2)
Figure GDA0003054384980000033
(3)
Figure GDA0003054384980000034
wherein the content of the first and second substances,
Figure GDA0003054384980000035
representing a set of base stations with marginal computing power, M representing base stations,
Figure GDA0003054384980000036
representing a set of user equipments in the network that need to do task offloading, N representing user equipments,
Figure GDA0003054384980000037
represents a set of user equipments associated with base station m;
Figure GDA0003054384980000038
total revenue for the operator;
Nmrepresents the number of user equipments associated with base station m; mu denotes the price charged by the operator per bit of data, lambdan,mRepresenting the duty-off ratio, W, of n users in m base stationsnRepresenting user equipmentn value weight, which can be customized according to the operator's classification of different tasks, DnRepresenting the amount of data of a task to be processed by the user equipment n;
Figure GDA0003054384980000039
representing the cost of the operator, including bandwidth resource cost, computational resource cost and energy resource cost;
ν123weights representing three costs, respectively, bn,m
Figure GDA00030543849800000310
Respectively representing the allocated bandwidth resources and computational resources of user equipment n in base stations m, CnIndicating the number of CPU cycles, P, required to process a task of a 1-bit user equipment nmecRepresenting the power of the mobile edge compute server;
Figure GDA0003054384980000041
representing the locally calculated time delay of the user equipment,
Figure GDA0003054384980000042
representing the local computing power of the user equipment n;
Figure GDA0003054384980000043
an offload delay, R, representing the offloading of a task by a user equipment n to a base station mn,mIndicating the uplink transmission rate, R, at which the user equipment n transmits the task to the base station mn,m=bn,mlog2(1+pnhn,m/(σ2+In,m)),pnRepresenting the uplink transmission power, h, of the user equipment nn,mRepresenting the gain of user equipment n to base station m,
Figure GDA0003054384980000044
representing the maximum tolerated delay of the user equipment n.
Optionally, the determining a bandwidth resource allocation policy b and a computing resource allocation policy f ° based on the set of matching pairs, the bandwidth resources of the computing nodes, and the computing resources comprises:
determining constraint conditions of bandwidth resource allocation decisions and calculation resource allocation decisions which are met by the operator revenue model;
and under the condition of meeting the constraint conditions of the bandwidth resource allocation decision and the calculation resource allocation decision, solving the optimal solution of the operator profit model by taking the maximization of the operator profit as a target, and determining a bandwidth resource allocation strategy b and a calculation resource allocation strategy f according to the optimal solution.
Optionally, the operator revenue model is expressed as:
Figure GDA0003054384980000045
the constraint conditions of the bandwidth resource allocation decision and the computing resource allocation decision are represented as follows:
(1)
Figure GDA0003054384980000046
(2)
Figure GDA0003054384980000047
(3)
Figure GDA0003054384980000048
(4)
Figure GDA0003054384980000049
(5)
Figure GDA00030543849800000410
wherein B represents the total bandwidth of M base stations; fmecRepresenting the total computational resources of the M base stations.
Optionally, the constructing a user preference list based on the bandwidth resource allocation policy b, the computational resource allocation policy f ° and the offloading decision λ includes:
calculating a preference value of the user equipment n for the base station m based on the bandwidth resource allocation strategy b, the calculation resource allocation strategy f DEG and the unloading decision lambda;
and arranging the base stations M in an ascending order according to the preference values to construct a user preference list.
Optionally, the preference value of the user equipment n for the base station m is calculated based on the bandwidth resource allocation policy b, the calculation resource allocation policy f ° and the offloading decision λ, and the following formula is adopted:
Figure GDA0003054384980000051
wherein phi isN(n, m) represents a preference value of the user equipment n for the base station m;
optionally, the constructing a base station preference list based on the bandwidth resource allocation policy b, the calculation resource allocation policy f ° and the offloading decision λ includes:
calculating a preference value of the base station m for the user equipment n based on a bandwidth resource allocation strategy b, a calculation resource allocation strategy f DEG and an unloading decision lambda;
and arranging the user equipment N according to the preference values in a descending order, and constructing a preference list of the base station.
Optionally, the preference value of the base station m for the user equipment n is calculated based on the bandwidth resource allocation policy b, the calculation resource allocation policy f ° and the offloading decision λ, and the following formula is adopted:
Figure GDA0003054384980000052
wherein phi isM(m, n) represents a preference value of the base station m for the user equipment n.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects:
the technical scheme is that a mobile node and a computing node form a matching pair set, and a bandwidth resource allocation strategy b, a computing resource allocation strategy f DEG and an unloading decision lambda are determined based on the matching pair set, the bandwidth resource of the computing node and the computing resource, then a user preference list and a base station preference list are respectively constructed based on the bandwidth resource allocation strategy b, the computing resource allocation strategy f DEG and the unloading decision lambda, wherein the user preference list stores the corresponding relation of the mobile node connected with the computing node, the base station preference list stores the corresponding relation of the computing node connected with the mobile node, then when the mobile node establishes communication connection, the computing node is selected from the user preference list to send a task unloading request, after the computing node receives the task unloading request, if the mobile node is matched with the base station preference list when the connection number reaches the upper limit, disconnecting the mobile nodes which are not in the list and establishing communication connection with the mobile nodes so as to completely or partially unload the computing tasks of the mobile nodes to the corresponding computing nodes for running. According to the technical scheme, on the premise that the time delay requirement of each task in the edge computing network is met, the user association, the unloading decision, the broadband resource allocation strategy and the computing resource allocation strategy are jointly optimized, and for the differentiated computing tasks, the mobile node can be matched with the corresponding computing node to operate, so that the service quality of the mobile node is guaranteed, and the potential benefit maximization of an operator in the edge computing network is realized.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
Other labels, objects and advantages of the present disclosure will become more apparent from the following detailed description of non-limiting embodiments when taken in conjunction with the accompanying drawings. In the drawings:
fig. 1 illustrates a scenario diagram of a user association and edge computing offload method to improve operator utility, according to an embodiment of the disclosure;
FIG. 2 illustrates a flow diagram of a user association and edge computing offload method to improve operator utility in accordance with an embodiment of the disclosure;
fig. 3 shows a flowchart for determining a bandwidth resource allocation policy b, a computing resource allocation policy f ° and an offloading decision λ based on a set of matching pairs, bandwidth resources of a computing node and computing resources according to an embodiment of the disclosure;
fig. 4 shows a flowchart for constructing a user preference list and a base station preference list based on a bandwidth resource allocation policy b, a computational resource allocation policy f ° and an offloading decision λ, respectively, according to an embodiment of the disclosure;
FIG. 5 illustrates a full flow diagram of a user association and edge computing offload method to improve operator utility in accordance with an embodiment of the disclosure;
FIG. 6 shows a schematic diagram of the number of iterations and operator utility in the computation offload method of the present disclosure;
fig. 7 and 8 respectively show operator utility comparison diagrams of the calculation unloading method of the present disclosure and two existing algorithms.
Detailed Description
Hereinafter, exemplary embodiments of the present disclosure will be described in detail with reference to the accompanying drawings so that those skilled in the art can easily implement them. Also, for the sake of clarity, parts not relevant to the description of the exemplary embodiments are omitted in the drawings.
In the present disclosure, it is to be understood that terms such as "including" or "having," etc., are intended to indicate the presence of the disclosed features, numbers, steps, behaviors, components, parts, or combinations thereof, and are not intended to preclude the possibility that one or more other features, numbers, steps, behaviors, components, parts, or combinations thereof may be present or added.
It should be further noted that the embodiments and features of the embodiments in the present disclosure may be combined with each other without conflict. The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
First, terms related to the present disclosure are explained as follows.
The mobile node: there are terminal devices, such as User Equipment (UE), that compute tasks.
The computing node: a server device providing computing power, such as a Mobile Edge Computing (MEC) server.
And MEC: a wireless access network is utilized to provide services or cloud computing capacity required by the UE, so that a high-performance, low-delay and high-bandwidth service environment is created.
Calculating and unloading: and distributing the calculation task to the MEC server for processing, and then retrieving the calculated calculation result from the MEC server. Generally comprising the steps of: 1) MEC computing node discovery, 2) computing task segmentation, 3) unloading decision, 4) segmentation task transmission, 5) MEC computing node computation, and 6) computation result feedback.
And (3) unloading decision: the UE decides whether to offload the computation task and the offload fraction. Offloading decisions typically include: 1) local calculation: the calculation tasks are all completed locally at the UE, 2) the complete unloading: the calculation tasks are completely unloaded to the MEC server for processing, and 3) partial unloading is carried out: after the computing task is divided, part of the computing task is processed locally, and the rest of the computing task is unloaded to the MEC server for processing.
Time delay (i.e., time delay): when the calculation is not unloaded, the time delay is the time spent by the UE in executing the local calculation; when calculation unloading is carried out, the time delay is the sum of the transmission time of the unloaded data to the MEC server, the processing time of the operation of the MEC server and the transmission time of the transmission result of the MEC server.
Energy consumption: when the calculation unloading is not carried out, the energy consumption is the energy consumed by the UE for executing the local calculation; and when calculation unloading is carried out, the energy consumption is the sum of the transmission energy consumption of unloading data to the MEC server and the transmission energy consumption of receiving the transmission result of the MEC server.
And (3) computing resource allocation: according to whether the computing tasks meet the partition performance and the computing resources are distributed in parallel computing, the computing tasks which are not met are distributed to one computing node, and the computing tasks which are met are distributed to run on a plurality of computing nodes.
Fig. 1 illustrates a scenario diagram of a user association and edge computing offload method to improve operator utility, according to an embodiment of the disclosure.
As shown in fig. 1, a mobile node a, a mobile node b, and a mobile node c exist within the coverage area of the base station a. After the mobile node a, the mobile node b and the mobile node c establish communication connection with the base station a, all or part of the calculation tasks can be unloaded to the MEC server to which the base station a belongs for calculation, and the calculation results of the MEC server are received. Similarly, fig. 1 also shows that the mobile nodes a and B are also located in the coverage area of the base station B, and therefore, a communication connection may also be established with the base station B. The mobile node a is also located within the coverage area of the base station C, and thus, can also establish a communication connection with the base station C. The maximum number of mobile nodes that can be connected by the base station a, the base station B, and the base station C is Q, the shared bandwidth of the base station a, the base station B, and the base station C is B, and it is assumed that all the mobile nodes have completed uplink transmission power allocation.
In the present disclosure, for the mobile node a, according to the type of its computation task, different base stations may be connected, and the computation task is offloaded to the corresponding MEC server for computation. For a base station, the number of mobile nodes connected to the base station is limited, and the corresponding mobile nodes need to be selected and connected according to task types. In the application scenario, on the premise that the sum of energy consumption and time delay of UE is minimized by the aid of the offloading decision, the broadband resource allocation strategy and the computing resource allocation strategy are jointly optimized by taking the maximum operator profit as a target, and for differentiated computing tasks, the mobile node can be matched with the corresponding computing node to operate, so that service quality of the mobile node is guaranteed, and potential benefit maximization of the operator in an edge computing network is realized.
Fig. 2 illustrates a flow diagram of a user association and edge computing offload method to improve operator utility, according to an embodiment of the disclosure.
As shown in fig. 2, the method for offloading user association and edge computing for improving utility of an operator includes the following steps S101-S104.
In step S101, a matching pair set Φ formed by the mobile node and a plurality of surrounding computing nodes is acquired.
In the method, a mobile node is numbered first, then a base station with a coverage area containing the mobile node is determined, then a computing node is numbered, and the mobile node and the computing node are matched to form a matching pair set phi.
In step S102, a bandwidth resource allocation policy b, a computation resource allocation policy f ° and an offloading decision λ are determined based on the matching pair set, the bandwidth resources of the computation node and the computation resources.
In this disclosure, as shown in fig. 3, the determining an offload decision λ based on the matching pair set, the bandwidth resources of the computing node, and the computing resources includes the following steps S1021 to S1023:
s1021: constructing an operator revenue model based on the matched pair set, the bandwidth resources of the computing nodes and the computing resources;
s1022: determining an offloading decision constraint satisfied by the operator revenue model;
s1023: and under the condition of meeting the unloading decision constraint condition, solving the optimal solution of the operator profit model by taking the maximization of the operator profit as a target, and determining an unloading decision lambda according to the optimal solution.
In the present disclosure, in step S1021, the operator profit model is expressed as:
Figure GDA0003054384980000091
in step S1022, the uninstall decision constraint is expressed as:
(1)
Figure GDA0003054384980000092
(2)
Figure GDA0003054384980000093
(3)
Figure GDA0003054384980000094
wherein the content of the first and second substances,
Figure GDA0003054384980000095
representing a set of base stations with marginal computing power, M representing base stations,
Figure GDA0003054384980000096
representing a set of user equipments in the network that need to do task offloading, N representing user equipments,
Figure GDA0003054384980000097
represents a set of user equipments associated with base station m;
Figure GDA0003054384980000098
total revenue for the operator;
Nmrepresents the number of user equipments associated with base station m; mu denotes the price charged by the operator per bit of data, lambdan,mRepresenting the duty-off ratio, W, of n users in m base stationsnRepresenting the value weight of the user equipment n, which can be customized according to the operator's classification of different tasks, DnRepresents the amount of data (in bits) of a task to be processed by the user equipment n;
Figure GDA0003054384980000101
representing the cost of the operator, including bandwidth resource cost, computational resource cost and energy resource cost;
ν123weights representing three costs, respectively, bn,m
Figure GDA0003054384980000102
Respectively representing the allocated bandwidth resources and computational resources of user equipment n in base stations m, CnIndicating the number of CPU cycles (cycles/bit) required to process a task of a 1-bit user equipment n, PmecRepresents the power (in W/CPU cycles) of the mobile edge computing server;
wherein the constraints (1) and (2) are time delay service quality constraints of the mobile node,constraint (3) indicates that the unload ratio is between 0-1,
Figure GDA0003054384980000103
representing the locally calculated time delay of the user equipment,
Figure GDA0003054384980000104
representing the local computing power of the user equipment n;
Figure GDA0003054384980000105
an offload delay, R, representing the offloading of a task by a user equipment n to a base station mn,mIndicating the uplink transmission rate, R, at which the user equipment n transmits the task to the base station mn,m=bn,mlog2(1+pnhn,m/(σ2+In,m)),pnRepresenting the uplink transmission power, h, of the user equipment nn,mRepresenting the gain of user equipment n to base station m,
Figure GDA0003054384980000106
representing the maximum tolerated delay of the user equipment n.
In step S1023, solving the optimal solution problem of the operator revenue model may be regarded as solving a linear planning problem, and may be solved by a dual simplex method, which specifically refers to the prior art, and is not described in detail in this disclosure.
In this disclosure, with continued reference to fig. 3, the determining a bandwidth resource allocation policy b and a computing resource allocation policy f ° based on the matching pair set, the bandwidth resources of the computing nodes, and the computing resources includes the following steps S1024-S1025:
s1024: determining constraint conditions of bandwidth resource allocation decisions and calculation resource allocation decisions which are met by the operator revenue model;
s1025: and under the condition of meeting the constraint conditions of the bandwidth resource allocation decision and the calculation resource allocation decision, solving the optimal solution of the operator profit model by taking the maximization of the operator profit as a target, and determining a bandwidth resource allocation strategy b and a calculation resource allocation strategy f according to the optimal solution.
In the present disclosure, the operator profit model is expressed as follows, as in step S1021:
Figure GDA0003054384980000111
in step S1024, the constraint conditions of the bandwidth resource allocation decision and the computing resource allocation decision are expressed as:
(1)
Figure GDA0003054384980000112
(2)
Figure GDA0003054384980000113
(3)
Figure GDA0003054384980000114
(4)
Figure GDA0003054384980000115
(5)
Figure GDA0003054384980000116
wherein B represents the total bandwidth of M base stations; fmecRepresenting the total computational resources of the M base stations.
In step S1025, solving the optimal solution problem of the operator revenue model may be regarded as solving a geometric planning problem, which may be equivalently converted into a convex optimization problem form, and solved by an interior point method, which may specifically refer to the prior art, and this disclosure will not be described herein again.
In step S103, a user preference list and a base station preference list are respectively constructed based on the bandwidth resource allocation policy b, the calculation resource allocation policy f ° and the offloading decision λ, wherein the user preference list stores a corresponding relationship of the mobile node to the calculation node, and the base station preference list stores a corresponding relationship of the calculation node to the mobile node.
In this disclosure, as shown in fig. 4, the constructing a user preference list based on the bandwidth resource allocation policy b, the calculation resource allocation policy f ° and the offloading decision λ includes the following steps S1031 to S1032:
s1031: calculating a preference value of the user equipment n for the base station m based on the bandwidth resource allocation strategy b, the calculation resource allocation strategy f DEG and the unloading decision lambda;
s1032: and arranging the base stations M in an ascending order according to the preference values to construct a user preference list.
In this disclosure, in step S1031, that is, the preference value of the user equipment n for the base station m is calculated based on the bandwidth resource allocation policy b, the calculation resource allocation policy f ° and the offloading decision λ, using the following formula:
Figure GDA0003054384980000121
wherein phi isN(n, m) represents a preference value of the user equipment n for the base station m.
In step S1032, the user preference list is sorted according to the ascending order of preference values, specifically, the preference list LN(n)=[1,…,m,…M]The corresponding preference values are ordered as
φM(n,1)≤…≤φM(n,m)≤…≤φM(n,M)。
In this disclosure, with continued reference to fig. 4, the constructing a preference list of base stations based on the bandwidth resource allocation policy b, the calculation resource allocation policy f ° and the offloading decision λ includes the following steps S1033 to S1034:
s1033: calculating a preference value of the base station m for the user equipment n based on a bandwidth resource allocation strategy b, a calculation resource allocation strategy f DEG and an unloading decision lambda;
s1034: and arranging the user equipment N according to the preference values in a descending order, and constructing a preference list of the base station.
In this disclosure, in step S1033, that is, the preference value of the base station m for the user equipment n is calculated based on the bandwidth resource allocation policy b, the calculation resource allocation policy f ° and the offloading decision λ, using the following formula:
Figure GDA0003054384980000122
wherein phi isM(m, n) represents a preference value of the base station m for the user equipment n.
In step S1034, the preference lists of the base stations are arranged according to a descending order of preference values, specifically, the preference list LM(m)=[1,…,n,…N]The corresponding preference values are ordered as
φM(m,1)≥…≥φM(m,n)≥…≥φM(m,M)。
In step S104, when the mobile node establishes a communication connection, a computing node is selected from the user preference list to send a task offloading request, and after the computing node receives the task offloading request, when the connection number reaches an upper limit, if the mobile node matches the base station preference list, the mobile node that is not in the list is disconnected, and a communication connection with the mobile node is established.
In the present disclosure, the mobile node may send task offload requests to the computing node in sequence according to the ascending order of the base stations in the user preference list. The computing node may match the corresponding mobile node according to the order of the user equipments arranged in descending order in the preference list of the base station, and directly establish a communication connection when the number of connections of the base station does not reach the upper limit Q. And when the connection number of the base station reaches the upper limit Q, disconnecting the user equipment which is not in the list or disconnecting the user equipment which is sequenced according to the sequence of the user equipment in the preference list of the base station, and further accessing new user equipment, thereby improving the income of operators.
According to an embodiment of the present disclosure, the method further comprises: and if the base station preference list is not matched with the mobile node, updating a matching pair set formed by the mobile node and a plurality of surrounding computing nodes, and repeating the steps of constructing the user preference list and the base station preference list to establish communication connection with the mobile node.
In the method, for a mobile node which cannot be matched with any computing node, a matching pair set formed by the mobile node and a plurality of surrounding computing nodes is updated firstly, then the steps S102-S103 are executed, the updating of the user preference list and the base station preference list is completed, finally, the mobile node sends a task unloading request to the computing node according to the updated user preference list, the computing node establishes communication connection with the mobile node according to the updated base station preference list, the unmatched mobile node is finally matched with the corresponding computing node, the computing task unloading is carried out, and the income of an operator is improved.
Fig. 5 illustrates a complete flow diagram of a user association and edge computing offload method to improve operator utility in accordance with an embodiment of the disclosure.
As shown in fig. 5, when the method for offloading user association and edge computation for improving utility of an operator is executed, the parameters b, f °, λ, and Φ are initialized first, where the user equipment may allocate bandwidth resources and computation resources equally as initial values of b and f °. The ue may randomly take a value in the (0, 1) interval as an initial value of the offloading decision. And the user equipment selects the base stations which are associated in a close range to pair as the initial value of the matching pair set phi. Setting two iterators T1 and T2 while initializing the parameters, and setting the maximum iteration number as T;
secondly, calculating the task unloading ratio of each user equipment according to the operator profit model established in the step S1021, and updating lambda;
then, according to the updated lambda, calculating a resource allocation strategy of each user equipment, updating a broadband resource strategy b and a calculation resource strategy f DEG, and adding 1 to an iterator t 1;
then judging whether the parameters lambda, b and f degrees are converged or whether the iteration times reach T, if not, continuously repeating the step of updating lambda, b and f degrees, if so, respectively calculating preference function values of the user equipment and the base station to obtain a user preference list and a base station preference list;
selecting unmatched user equipment, sending a task unloading request to a base station based on a user preference list, determining whether the connection number reaches an upper limit by the base station, if not, receiving the task unloading request, if so, screening all the user equipment requesting to establish connection according to the base station preference list, and updating the user preference list of the rejected user equipment;
judging whether all the user equipment are completely matched, if so, finishing the matching, updating phi, adding 1 to the iterator t2, if not, returning to the step of selecting the user equipment which is not matched, sending a task unloading request to the base station based on the user preference list, and continuing to execute the subsequent steps after all the user equipment are matched;
and finally, judging whether phi converges or the iteration number reaches T, if so, ending the whole process, otherwise, returning to the step of calculating the task unloading ratio of each user equipment and updating lambda, and repeatedly updating iterators T1 and T2 until the whole process ends.
Fig. 6 shows a schematic diagram of the number of iterations and the utility of the operator in the calculation offloading method of the present disclosure. As can be seen from the overall curve of fig. 6, convergence is achieved when the number of iterations is about 5 in the calculation offloading method of the present disclosure.
Fig. 7 and 8 respectively show operator utility comparison diagrams of the calculation unloading method of the present disclosure and two existing algorithms. In fig. 7 and 8, the comparison algorithm 1 is a mode of performing no user association decision optimization and adopting a determined pre-allocation of the base station according to the distance, and the comparison algorithm 2 is a mode of performing no resource allocation optimization and adopting a mode of equally dividing all bandwidth resources and computing resources for all users. As can be seen from the simulation results of fig. 7, the utility of the operator after the calculation offloading method of the present disclosure is significantly better than that of the comparative algorithms 1 and 2, and the advantage is more significant as the number of cells increases.
As can be seen from fig. 8, when the computing resources of the server are less than 1.5 x 109In time, the operator utility obtained by the disclosed computation offload method and existing algorithms both increases with the increase in computational resources. As the computing resources of the server continue to increase, the operator utility achieved by the computing offload method and comparative algorithm 1 of the present disclosure does not vary much, as the computing resources increaseAlthough the processing delay of the edge server can be reduced when the data is added to a certain degree, the transmission delay cannot be offset, so that the data volume of unloading cannot be increased, and the utility change of an operator tends to be stable. The comparison algorithm 2 employs an average resource allocation, which wastes a large amount of computing resources, increases the cost of the operator, and causes a decrease in utility. Fig. 7 and 8 verify the importance of jointly optimizing resource allocation and user association to improve operator utility in edge-computing wireless networks.
The foregoing description is only exemplary of the preferred embodiments of the disclosure and is illustrative of the principles of the technology employed. It will be appreciated by a person skilled in the art that the scope of the invention as referred to in the present disclosure is not limited to the specific combination of the above-mentioned features, but also covers other embodiments formed by any combination of the above-mentioned features or their equivalents without departing from the inventive concept. For example, the above features and (but not limited to) the features disclosed in this disclosure having similar functions are replaced with each other to form the technical solution.

Claims (10)

1. The user association and edge computing offloading method for improving utility of an operator is characterized by comprising the following steps:
acquiring a matching pair set formed by a mobile node and a plurality of surrounding computing nodes;
constructing an operator revenue model based on the matched pair set, the bandwidth resources of the computing nodes and the computing resources; determining an offloading decision constraint satisfied by the operator revenue model; determining an offload decision λ with a goal of maximizing the operator revenue under a condition that the offload decision constraint is met; determining constraint conditions of bandwidth resource allocation decisions and calculation resource allocation decisions which are met by the operator revenue model; determining a bandwidth resource allocation policy b and a computational resource allocation policy f with the goal of maximizing the operator revenue under the condition of meeting the constraint conditions of the bandwidth resource allocation decision and the computational resource allocation decisiono
Based on bandwidth resource allocation strategy b and computing resource allocation strategyfoThe method comprises the steps that an unloading decision lambda is established, a user preference list and a base station preference list are respectively established, wherein the user preference list stores the corresponding relation of a mobile node connected with a computing node, and the base station preference list stores the corresponding relation of the computing node connected with the mobile node;
and when the mobile node establishes communication connection, selecting a computing node from the user preference list to send a task unloading request, and after receiving the task unloading request, if the connection number reaches an upper limit, and the mobile node is matched with the base station preference list, disconnecting the mobile node which is not in the list, and establishing communication connection with the mobile node.
2. The method of claim 1, further comprising: and if the base station preference list is not matched with the mobile node, updating a matching pair set formed by the mobile node and a plurality of surrounding computing nodes, and repeating the steps of constructing the user preference list and the base station preference list to establish communication connection with the mobile node.
3. The method of claim 1, wherein determining an offload decision λ based on the set of matched pairs, bandwidth resources of the compute node, and computational resources comprises:
constructing an operator revenue model based on the matched pair set, the bandwidth resources of the computing nodes and the computing resources;
determining an offloading decision constraint satisfied by the operator revenue model;
and under the condition of meeting the unloading decision constraint condition, solving the optimal solution of the operator profit model by taking the maximization of the operator profit as a target, and determining an unloading decision lambda according to the optimal solution.
4. The method of claim 3, wherein the operator revenue model is expressed as:
Figure FDA0003054384970000021
the offload decision constraint is expressed as:
(1)
Figure FDA0003054384970000022
(2)
Figure FDA0003054384970000023
(3)
Figure FDA0003054384970000024
wherein the content of the first and second substances,
Figure FDA0003054384970000025
representing a set of base stations with marginal computing power, M representing base stations,
Figure FDA0003054384970000026
representing a set of user equipments in the network that need to do task offloading, N representing user equipments,
Figure FDA0003054384970000027
represents a set of user equipments associated with base station m;
Figure FDA0003054384970000028
total revenue for the operator;
Nmrepresents the number of user equipments associated with base station m; mu denotes the price charged by the operator per bit of data, lambdan,mRepresenting the duty-off ratio, W, of n users in m base stationsnRepresenting the value weight of the user equipment n, which can be customized according to the operator's classification of different tasks, DnRepresenting the amount of data of a task to be processed by the user equipment n;
Figure FDA0003054384970000029
representing the cost of the operator, including bandwidth resource cost, computational resource cost and energy resource cost;
v1,v2,v3weights representing three costs, respectively, bn,m
Figure FDA00030543849700000210
Respectively representing the allocated bandwidth resources and computational resources of user equipment n in base stations m, CnIndicating the number of CPU cycles, P, required to process a task of a 1-bit user equipment nmecRepresenting the power of the mobile edge compute server;
Figure FDA00030543849700000211
representing the locally calculated time delay of the user equipment,
Figure FDA00030543849700000212
representing the local computing power of the user equipment n;
Figure FDA00030543849700000213
an offload delay, R, representing the offloading of a task by a user equipment n to a base station mn,mIndicating the uplink transmission rate, R, at which the user equipment n transmits the task to the base station mn,m=bn,mlog2(1+pnhn,m/(σ2+In,m)),pnRepresenting the uplink transmission power, h, of the user equipment nn,mRepresenting the gain of user equipment n to base station m,
Figure FDA0003054384970000031
representing the maximum tolerated delay of the user equipment n.
5. The method of claim 3,determining a bandwidth resource allocation policy b and a computational resource allocation policy f based on the set of matching pairs, the bandwidth resources of the computational nodes and the computational resourcesoThe method comprises the following steps:
determining constraint conditions of bandwidth resource allocation decisions and calculation resource allocation decisions which are met by the operator revenue model;
under the condition of meeting the constraint conditions of the bandwidth resource allocation decision and the calculation resource allocation decision, solving the optimal solution of the operator profit model by taking the maximization of the operator profit as a target, and determining a bandwidth resource allocation strategy b and a calculation resource allocation strategy f according to the optimal solutiono
6. The method of claim 3, wherein the operator revenue model is expressed as:
Figure FDA0003054384970000032
the constraint conditions of the bandwidth resource allocation decision and the computing resource allocation decision are represented as follows:
(I)
Figure FDA0003054384970000033
(2)
Figure FDA0003054384970000034
(3)
Figure FDA0003054384970000035
(4)
Figure FDA0003054384970000036
(5)
Figure FDA0003054384970000037
wherein B represents the total bandwidth of M base stations; fmecRepresenting the total computational resources of the M base stations.
7. The method of claim 6, wherein the bandwidth-based resource allocation policy b and the computational resource allocation policy f are defined asoAnd an offload decision λ, constructing a user preference list, comprising:
based on bandwidth resource allocation strategy b and calculation resource allocation strategy foAnd calculating the preference value of the user equipment n to the base station m by the unloading decision lambda;
and arranging the base stations M in an ascending order according to the preference values to construct a user preference list.
8. The method of claim 7, wherein the bandwidth-based resource allocation policy b and the computational resource allocation policy f are defined asoAnd calculating the preference value of the user equipment n to the base station m by using the unloading decision lambda, wherein the following formula is adopted:
Figure FDA0003054384970000041
wherein phi isN(n, m) represents a preference value of the user equipment n for the base station m.
9. The method of claim 6, wherein the bandwidth-based resource allocation policy b and the computational resource allocation policy f are defined asoAnd an offload decision λ, constructing a base station preference list, comprising:
based on bandwidth resource allocation strategy b and calculation resource allocation strategy foCalculating the preference value of the base station m to the user equipment n by the unloading decision lambda;
and arranging the user equipment N according to the preference values in a descending order, and constructing a preference list of the base station.
10. The method of claim 9, wherein the bandwidth-based resourceDistribution strategy b and calculation resource distribution strategy foAnd calculating the preference value of the base station m for the user equipment n by using the unloading decision lambda, wherein the following formula is adopted:
Figure FDA0003054384970000042
wherein phi isM(m, n) represents a preference value of the base station m for the user equipment n.
CN202010019094.2A 2020-01-08 2020-01-08 User association and edge computing unloading method for improving utility of operator Active CN111182570B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010019094.2A CN111182570B (en) 2020-01-08 2020-01-08 User association and edge computing unloading method for improving utility of operator

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010019094.2A CN111182570B (en) 2020-01-08 2020-01-08 User association and edge computing unloading method for improving utility of operator

Publications (2)

Publication Number Publication Date
CN111182570A CN111182570A (en) 2020-05-19
CN111182570B true CN111182570B (en) 2021-06-22

Family

ID=70652612

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010019094.2A Active CN111182570B (en) 2020-01-08 2020-01-08 User association and edge computing unloading method for improving utility of operator

Country Status (1)

Country Link
CN (1) CN111182570B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111884829B (en) * 2020-06-19 2022-07-29 西安电子科技大学 Method for maximizing profit of multi-unmanned aerial vehicle architecture
CN112202847B (en) * 2020-09-14 2022-03-22 重庆邮电大学 Server resource allocation method based on mobile edge calculation
CN112491957B (en) * 2020-10-27 2021-10-08 西安交通大学 Distributed computing unloading method and system under edge network environment
CN113015217B (en) * 2021-02-07 2022-05-20 重庆邮电大学 Edge cloud cooperation low-cost online multifunctional business computing unloading method
CN113206876B (en) * 2021-04-28 2023-01-06 南京航空航天大学 Service redeployment method for dual-label perception in mobile edge computing environment
CN113453216B (en) * 2021-06-16 2023-09-05 中国联合网络通信集团有限公司 Method and device for determining user terminal equipment
CN113377516B (en) * 2021-06-22 2022-10-25 华南理工大学 Centralized scheduling method and system for unloading vehicle tasks facing edge computing
CN114375011B (en) * 2021-06-28 2022-09-20 山东华科信息技术有限公司 Matching theory-based power distribution Internet of things task unloading method
CN113660303B (en) * 2021-07-02 2024-03-22 山东师范大学 Task unloading method and system for end-edge network cloud cooperation
CN113608848B (en) * 2021-07-28 2024-02-27 西北大学 Cloud-edge cooperative edge computing task allocation method, system and storage medium
CN113961266B (en) * 2021-10-14 2023-08-22 湘潭大学 Task unloading method based on bilateral matching under edge cloud cooperation

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9775045B2 (en) * 2015-09-11 2017-09-26 Intel IP Corporation Slicing architecture for wireless communication
CN106534333B (en) * 2016-11-30 2019-07-12 北京邮电大学 A kind of two-way choice calculating discharging method based on MEC and MCC
US10037231B1 (en) * 2017-06-07 2018-07-31 Hong Kong Applied Science and Technology Research Institute Company Limited Method and system for jointly determining computational offloading and content prefetching in a cellular communication system
EP3457664B1 (en) * 2017-09-14 2019-11-06 Deutsche Telekom AG Method and system for finding a next edge cloud for a mobile user
CN107819840B (en) * 2017-10-31 2020-05-26 北京邮电大学 Distributed mobile edge computing unloading method in ultra-dense network architecture
CN108600299B (en) * 2018-03-02 2021-08-24 中国科学院上海微系统与信息技术研究所 Distributed multi-user computing task unloading method and system
US11388222B2 (en) * 2018-05-04 2022-07-12 Verizon Patent And Licensing Inc. Mobile edge computing
CN109814951B (en) * 2019-01-22 2021-09-28 南京邮电大学 Joint optimization method for task unloading and resource allocation in mobile edge computing network
CN109819046B (en) * 2019-02-26 2021-11-02 重庆邮电大学 Internet of things virtual computing resource scheduling method based on edge cooperation
CN109788069B (en) * 2019-02-27 2021-02-12 电子科技大学 Computing unloading method based on mobile edge computing in Internet of things
CN110062026A (en) * 2019-03-15 2019-07-26 重庆邮电大学 Mobile edge calculations resources in network distribution and calculating unloading combined optimization scheme
CN110505644B (en) * 2019-09-26 2021-09-10 江南大学 User task unloading and resource allocation joint optimization method

Also Published As

Publication number Publication date
CN111182570A (en) 2020-05-19

Similar Documents

Publication Publication Date Title
CN111182570B (en) User association and edge computing unloading method for improving utility of operator
WO2022121097A1 (en) Method for offloading computing task of mobile user
CN109947545B (en) Task unloading and migration decision method based on user mobility
CN111586720B (en) Task unloading and resource allocation combined optimization method in multi-cell scene
CN111586696B (en) Resource allocation and unloading decision method based on multi-agent architecture reinforcement learning
CN111163519B (en) Wireless body area network resource allocation and task offloading method with maximized system benefit
CN110098969B (en) Fog computing task unloading method for Internet of things
CN111132191B (en) Method for unloading, caching and resource allocation of joint tasks of mobile edge computing server
CN111641973B (en) Load balancing method based on fog node cooperation in fog computing network
Zhao et al. Task proactive caching based computation offloading and resource allocation in mobile-edge computing systems
CN112105062A (en) Mobile edge computing network energy consumption minimization strategy method under time-sensitive condition
CN110505644A (en) User task unloading and resource allocation joint optimization method under 5G super-intensive heterogeneous network
CN114126066B (en) MEC-oriented server resource allocation and address selection joint optimization decision method
CN111200831B (en) Cellular network computing unloading method fusing mobile edge computing
CN111836284B (en) Energy consumption optimization calculation and unloading method and system based on mobile edge calculation
Krolikowski et al. A decomposition framework for optimal edge-cache leasing
Zhang et al. DMRA: A decentralized resource allocation scheme for multi-SP mobile edge computing
CN113918240A (en) Task unloading method and device
Krolikowski et al. Optimal cache leasing from a mobile network operator to a content provider
CN113992677A (en) MEC calculation unloading method for delay and energy consumption joint optimization
CN115396953A (en) Calculation unloading method based on improved particle swarm optimization algorithm in mobile edge calculation
Sun et al. A joint learning and game-theoretic approach to multi-dimensional resource management in fog radio access networks
Wang et al. Joint service caching, resource allocation and computation offloading in three-tier cooperative mobile edge computing system
Lan et al. A hierarchical game for joint wireless and cloud resource allocation in mobile edge computing system
Chen et al. User-centric cooperative MEC service offloading

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant