CN111182570A - User association and edge computing offloading method for improving operator utility - Google Patents

User association and edge computing offloading method for improving operator utility

Info

Publication number: CN111182570A
Authority: CN (China)
Prior art keywords: resource allocation, base station, computing, operator, user equipment
Legal status: Granted
Application number: CN202010019094.2A
Other languages: Chinese (zh)
Other versions: CN111182570B (en)
Inventors: 景文鹏, 张慧雯, 路兆铭, 温向明, 张晶壹
Current Assignee: Beijing University of Posts and Telecommunications
Original Assignee: Beijing University of Posts and Telecommunications
Application filed by Beijing University of Posts and Telecommunications
Priority to CN202010019094.2A
Publication of CN111182570A
Application granted
Publication of CN111182570B
Legal status: Active

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W 24/00: Supervisory, monitoring or testing arrangements
    • H04W 24/02: Arrangements for optimising operational condition
    • H04W 28/00: Network traffic management; Network resource management
    • H04W 28/16: Central resource management; Negotiation of resources or communication parameters, e.g. negotiating bandwidth or QoS [Quality of Service]
    • H04W 28/18: Negotiating wireless communication parameters
    • H04W 28/20: Negotiating bandwidth

Abstract

The embodiments of the disclosure disclose a user association and edge computing offloading method for improving operator utility, comprising the following steps: acquiring a set of matching pairs formed by a mobile node and a plurality of surrounding computing nodes; determining a bandwidth resource allocation strategy b, a computing resource allocation strategy f^o and an offloading decision λ based on the matching pair set and the bandwidth and computing resources of the computing nodes; constructing a user preference list and a base station preference list from b, f^o and λ; and, when the mobile node establishes a communication connection, selecting a computing node from the user preference list and sending it a task offloading request. After the computing node receives the task offloading request, if its number of connections has reached the upper limit and the mobile node matches the base station preference list, a mobile node that is not in the list is disconnected and a communication connection is established with the requesting mobile node. The technical solution not only guarantees the quality of service of the mobile nodes but also maximizes the potential revenue of the operator in the edge computing network.

Description

User association and edge computing offloading method for improving operator utility
Technical Field
The disclosure relates to the technical field of communication networks, and in particular to a user association and edge computing offloading method for improving operator utility.
Background
With the popularization of intelligent mobile devices, computation-intensive and delay-sensitive applications such as augmented reality, virtual reality and online gaming are emerging and quickly gaining favor with users. However, the limited computing resources and battery capacity of mobile devices can hardly meet the performance requirements of such applications. Meanwhile, the high transmission delay introduced by traditional mobile cloud computing is unfriendly to delay-sensitive tasks. To address this problem, Mobile Edge Computing (MEC) servers are deployed together with wireless access points or small base stations, pulling cloud computing resources to the user side and thereby reducing transmission delay. User Equipment (UE) can transmit all or part of its computation-intensive tasks to the MEC server for execution through computation offloading, which relieves the computing burden and prolongs the battery life of the device. However, an MEC server does not have resources as abundant as a cloud server, so making effective offloading decisions and resource configurations is crucial.
In order to improve spectral efficiency and the quality of service for edge users, network deployment tends to be dense, and the density of MEC servers integrated with access units also increases dramatically. Selecting a suitable MEC server for computation offloading, i.e., user association, is important for improving both user service satisfaction and operator revenue. In conventional communication networks, user association decisions are made based on transmission bandwidth, transmission power and inter-cell interference, which directly affect the communication rate. In an edge computing network, since the limited computing power of the MEC cannot support too many computing tasks, the user association decision should also take into account factors such as server computing resources, the data volume of the offloaded tasks, and task delay requirements.
Generally, the services a user requests from an edge server are charged by the operator. From the operator's perspective, higher potential revenue encourages it to provide better service to users. However, different tasks place different resource demands on the edge server; if task types are not distinguished, a fixed charging mode greatly reduces user service satisfaction. It is therefore necessary to charge different fees for different types of tasks according to actual demand.
Most current mobile edge computing offloading schemes use the energy consumption or delay of the user equipment as the performance optimization index and neglect the optimization of the operator's potential revenue. In addition, most existing schemes focus on offloading decisions or computing resource management, and few jointly optimize user association. In particular, computation offloading methods that account for task diversity are still lacking.
Disclosure of Invention
In order to solve the problems in the related art, embodiments of the present disclosure provide a user association and edge computing offloading method for improving operator utility.
Specifically, the method comprises the following steps:
acquiring a set of matching pairs formed by a mobile node and a plurality of surrounding computing nodes;
determining a bandwidth resource allocation strategy b, a computing resource allocation strategy f^o and an offloading decision λ based on the matching pair set and the bandwidth resources and computing resources of the computing nodes;
constructing a user preference list and a base station preference list respectively based on the bandwidth resource allocation strategy b, the computing resource allocation strategy f^o and the offloading decision λ, wherein the user preference list stores the correspondence of the mobile node to the computing nodes to be connected, and the base station preference list stores the correspondence of the computing node to the mobile nodes to be connected;
and when the mobile node establishes a communication connection, selecting a computing node from the user preference list and sending it a task offloading request; after the computing node receives the task offloading request, if its number of connections has reached the upper limit and the mobile node matches the base station preference list, disconnecting a mobile node that is not in the list and establishing a communication connection with the requesting mobile node.
Optionally, the method further comprises: if the base station preference list does not match the mobile node, updating the set of matching pairs formed by the mobile node and the plurality of surrounding computing nodes, and repeating the steps of constructing the user preference list and the base station preference list so as to establish a communication connection with the mobile node.
Optionally, determining the offloading decision λ based on the matching pair set, the bandwidth resources of the computing nodes and the computing resources comprises:
constructing an operator revenue model based on the matching pair set, the bandwidth resources of the computing nodes and the computing resources;
determining the offloading decision constraints satisfied by the operator revenue model;
and, subject to the offloading decision constraints, solving the operator revenue model for its optimal solution with the objective of maximizing operator revenue, and determining the offloading decision λ from the optimal solution.
Optionally, the operator revenue model is expressed as:

U = R − C,
R = Σ_{m∈ℳ} Σ_{n∈𝒩_m} μ·W_n·λ_{n,m}·D_n,
C = Σ_{m∈ℳ} Σ_{n∈𝒩_m} (ν_1·b_{n,m} + ν_2·f^o_{n,m} + ν_3·λ_{n,m}·D_n·C_n·P_mec),

and the offloading decision constraints are expressed as:

(1) t^l_n ≤ T^max_n, for all n ∈ 𝒩
(2) t^o_{n,m} ≤ T^max_n, for all m ∈ ℳ, n ∈ 𝒩_m
(3) 0 ≤ λ_{n,m} ≤ 1

wherein ℳ denotes the set of base stations with edge computing capability and M their number; 𝒩 denotes the set of user equipments in the network that need task offloading and N their number; 𝒩_m denotes the set of user equipments associated with base station m and N_m their number; R denotes the total revenue of the operator; μ denotes the price charged by the operator per bit of data; λ_{n,m} denotes the offloading ratio of user equipment n at base station m; W_n denotes the value weight of user equipment n, which can be customized according to the operator's classification of different tasks; D_n denotes the data amount of the task to be processed by user equipment n; C denotes the cost of the operator, including the bandwidth resource cost, the computing resource cost and the energy resource cost; ν_1, ν_2 and ν_3 denote the weights of the three costs, respectively; b_{n,m} and f^o_{n,m} denote the bandwidth resources and the computing resources allocated to user equipment n at base station m, respectively; C_n denotes the number of CPU cycles required to process one bit of the task of user equipment n; P_mec denotes the power of the mobile edge computing server; t^l_n denotes the local computing delay of user equipment n and f^l_n denotes the local computing capability of user equipment n; t^o_{n,m} denotes the offloading delay when user equipment n offloads a task to base station m; R_{n,m} denotes the uplink transmission rate at which user equipment n transmits the task to base station m, R_{n,m} = b_{n,m}·log2(1 + p_n·h_{n,m}/(σ² + I_{n,m})), where p_n denotes the uplink transmission power of user equipment n, h_{n,m} denotes the channel gain from user equipment n to base station m, σ² denotes the noise power and I_{n,m} denotes the interference; and T^max_n denotes the maximum tolerable delay of user equipment n.
Optionally, determining the bandwidth resource allocation strategy b and the computing resource allocation strategy f^o based on the matching pair set, the bandwidth resources of the computing nodes and the computing resources comprises:
determining the constraints of the bandwidth resource allocation decision and the computing resource allocation decision satisfied by the operator revenue model;
and, subject to the constraints of the bandwidth resource allocation decision and the computing resource allocation decision, solving the operator revenue model for its optimal solution with the objective of maximizing operator revenue, and determining the bandwidth resource allocation strategy b and the computing resource allocation strategy f^o from the optimal solution.
Optionally, the operator revenue model is expressed as:

U = R − C, with R = Σ_{m∈ℳ} Σ_{n∈𝒩_m} μ·W_n·λ_{n,m}·D_n and C = Σ_{m∈ℳ} Σ_{n∈𝒩_m} (ν_1·b_{n,m} + ν_2·f^o_{n,m} + ν_3·λ_{n,m}·D_n·C_n·P_mec),

and the constraints of the bandwidth resource allocation decision and the computing resource allocation decision are expressed as:

(1) t^l_n ≤ T^max_n, for all n ∈ 𝒩
(2) t^o_{n,m} ≤ T^max_n, for all m ∈ ℳ, n ∈ 𝒩_m
(3) Σ_{m∈ℳ} Σ_{n∈𝒩_m} b_{n,m} ≤ B
(4) Σ_{m∈ℳ} Σ_{n∈𝒩_m} f^o_{n,m} ≤ F_mec
(5) b_{n,m} ≥ 0 and f^o_{n,m} ≥ 0

wherein B denotes the total bandwidth of the M base stations and F_mec denotes the total computing resources of the M base stations.
Optionally, constructing the user preference list based on the bandwidth resource allocation strategy b, the computing resource allocation strategy f^o and the offloading decision λ comprises:
calculating the preference value of user equipment n for base station m based on the bandwidth resource allocation strategy b, the computing resource allocation strategy f^o and the offloading decision λ;
and arranging the base stations m in ascending order of the preference values to construct the user preference list.
Optionally, the preference value of user equipment n for base station m calculated based on the bandwidth resource allocation strategy b, the computing resource allocation strategy f^o and the offloading decision λ is denoted φ_N(n, m).
Optionally, constructing the base station preference list based on the bandwidth resource allocation strategy b, the computing resource allocation strategy f^o and the offloading decision λ comprises:
calculating the preference value of base station m for user equipment n based on the bandwidth resource allocation strategy b, the computing resource allocation strategy f^o and the offloading decision λ;
and arranging the user equipments n in descending order of the preference values to construct the base station preference list.
Optionally, the preference value of base station m for user equipment n calculated based on the bandwidth resource allocation strategy b, the computing resource allocation strategy f^o and the offloading decision λ is denoted φ_M(m, n).
The technical solutions provided by the embodiments of the present disclosure can have the following beneficial effects:
In the technical solution, the mobile node and the computing nodes form a set of matching pairs; a bandwidth resource allocation strategy b, a computing resource allocation strategy f^o and an offloading decision λ are determined based on the matching pair set, the bandwidth resources of the computing nodes and the computing resources; a user preference list and a base station preference list are then constructed respectively based on the bandwidth resource allocation strategy b, the computing resource allocation strategy f^o and the offloading decision λ, wherein the user preference list stores the correspondence of the mobile node to the computing nodes to be connected and the base station preference list stores the correspondence of the computing node to the mobile nodes to be connected. When the mobile node establishes a communication connection, it selects a computing node from the user preference list and sends it a task offloading request; after the computing node receives the task offloading request, when its number of connections has reached the upper limit, if the mobile node matches the base station preference list, a mobile node that is not in the list is disconnected and a communication connection is established with the requesting mobile node, so that the computation task of the mobile node is offloaded, in whole or in part, to the corresponding computing node for execution. On the premise of meeting the delay requirement of each task in the edge computing network, the technical solution jointly optimizes the user association, the offloading decision, the bandwidth resource allocation strategy and the computing resource allocation strategy; for differentiated computing tasks, the mobile node can be matched to the corresponding computing node for execution, which both guarantees the quality of service of the mobile node and maximizes the potential revenue of the operator in the edge computing network.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
Other features, objects and advantages of the present disclosure will become more apparent from the following detailed description of non-limiting embodiments taken in conjunction with the accompanying drawings. In the drawings:
Fig. 1 shows a scenario diagram of a user association and edge computing offloading method for improving operator utility according to an embodiment of the disclosure;
Fig. 2 shows a flow diagram of a user association and edge computing offloading method for improving operator utility according to an embodiment of the disclosure;
Fig. 3 shows a flow diagram of determining a bandwidth resource allocation strategy b, a computing resource allocation strategy f^o and an offloading decision λ based on the matching pair set, the bandwidth resources of the computing nodes and the computing resources, according to an embodiment of the disclosure;
Fig. 4 shows a flow diagram of constructing a user preference list and a base station preference list respectively based on the bandwidth resource allocation strategy b, the computing resource allocation strategy f^o and the offloading decision λ, according to an embodiment of the disclosure;
Fig. 5 shows a complete flow diagram of a user association and edge computing offloading method for improving operator utility according to an embodiment of the disclosure;
Fig. 6 shows a schematic diagram of the number of iterations versus the operator utility for the computation offloading method of the present disclosure;
Fig. 7 and Fig. 8 show operator utility comparisons between the computation offloading method of the present disclosure and two existing algorithms, respectively.
Detailed Description
Hereinafter, exemplary embodiments of the present disclosure will be described in detail with reference to the accompanying drawings so that those skilled in the art can easily implement them. Also, for the sake of clarity, parts not relevant to the description of the exemplary embodiments are omitted in the drawings.
In the present disclosure, it is to be understood that terms such as "including" or "having," etc., are intended to indicate the presence of the disclosed features, numbers, steps, behaviors, components, parts, or combinations thereof, and are not intended to preclude the possibility that one or more other features, numbers, steps, behaviors, components, parts, or combinations thereof may be present or added.
It should be further noted that the embodiments and features of the embodiments in the present disclosure may be combined with each other without conflict. The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
First, terms related to the present disclosure are explained as follows.
Mobile node: a terminal device that has computation tasks to process, such as a piece of User Equipment (UE).
Computing node: a server device that provides computing capability, such as a Mobile Edge Computing (MEC) server.
MEC: uses the radio access network to provide, close to the UE, the services and cloud computing capability the UE requires, thereby creating a service environment with high performance, low delay and high bandwidth.
Computation offloading: distributing a computation task to the MEC server for processing and then retrieving the computed result from the MEC server. It generally comprises the following steps: 1) MEC computing node discovery, 2) computation task partitioning, 3) offloading decision, 4) transmission of the partitioned tasks, 5) computation at the MEC computing node, and 6) feedback of the computation result.
Offloading decision: the UE decides whether to offload the computation task and how large a fraction to offload. Offloading decisions typically include: 1) local computing: the computation task is completed entirely locally at the UE; 2) full offloading: the computation task is offloaded entirely to the MEC server for processing; 3) partial offloading: after the computation task is partitioned, part of it is processed locally and the rest is offloaded to the MEC server for processing.
Delay (i.e., time delay): when no computation offloading is performed, the delay is the time the UE spends executing the local computation; when computation offloading is performed, the delay is the sum of the time to transmit the offloaded data to the MEC server, the processing time at the MEC server, and the time to transmit the result back from the MEC server.
Energy consumption: when no computation offloading is performed, the energy consumption is the energy the UE consumes executing the local computation; when computation offloading is performed, it is the sum of the transmission energy for offloading data to the MEC server and the energy for receiving the result transmitted by the MEC server.
Computing resource allocation: computing resources are allocated according to whether a computing task can be partitioned for parallel computation; a task that cannot be partitioned is assigned to a single computing node, while a task that can be partitioned is distributed to run on multiple computing nodes.
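The following Python sketch makes the delay and energy bookkeeping above concrete. It is a minimal illustration assuming the common partial-offloading model in which the non-offloaded fraction is computed locally while the offloaded fraction is transmitted and processed at the MEC server, the two parts running in parallel; the function and parameter names are illustrative assumptions and are not taken from the patent, and the result-return time is omitted for brevity.

```python
def offload_delay_energy(lam, data_bits, cycles_per_bit,
                         f_local, rate, f_mec, p_tx):
    """Delay and UE-side energy when a fraction `lam` of a task is offloaded.

    lam            : offloading ratio in [0, 1]
    data_bits      : task data amount D_n in bits
    cycles_per_bit : CPU cycles needed per bit, C_n
    f_local        : local CPU capability f^l_n (cycles/s)
    rate           : uplink transmission rate R_{n,m} (bits/s)
    f_mec          : MEC CPU cycles allocated to this task, f^o_{n,m} (cycles/s)
    p_tx           : UE uplink transmission power p_n (W)
    """
    # Local part: the (1 - lam) fraction of the task is computed on the device.
    t_local = (1.0 - lam) * data_bits * cycles_per_bit / f_local
    # Offloaded part: uplink transmission plus processing at the MEC server
    # (the result-return time in the definition above is omitted, since
    #  computation results are typically small).
    t_tx = lam * data_bits / rate
    t_mec = lam * data_bits * cycles_per_bit / f_mec
    # Assume the local and offloaded parts run in parallel.
    delay = max(t_local, t_tx + t_mec)
    # UE-side energy: transmission energy for the offloaded data
    # (local computation energy and result-reception energy are omitted).
    energy = p_tx * t_tx
    return delay, energy


# Example: offload 60% of a 1 Mbit task that needs 1000 cycles per bit.
print(offload_delay_energy(lam=0.6, data_bits=1e6, cycles_per_bit=1000,
                           f_local=1e9, rate=5e6, f_mec=5e9, p_tx=0.2))
```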
Fig. 1 shows a scenario diagram of a user association and edge computing offloading method for improving operator utility according to an embodiment of the disclosure.
As shown in Fig. 1, mobile node a, mobile node b and mobile node c are located within the coverage area of base station A. After establishing communication connections with base station A, they can offload all or part of their computation tasks to the MEC server attached to base station A for computation and receive the computation results from the MEC server. Similarly, Fig. 1 also shows that mobile nodes a and b are located within the coverage area of base station B and can therefore also establish communication connections with base station B, and that mobile node a is also located within the coverage area of base station C and can therefore also establish a communication connection with base station C. The maximum number of mobile nodes that base stations A, B and C can each connect is Q, the bandwidth shared by base stations A, B and C is B, and it is assumed that all mobile nodes have completed their uplink transmission power allocation.
In the present disclosure, mobile node a may connect to different base stations depending on the type of its computation task and offload the task to the corresponding MEC server for computation. For a base station, the number of mobile nodes it can connect is limited, so it needs to select and connect the appropriate mobile nodes according to the task types. In this application scenario, on the premise that the offloading decision minimizes the sum of UE energy consumption and delay, the bandwidth resource allocation strategy and the computing resource allocation strategy are jointly optimized with the objective of maximizing operator revenue; for differentiated computing tasks, a mobile node can be matched to the corresponding computing node for execution, which both guarantees the quality of service of the mobile node and maximizes the potential revenue of the operator in the edge computing network.
Fig. 2 shows a flow diagram of a user association and edge computing offloading method for improving operator utility according to an embodiment of the disclosure.
As shown in Fig. 2, the user association and edge computing offloading method for improving operator utility comprises the following steps S101-S104.
In step S101, a set Φ of matching pairs formed by the mobile node and a plurality of surrounding computing nodes is acquired.
In the present disclosure, the mobile nodes are numbered first, then the base stations whose coverage areas contain each mobile node are determined, then the computing nodes are numbered, and the mobile nodes and the computing nodes are paired to form the matching pair set Φ.
In step S102, a bandwidth resource allocation strategy b, a computing resource allocation strategy f^o and an offloading decision λ are determined based on the matching pair set, the bandwidth resources of the computing nodes and the computing resources.
In the present disclosure, as shown in Fig. 3, determining the offloading decision λ based on the matching pair set, the bandwidth resources of the computing nodes and the computing resources comprises the following steps S1021-S1023:
S1021: constructing an operator revenue model based on the matching pair set, the bandwidth resources of the computing nodes and the computing resources;
S1022: determining the offloading decision constraints satisfied by the operator revenue model;
S1023: subject to the offloading decision constraints, solving the operator revenue model for its optimal solution with the objective of maximizing operator revenue, and determining the offloading decision λ from the optimal solution.
In the present disclosure, in step S1021, the operator revenue model is expressed as:

U = R − C,
R = Σ_{m∈ℳ} Σ_{n∈𝒩_m} μ·W_n·λ_{n,m}·D_n,
C = Σ_{m∈ℳ} Σ_{n∈𝒩_m} (ν_1·b_{n,m} + ν_2·f^o_{n,m} + ν_3·λ_{n,m}·D_n·C_n·P_mec).

In step S1022, the offloading decision constraints are expressed as:

(1) t^l_n ≤ T^max_n, for all n ∈ 𝒩
(2) t^o_{n,m} ≤ T^max_n, for all m ∈ ℳ, n ∈ 𝒩_m
(3) 0 ≤ λ_{n,m} ≤ 1

wherein ℳ denotes the set of base stations with edge computing capability and M their number; 𝒩 denotes the set of user equipments in the network that need task offloading and N their number; 𝒩_m denotes the set of user equipments associated with base station m and N_m their number; R denotes the total revenue of the operator; μ denotes the price charged by the operator per bit of data; λ_{n,m} denotes the offloading ratio of user equipment n at base station m; W_n denotes the value weight of user equipment n, which can be customized according to the operator's classification of different tasks; D_n denotes the data amount (in bits) of the task to be processed by user equipment n; C denotes the cost of the operator, including the bandwidth resource cost, the computing resource cost and the energy resource cost; ν_1, ν_2 and ν_3 denote the weights of the three costs, respectively; b_{n,m} and f^o_{n,m} denote the bandwidth resources and the computing resources allocated to user equipment n at base station m, respectively; C_n denotes the number of CPU cycles required to process one bit of the task of user equipment n (cycles/bit); and P_mec denotes the power of the mobile edge computing server (in W per CPU cycle).
Constraints (1) and (2) are the delay quality-of-service constraints of the mobile node, and constraint (3) states that the offloading ratio lies between 0 and 1; t^l_n denotes the local computing delay of user equipment n and f^l_n denotes the local computing capability of user equipment n; t^o_{n,m} denotes the offloading delay when user equipment n offloads a task to base station m; R_{n,m} denotes the uplink transmission rate at which user equipment n transmits the task to base station m, R_{n,m} = b_{n,m}·log2(1 + p_n·h_{n,m}/(σ² + I_{n,m})), where p_n denotes the uplink transmission power of user equipment n, h_{n,m} denotes the channel gain from user equipment n to base station m, σ² denotes the noise power and I_{n,m} denotes the interference; and T^max_n denotes the maximum tolerable delay of user equipment n.
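As a concrete reading of the revenue model above, the following Python sketch evaluates the operator utility U for given allocations. The array names and the dictionary-of-sets representation of the user association are illustrative assumptions, not part of the patent.

```python
import numpy as np

def uplink_rate(b, p, h, sigma2, interference):
    """R_{n,m} = b_{n,m} * log2(1 + p_n * h_{n,m} / (sigma^2 + I_{n,m}))."""
    return b * np.log2(1.0 + p * h / (sigma2 + interference))

def operator_utility(assoc, lam, b, f_o, D, C, W,
                     mu, nu1, nu2, nu3, P_mec):
    """Total operator utility U = revenue - cost over all associated pairs.

    assoc       : dict {m: set of user indices n associated with base station m}
    lam, b, f_o : 2-D arrays indexed [n, m] (offload ratio, bandwidth, CPU)
    D, C, W     : per-user data size (bits), cycles/bit, value weight
    """
    U = 0.0
    for m, users in assoc.items():
        for n in users:
            revenue = mu * W[n] * lam[n, m] * D[n]
            cost = (nu1 * b[n, m] + nu2 * f_o[n, m]
                    + nu3 * lam[n, m] * D[n] * C[n] * P_mec)
            U += revenue - cost
    return U
```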
In step S1023, solving the operator revenue model for its optimal solution can be regarded as solving a linear programming problem, which can be solved by the dual simplex method; the details follow the prior art and are not repeated in this disclosure.
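With b and f^o fixed, the utility is linear in λ, so the λ-subproblem can be handed to an off-the-shelf LP solver. The Python sketch below does this for a single user-base-station pair with scipy.optimize.linprog; treating the local delay as (1 − λ)·D·C/f^l and the offload delay as λ·(D/R + D·C/f^o) is an assumption about the model's exact structure, and the "highs-ds" (dual simplex) method choice merely mirrors the solver named in the text.

```python
from scipy.optimize import linprog

def solve_offload_ratio(mu, W, D, C, P_mec, nu3,
                        R, f_o, f_local, T_max):
    """Best offload ratio for one (user, base station) pair, b and f^o fixed.

    Assumes local delay (1 - lam) * D * C / f_local and offload delay
    lam * (D / R + D * C / f_o), both required to stay below T_max.
    """
    # linprog minimizes, so negate the profit coefficient of lam.
    c = [-(mu * W * D - nu3 * D * C * P_mec)]
    A_ub = [[-D * C / f_local],          # local-delay constraint
            [D / R + D * C / f_o]]       # offload-delay constraint
    b_ub = [T_max - D * C / f_local,
            T_max]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0.0, 1.0)],
                  method="highs-ds")     # dual simplex, as named in the text
    return res.x[0] if res.success else None

# Example with illustrative numbers: the feasible offload ratio is pushed as
# high as the offload-delay constraint allows, since offloading is profitable.
print(solve_offload_ratio(mu=1e-6, W=1.0, D=1e6, C=1000, P_mec=1e-9,
                          nu3=0.1, R=5e6, f_o=2e9, f_local=1e9, T_max=0.5))
```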
In the present disclosure, with continued reference to Fig. 3, determining the bandwidth resource allocation strategy b and the computing resource allocation strategy f^o based on the matching pair set, the bandwidth resources of the computing nodes and the computing resources comprises the following steps S1024-S1025:
S1024: determining the constraints of the bandwidth resource allocation decision and the computing resource allocation decision satisfied by the operator revenue model;
S1025: subject to the constraints of the bandwidth resource allocation decision and the computing resource allocation decision, solving the operator revenue model for its optimal solution with the objective of maximizing operator revenue, and determining the bandwidth resource allocation strategy b and the computing resource allocation strategy f^o from the optimal solution.
In the present disclosure, the operator revenue model is the same as in step S1021:

U = R − C, with R = Σ_{m∈ℳ} Σ_{n∈𝒩_m} μ·W_n·λ_{n,m}·D_n and C = Σ_{m∈ℳ} Σ_{n∈𝒩_m} (ν_1·b_{n,m} + ν_2·f^o_{n,m} + ν_3·λ_{n,m}·D_n·C_n·P_mec).

In step S1024, the constraints of the bandwidth resource allocation decision and the computing resource allocation decision are expressed as:

(1) t^l_n ≤ T^max_n, for all n ∈ 𝒩
(2) t^o_{n,m} ≤ T^max_n, for all m ∈ ℳ, n ∈ 𝒩_m
(3) Σ_{m∈ℳ} Σ_{n∈𝒩_m} b_{n,m} ≤ B
(4) Σ_{m∈ℳ} Σ_{n∈𝒩_m} f^o_{n,m} ≤ F_mec
(5) b_{n,m} ≥ 0 and f^o_{n,m} ≥ 0

wherein B denotes the total bandwidth of the M base stations and F_mec denotes the total computing resources of the M base stations.
In step S1025, solving the operator revenue model for its optimal solution can be regarded as solving a geometric programming problem, which can be equivalently converted into a convex optimization problem and solved by an interior-point method; the details follow the prior art and are not repeated in this disclosure.
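For fixed λ, the (b, f^o) subproblem can be handed to a generic constrained solver. The sketch below uses scipy.optimize.minimize with the interior-point-style trust-constr method on a toy two-user, one-base-station instance; the constraint set (per-user offload-delay limits plus shared bandwidth and CPU budgets) follows the constraints reconstructed above and is an assumption about the exact model, and all numbers and names are illustrative.

```python
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint, LinearConstraint

# Toy instance: two users offloading to one base station, lam fixed.
lam   = np.array([0.6, 0.8])
D     = np.array([1e6, 2e6])        # task size (bits)
C     = np.array([1000.0, 1500.0])  # CPU cycles per bit
p     = np.array([0.2, 0.2])        # uplink power (W)
h     = np.array([1e-6, 2e-6])      # channel gain
sigma2, I = 1e-9, 0.0               # noise power, interference
T_max = np.array([0.5, 0.8])        # delay limits (s)
nu1, nu2 = 1e-6, 1e-9               # bandwidth / CPU cost weights
B_total, F_total = 10e6, 10e9       # shared bandwidth and CPU budgets

def split(x):
    return x[:2], x[2:]             # x = [b_1, b_2, f_1, f_2]

def cost(x):
    # For fixed lam the revenue term is constant, so maximizing utility
    # reduces to minimizing the bandwidth and computing resource cost.
    b, f = split(x)
    return nu1 * b.sum() + nu2 * f.sum()

def offload_delay(x):
    b, f = split(x)
    rate = b * np.log2(1.0 + p * h / (sigma2 + I))
    return lam * D / rate + lam * D * C / f

x0 = np.array([B_total / 2, B_total / 2, F_total / 2, F_total / 2])
res = minimize(
    cost, x0, method="trust-constr",
    bounds=[(1e3, None)] * 4,
    constraints=[
        # each offloaded task must finish within its delay limit
        NonlinearConstraint(offload_delay, -np.inf, T_max),
        # shared bandwidth and CPU budgets across the users
        LinearConstraint(np.array([[1, 1, 0, 0], [0, 0, 1, 1]]),
                         -np.inf, [B_total, F_total]),
    ])
b_opt, f_opt = split(res.x)
print(b_opt, f_opt)
```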
In step S103, a user preference list and a base station preference list are respectively constructed based on the bandwidth resource allocation strategy b, the computing resource allocation strategy f^o and the offloading decision λ, wherein the user preference list stores the correspondence of the mobile node to the computing nodes to be connected, and the base station preference list stores the correspondence of the computing node to the mobile nodes to be connected.
In the present disclosure, as shown in Fig. 4, constructing the user preference list based on the bandwidth resource allocation strategy b, the computing resource allocation strategy f^o and the offloading decision λ comprises the following steps S1031-S1032:
S1031: calculating the preference value of user equipment n for base station m based on the bandwidth resource allocation strategy b, the computing resource allocation strategy f^o and the offloading decision λ;
S1032: arranging the base stations m in ascending order of the preference values to construct the user preference list.
In the present disclosure, the preference value of user equipment n for base station m calculated in step S1031 from the bandwidth resource allocation strategy b, the computing resource allocation strategy f^o and the offloading decision λ is denoted φ_N(n, m).
In step S1032, the user preference list is sorted in ascending order of preference values; specifically, the preference list L_N(n) = [1, …, m, …, M] corresponds to preference values ordered as φ_N(n, 1) ≤ … ≤ φ_N(n, m) ≤ … ≤ φ_N(n, M).
In the present disclosure, with continued reference to Fig. 4, constructing the base station preference list based on the bandwidth resource allocation strategy b, the computing resource allocation strategy f^o and the offloading decision λ comprises the following steps S1033-S1034:
S1033: calculating the preference value of base station m for user equipment n based on the bandwidth resource allocation strategy b, the computing resource allocation strategy f^o and the offloading decision λ;
S1034: arranging the user equipments n in descending order of the preference values to construct the base station preference list.
In the present disclosure, the preference value of base station m for user equipment n calculated in step S1033 from the bandwidth resource allocation strategy b, the computing resource allocation strategy f^o and the offloading decision λ is denoted φ_M(m, n).
In step S1034, the base station preference list is sorted in descending order of preference values; specifically, the preference list L_M(m) = [1, …, n, …, N] corresponds to preference values ordered as φ_M(m, 1) ≥ … ≥ φ_M(m, n) ≥ … ≥ φ_M(m, N).
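A minimal Python sketch of the list construction, assuming the preference functions φ_N and φ_M are supplied as callables computed from b, f^o and λ; their closed forms are not reproduced here:

```python
def build_preference_lists(users, stations, phi_N, phi_M):
    """Build per-user and per-station preference lists.

    phi_N(n, m): preference value of user n for station m (lower is better,
                 so user lists are sorted in ascending order).
    phi_M(m, n): preference value of station m for user n (higher is better,
                 so station lists are sorted in descending order).
    """
    user_pref = {n: sorted(stations, key=lambda m: phi_N(n, m))
                 for n in users}
    station_pref = {m: sorted(users, key=lambda n: phi_M(m, n), reverse=True)
                    for m in stations}
    return user_pref, station_pref
```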
In step S104, when the mobile node establishes a communication connection, it selects a computing node from the user preference list and sends it a task offloading request; after the computing node receives the task offloading request, when its number of connections has reached the upper limit, if the mobile node matches the base station preference list, a mobile node that is not in the list is disconnected and a communication connection is established with the requesting mobile node.
In the present disclosure, the mobile node may send task offloading requests to the computing nodes in turn, following the ascending order of base stations in its user preference list. The computing node may admit the corresponding mobile nodes according to the descending order of user equipments in its base station preference list, and directly establishes a communication connection when the number of connections of the base station has not reached the upper limit Q. When the number of connections of the base station has reached the upper limit Q, the base station disconnects user equipments that are not in the list, or disconnects the lowest-ranked user equipments according to the order of the base station preference list, so as to admit new user equipment and thereby increase the operator's revenue.
According to an embodiment of the present disclosure, the method further comprises: if the base station preference list does not match the mobile node, updating the set of matching pairs formed by the mobile node and the plurality of surrounding computing nodes, and repeating the steps of constructing the user preference list and the base station preference list so as to establish a communication connection with the mobile node.
In the present disclosure, for a mobile node that cannot be matched to any computing node, the matching pair set formed by the mobile node and its surrounding computing nodes is updated first, then steps S102-S103 are executed to update the user preference list and the base station preference list; finally, the mobile node sends task offloading requests to the computing nodes according to the updated user preference list, and the computing node establishes a communication connection with the mobile node according to the updated base station preference list, so that the previously unmatched mobile node is eventually matched to a suitable computing node and offloads its computation task, which increases the operator's revenue.
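The admission step resembles one round of a quota-constrained deferred-acceptance matching. The Python sketch below is an illustrative interpretation under that assumption; the function and variable names are not from the patent, and it assumes every user appears in every station's preference list.

```python
def match_round(user_pref, station_pref, quota):
    """One deferred-acceptance style pass: users propose in the order of
    their preference lists, stations keep their `quota` most-preferred
    proposers and reject the rest."""
    matched = {}                            # user -> station
    accepted = {m: [] for m in station_pref}
    next_choice = {n: 0 for n in user_pref}
    unmatched = set(user_pref)
    while unmatched:
        n = unmatched.pop()
        if next_choice[n] >= len(user_pref[n]):
            continue                        # n has exhausted its list
        m = user_pref[n][next_choice[n]]
        next_choice[n] += 1
        accepted[m].append(n)
        # Keep only the quota best users according to the station's ranking.
        accepted[m].sort(key=station_pref[m].index)
        while len(accepted[m]) > quota:
            rejected = accepted[m].pop()    # worst-ranked proposer
            matched.pop(rejected, None)
            unmatched.add(rejected)
        if n in accepted[m]:
            matched[n] = m
        else:
            unmatched.add(n)
    return matched

# Usage with the lists built above:
# matched = match_round(user_pref, station_pref, quota=Q)
```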
Fig. 5 shows a complete flow diagram of the user association and edge computing offloading method for improving operator utility according to an embodiment of the disclosure.
As shown in Fig. 5, when executed, the method first initializes the parameters b, f^o, λ and Φ: the user equipments may start from an equal division of the bandwidth resources and the computing resources as the initial values of b and f^o; each user equipment may take a random value in the interval (0, 1) as the initial value of its offloading decision; and each user equipment selects the nearest base station for association as the initial value of the matching pair set Φ. While initializing the parameters, two iteration counters t1 and t2 are set, and the maximum number of iterations is set to T.
Secondly, the task offloading ratio of each user equipment is calculated according to the operator revenue model established in step S1021, and λ is updated.
Then, according to the updated λ, the resource allocation strategy of each user equipment is calculated, the bandwidth resource strategy b and the computing resource strategy f^o are updated, and the iteration counter t1 is incremented by 1.
Next, it is determined whether the parameters λ, b and f^o have converged or the number of iterations has reached T. If not, λ, b and f^o continue to be updated repeatedly; if so, the preference function values of the user equipments and the base stations are calculated respectively to obtain the user preference list and the base station preference list.
An unmatched user equipment is then selected and sends a task offloading request to a base station based on its user preference list. The base station determines whether its number of connections has reached the upper limit; if not, it accepts the task offloading request; if so, it screens all user equipments requesting to establish a connection according to the base station preference list, and the rejected user equipments update their user preference lists.
It is then judged whether all user equipments have been matched. If so, the matching ends, Φ is updated, and the iteration counter t2 is incremented by 1; if not, the process returns to selecting an unmatched user equipment and sending a task offloading request to a base station based on the user preference list, and the subsequent steps continue after all user equipments have been matched.
Finally, it is judged whether Φ has converged or the number of iterations has reached T. If so, the whole process ends; otherwise, the process returns to calculating the task offloading ratio of each user equipment and updating λ, and the iteration counters t1 and t2 are updated repeatedly until the whole process ends.
Fig. 6 shows the relationship between the number of iterations and the operator utility for the computation offloading method of the present disclosure. As can be seen from the overall curve of Fig. 6, the computation offloading method of the present disclosure converges after about 5 iterations.
Fig. 7 and Fig. 8 show operator utility comparisons between the computation offloading method of the present disclosure and two existing algorithms, respectively. In Fig. 7 and Fig. 8, comparison algorithm 1 performs no user association decision optimization and uses a fixed pre-assignment of base stations by distance, while comparison algorithm 2 performs no resource allocation optimization and divides all bandwidth resources and computing resources equally among all users. As can be seen from the simulation results of Fig. 7, the operator utility achieved by the computation offloading method of the present disclosure is significantly better than that of comparison algorithms 1 and 2, and the advantage becomes more significant as the number of cells increases.
As can be seen from Fig. 8, when the computing resources of the server are less than 1.5 × 10^9, the operator utility obtained by both the computation offloading method of the present disclosure and the existing algorithms increases as the computing resources increase. When the computing resources of the server continue to increase, the operator utility obtained by the computation offloading method of the present disclosure and by comparison algorithm 1 changes little, because once the computing resources grow beyond a certain point they reduce the processing delay at the edge server but cannot offset the transmission delay, so the amount of offloaded data no longer increases and the operator utility levels off. Comparison algorithm 2 uses an equal resource allocation, which wastes a large amount of computing resources, increases the operator's cost and causes the utility to decrease. Fig. 7 and Fig. 8 verify the importance of jointly optimizing resource allocation and user association for improving operator utility in edge computing wireless networks.
The above description covers only the preferred embodiments of the present disclosure and the technical principles employed. It will be appreciated by those skilled in the art that the scope of the invention referred to in the present disclosure is not limited to technical solutions formed by the specific combination of the above features, and also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the inventive concept, for example technical solutions in which the above features are replaced by features with similar functions disclosed in (but not limited to) this disclosure.

Claims (10)

1. A user association and edge computing offloading method for improving operator utility, characterized by comprising the following steps:
acquiring a set of matching pairs formed by a mobile node and a plurality of surrounding computing nodes;
determining a bandwidth resource allocation strategy b, a computing resource allocation strategy f^o and an offloading decision λ based on the matching pair set and the bandwidth resources and computing resources of the computing nodes;
constructing a user preference list and a base station preference list respectively based on the bandwidth resource allocation strategy b, the computing resource allocation strategy f^o and the offloading decision λ, wherein the user preference list stores the correspondence of the mobile node to the computing nodes to be connected, and the base station preference list stores the correspondence of the computing node to the mobile nodes to be connected;
and when the mobile node establishes a communication connection, selecting a computing node from the user preference list and sending it a task offloading request; after the computing node receives the task offloading request, if its number of connections has reached the upper limit and the mobile node matches the base station preference list, disconnecting a mobile node that is not in the list and establishing a communication connection with the requesting mobile node.
2. The method of claim 1, further comprising: if the base station preference list does not match the mobile node, updating the set of matching pairs formed by the mobile node and the plurality of surrounding computing nodes, and repeating the steps of constructing the user preference list and the base station preference list so as to establish a communication connection with the mobile node.
3. The method of claim 1, wherein determining the offloading decision λ based on the matching pair set, the bandwidth resources of the computing nodes and the computing resources comprises:
constructing an operator revenue model based on the matching pair set, the bandwidth resources of the computing nodes and the computing resources;
determining the offloading decision constraints satisfied by the operator revenue model;
and, subject to the offloading decision constraints, solving the operator revenue model for its optimal solution with the objective of maximizing operator revenue, and determining the offloading decision λ from the optimal solution.
4. The method of claim 3, wherein the operator revenue model is expressed as:

U = R − C,
R = Σ_{m∈ℳ} Σ_{n∈𝒩_m} μ·W_n·λ_{n,m}·D_n,
C = Σ_{m∈ℳ} Σ_{n∈𝒩_m} (ν_1·b_{n,m} + ν_2·f^o_{n,m} + ν_3·λ_{n,m}·D_n·C_n·P_mec),

and the offloading decision constraints are expressed as:

(1) t^l_n ≤ T^max_n, for all n ∈ 𝒩
(2) t^o_{n,m} ≤ T^max_n, for all m ∈ ℳ, n ∈ 𝒩_m
(3) 0 ≤ λ_{n,m} ≤ 1

wherein ℳ denotes the set of base stations with edge computing capability and M their number; 𝒩 denotes the set of user equipments in the network that need task offloading and N their number; 𝒩_m denotes the set of user equipments associated with base station m and N_m their number; R denotes the total revenue of the operator; μ denotes the price charged by the operator per bit of data; λ_{n,m} denotes the offloading ratio of user equipment n at base station m; W_n denotes the value weight of user equipment n, which can be customized according to the operator's classification of different tasks; D_n denotes the data amount of the task to be processed by user equipment n; C denotes the cost of the operator, including the bandwidth resource cost, the computing resource cost and the energy resource cost; ν_1, ν_2 and ν_3 denote the weights of the three costs, respectively; b_{n,m} and f^o_{n,m} denote the bandwidth resources and the computing resources allocated to user equipment n at base station m, respectively; C_n denotes the number of CPU cycles required to process one bit of the task of user equipment n; P_mec denotes the power of the mobile edge computing server; t^l_n denotes the local computing delay of user equipment n and f^l_n denotes the local computing capability of user equipment n; t^o_{n,m} denotes the offloading delay when user equipment n offloads a task to base station m; R_{n,m} denotes the uplink transmission rate at which user equipment n transmits the task to base station m, R_{n,m} = b_{n,m}·log2(1 + p_n·h_{n,m}/(σ² + I_{n,m})), where p_n denotes the uplink transmission power of user equipment n, h_{n,m} denotes the channel gain from user equipment n to base station m, σ² denotes the noise power and I_{n,m} denotes the interference; and T^max_n denotes the maximum tolerable delay of user equipment n.
5. The method of claim 3, wherein determining the bandwidth resource allocation strategy b and the computing resource allocation strategy f^o based on the matching pair set, the bandwidth resources of the computing nodes and the computing resources comprises:
determining the constraints of the bandwidth resource allocation decision and the computing resource allocation decision satisfied by the operator revenue model;
and, subject to the constraints of the bandwidth resource allocation decision and the computing resource allocation decision, solving the operator revenue model for its optimal solution with the objective of maximizing operator revenue, and determining the bandwidth resource allocation strategy b and the computing resource allocation strategy f^o from the optimal solution.
6. The method of claim 3, wherein the operator revenue model is expressed as:

U = R − C, with R = Σ_{m∈ℳ} Σ_{n∈𝒩_m} μ·W_n·λ_{n,m}·D_n and C = Σ_{m∈ℳ} Σ_{n∈𝒩_m} (ν_1·b_{n,m} + ν_2·f^o_{n,m} + ν_3·λ_{n,m}·D_n·C_n·P_mec),

and the constraints of the bandwidth resource allocation decision and the computing resource allocation decision are expressed as:

(1) t^l_n ≤ T^max_n, for all n ∈ 𝒩
(2) t^o_{n,m} ≤ T^max_n, for all m ∈ ℳ, n ∈ 𝒩_m
(3) Σ_{m∈ℳ} Σ_{n∈𝒩_m} b_{n,m} ≤ B
(4) Σ_{m∈ℳ} Σ_{n∈𝒩_m} f^o_{n,m} ≤ F_mec
(5) b_{n,m} ≥ 0 and f^o_{n,m} ≥ 0

wherein B denotes the total bandwidth of the M base stations and F_mec denotes the total computing resources of the M base stations.
7. The method of claim 6, wherein constructing the user preference list based on the bandwidth resource allocation strategy b, the computing resource allocation strategy f^o and the offloading decision λ comprises:
calculating a preference value of user equipment n for base station m based on the bandwidth resource allocation strategy b, the computing resource allocation strategy f^o and the offloading decision λ;
and arranging the base stations m in ascending order of the preference values to construct the user preference list.
8. The method of claim 7, wherein the preference value of user equipment n for base station m calculated based on the bandwidth resource allocation strategy b, the computing resource allocation strategy f^o and the offloading decision λ is denoted φ_N(n, m).
9. The method of claim 6, wherein constructing the base station preference list based on the bandwidth resource allocation strategy b, the computing resource allocation strategy f^o and the offloading decision λ comprises:
calculating a preference value of base station m for user equipment n based on the bandwidth resource allocation strategy b, the computing resource allocation strategy f^o and the offloading decision λ;
and arranging the user equipments n in descending order of the preference values to construct the base station preference list.
10. The method of claim 9, wherein the preference value of base station m for user equipment n calculated based on the bandwidth resource allocation strategy b, the computing resource allocation strategy f^o and the offloading decision λ is denoted φ_M(m, n).
CN202010019094.2A 2020-01-08 2020-01-08 User association and edge computing offloading method for improving operator utility Active CN111182570B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010019094.2A CN111182570B (en) 2020-01-08 2020-01-08 User association and edge computing offloading method for improving operator utility

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010019094.2A CN111182570B (en) 2020-01-08 2020-01-08 User association and edge computing offloading method for improving operator utility

Publications (2)

Publication Number Publication Date
CN111182570A true CN111182570A (en) 2020-05-19
CN111182570B CN111182570B (en) 2021-06-22

Family

ID=70652612

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010019094.2A Active CN111182570B (en) User association and edge computing offloading method for improving operator utility

Country Status (1)

Country Link
CN (1) CN111182570B (en)

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170079059A1 (en) * 2015-09-11 2017-03-16 Intel IP Corporation Slicing architecture for wireless communication
CN106534333A (en) * 2016-11-30 2017-03-22 北京邮电大学 Bidirectional selection computing unloading method based on MEC and MCC
WO2018223445A1 (en) * 2017-06-07 2018-12-13 Hong Kong Applied Science and Technology Research Institute Company Limited Method and system for jointly determining computational offloading and content prefetching in a cellular communication system
EP3457664A1 (en) * 2017-09-14 2019-03-20 Deutsche Telekom AG Method and system for finding a next edge cloud for a mobile user
CN107819840A (en) * 2017-10-31 2018-03-20 北京邮电大学 Distributed mobile edge calculations discharging method in the super-intensive network architecture
CN108600299A (en) * 2018-03-02 2018-09-28 中国科学院上海微系统与信息技术研究所 Calculating task discharging method and system between distributed multi-user
US20190342373A1 (en) * 2018-05-04 2019-11-07 Verizon Patent And Licensing Inc. Mobile edge computing
CN109814951A (en) * 2019-01-22 2019-05-28 南京邮电大学 The combined optimization method of task unloading and resource allocation in mobile edge calculations network
CN109819046A (en) * 2019-02-26 2019-05-28 重庆邮电大学 A kind of Internet of Things virtual computing resource dispatching method based on edge cooperation
CN109788069A (en) * 2019-02-27 2019-05-21 电子科技大学 Calculating discharging method based on mobile edge calculations in Internet of Things
CN110062026A (en) * 2019-03-15 2019-07-26 重庆邮电大学 Mobile edge calculations resources in network distribution and calculating unloading combined optimization scheme
CN110505644A (en) * 2019-09-26 2019-11-26 江南大学 User task unloading and resource allocation joint optimization method under 5G super-intensive heterogeneous network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
CHUNXIA SU et al.: "Computation Offload with Online Matching Algorithm in Mobile Edge Computing Networks", 2019 IEEE 90th Vehicular Technology Conference (VTC2019-Fall) *
张海波 等 (ZHANG Haibo et al.): "基于移动边缘计算的V2X任务卸载方案" (V2X task offloading scheme based on mobile edge computing), 《电子与信息学报》 (Journal of Electronics & Information Technology) *

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111884829A (en) * 2020-06-19 2020-11-03 西安电子科技大学 Method for maximizing multi-unmanned aerial vehicle architecture income
CN112202847B (en) * 2020-09-14 2022-03-22 重庆邮电大学 Server resource allocation method based on mobile edge calculation
CN112202847A (en) * 2020-09-14 2021-01-08 重庆邮电大学 Server resource allocation method based on mobile edge calculation
CN112491957A (en) * 2020-10-27 2021-03-12 西安交通大学 Distributed computing unloading method and system under edge network environment
CN113015217A (en) * 2021-02-07 2021-06-22 重庆邮电大学 Edge cloud cooperation low-cost online multifunctional business computing unloading method
CN113206876A (en) * 2021-04-28 2021-08-03 南京航空航天大学 Service redeployment method for dual-label perception in mobile edge computing environment
CN113206876B (en) * 2021-04-28 2023-01-06 南京航空航天大学 Service redeployment method for dual-label perception in mobile edge computing environment
CN113453216A (en) * 2021-06-16 2021-09-28 中国联合网络通信集团有限公司 Method and device for determining user terminal equipment
CN113453216B (en) * 2021-06-16 2023-09-05 中国联合网络通信集团有限公司 Method and device for determining user terminal equipment
CN113377516A (en) * 2021-06-22 2021-09-10 华南理工大学 Centralized scheduling method and system for unloading vehicle tasks facing edge computing
CN114375010A (en) * 2021-06-28 2022-04-19 山东华科信息技术有限公司 Power distribution internet of things based on SDN and matching theory
CN114375011A (en) * 2021-06-28 2022-04-19 山东华科信息技术有限公司 Matching theory-based power distribution Internet of things task unloading method
CN114375011B (en) * 2021-06-28 2022-09-20 山东华科信息技术有限公司 Matching theory-based power distribution Internet of things task unloading method
CN113660303A (en) * 2021-07-02 2021-11-16 山东师范大学 Task unloading method and system based on end side network cloud cooperation
CN113660303B (en) * 2021-07-02 2024-03-22 山东师范大学 Task unloading method and system for end-edge network cloud cooperation
CN113608848A (en) * 2021-07-28 2021-11-05 西北大学 Cloud-edge cooperative edge computing task allocation method, system and storage medium
CN113608848B (en) * 2021-07-28 2024-02-27 西北大学 Cloud-edge cooperative edge computing task allocation method, system and storage medium
CN113961266A (en) * 2021-10-14 2022-01-21 湘潭大学 Task unloading method based on bilateral matching under edge cloud cooperation
CN113961266B (en) * 2021-10-14 2023-08-22 湘潭大学 Task unloading method based on bilateral matching under edge cloud cooperation

Also Published As

Publication number Publication date
CN111182570B (en) 2021-06-22

Similar Documents

Publication Publication Date Title
CN111182570B (en) User association and edge computing offloading method for improving operator utility
CN108920279B (en) Mobile edge computing task unloading method under multi-user scene
CN109684075B (en) Method for unloading computing tasks based on edge computing and cloud computing cooperation
CN109413724B (en) MEC-based task unloading and resource allocation scheme
CN109947545B (en) Task unloading and migration decision method based on user mobility
CN111586696B (en) Resource allocation and unloading decision method based on multi-agent architecture reinforcement learning
CN110087318B (en) Task unloading and resource allocation joint optimization method based on 5G mobile edge calculation
CN111586720B (en) Task unloading and resource allocation combined optimization method in multi-cell scene
CN110098969B (en) Fog computing task unloading method for Internet of things
CN111641973B (en) Load balancing method based on fog node cooperation in fog computing network
Asheralieva et al. Combining contract theory and Lyapunov optimization for content sharing with edge caching and device-to-device communications
CN111132191A (en) Method for unloading, caching and resource allocation of joint tasks of mobile edge computing server
Zhao et al. Task proactive caching based computation offloading and resource allocation in mobile-edge computing systems
CN111200831B (en) Cellular network computing unloading method fusing mobile edge computing
CN111836284B (en) Energy consumption optimization calculation and unloading method and system based on mobile edge calculation
Zhang et al. DMRA: A decentralized resource allocation scheme for multi-SP mobile edge computing
Krolikowski et al. A decomposition framework for optimal edge-cache leasing
CN113918240A (en) Task unloading method and device
Krolikowski et al. Optimal cache leasing from a mobile network operator to a content provider
CN113992677A (en) MEC calculation unloading method for delay and energy consumption joint optimization
CN115396953A (en) Calculation unloading method based on improved particle swarm optimization algorithm in mobile edge calculation
Sun et al. A joint learning and game-theoretic approach to multi-dimensional resource management in fog radio access networks
CN112004265B (en) Social network resource allocation method based on SRM algorithm
CN114615705B (en) Single-user resource allocation strategy method based on 5G network
CN111526526A (en) Task unloading method in mobile edge calculation based on service mashup

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant