CN112099932A - Optimal pricing method and system for soft-hard deadline task offloading in edge computing - Google Patents

Optimal pricing method and system for soft-hard deadline task offloading in edge computing

Info

Publication number
CN112099932A
CN112099932A (application CN202010975628.9A)
Authority
CN
China
Prior art keywords
tasks
deadline
edge
soft
server
Prior art date
Legal status
Pending
Application number
CN202010975628.9A
Other languages
Chinese (zh)
Inventor
米顿
吕运容
孙志宏
郭棉
段志宏
张清华
李伟明
傅树霞
Current Assignee
Guangdong University of Petrochemical Technology
Original Assignee
Guangdong University of Petrochemical Technology
Priority date
Filing date
Publication date
Application filed by Guangdong University of Petrochemical Technology filed Critical Guangdong University of Petrochemical Technology
Priority to CN202010975628.9A
Publication of CN112099932A
Legal status: Pending

Classifications

    • G06F 9/4881: Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G06F 9/485: Task life-cycle, e.g. stopping, restarting, resuming execution
    • G06F 9/5027: Allocation of resources to service a request, the resource being a machine, e.g. CPUs, servers, terminals
    • G06F 9/5072: Grid computing
    • G06Q 30/0283: Price estimation or determination

Abstract

The invention discloses an optimal pricing method and system for soft-hard deadline task offloading in edge computing. The method comprises the following steps: designing an edge-cloud computing system model and dividing the tasks in the system into high-priority tasks and low-priority tasks; establishing queue models for the edge server and the cloud server respectively, with the edge server modeled as an M/M/1 queue and the cloud server as an M/M/∞ queue; calculating the response times of the hard deadline tasks and the soft deadline tasks in the edge server and the cloud server; calculating the costs of the edge server and the cloud server respectively; calculating the revenues of the edge server and the cloud server respectively; modeling the system cost using Wardrop equilibrium; modeling the system revenue using Nash equilibrium; and evaluating the efficiency of the pricing strategy. The invention can find a pricing strategy that balances cost minimization and revenue maximization.

Description

Optimal pricing method and system for soft-hard deadline task offloading in edge computing
Technical Field
The invention relates to an optimal pricing method and system for soft-hard deadline task offloading in edge computing, and belongs to the technical field of edge computing.
Background
Edge computing originated in the media (content delivery) field; it refers to adopting, at the side close to the object or data source, an open platform that integrates networking, computing, storage, and core application capabilities so as to provide service at the nearest end. By extending computing and storage resources toward resource-constrained end users, edge computing greatly alleviates the transmission delay bottleneck between end users and remote cloud data centers. However, compared with resource-rich cloud servers, edge servers are severely limited in computing and storage resources. Task processing and the allocation of computing resources have therefore been challenging problems in edge computing systems.
In edge computing, delay-sensitive tasks can be classified according to their timeliness constraints into hard deadline tasks and soft deadline tasks. In general-purpose computing environments, many schedulers have been proposed; these schedulers are designed around the number of tasks that miss their deadlines under various time and resource constraints. Exploiting the similarity and timeliness characteristics of tasks, how to schedule them on limited computing resources is an interesting research topic in edge computing.
Reference 1 devises a scheduling strategy that selects the location best suited for hard deadline and soft deadline tasks. Its hard deadline tasks must execute within very strict deadlines in an embedded system, while the remaining computing resources of the edge nodes are used to execute soft deadline tasks, so those tasks may suffer delays beyond their deadlines. Although reference 1 lays a solid foundation for deadline-based task offloading, it involves little interaction between the edge and the cloud; furthermore, it does not take computation cost into account.
Reference 2 discusses decentralized allocation of tasks according to their requirements and the computational model of the network, with the main purpose of reducing system overhead. Reference 3 studies pricing models and resource allocation from the viewpoint of microeconomic theory, aiming to optimize the limited resources available to terminal-device tasks in edge computing within a limited budget.
To maximize the social benefit of the network, reference 4 proposes a platform, in a business model based on the sharing economy, that can manage network resources in real time. The platform maximizes profit without relying on any user decision about where tasks are placed and processed.
In summary, the existing literature mainly designs pricing models and optimal pricing mechanisms for specific network scenarios. These pricing models do not explicitly consider deadlines, in particular hard and soft deadline tasks in edge computing. The prior art therefore leaves the following technical problem to be solved: how to set prices for tasks with different deadlines so that the edge cloud maximizes its revenue by exploiting its own computing and transmission capabilities.
The references are as follows:
[1] N. Auluck, A. Azim, and K. Fizza, "Improving the schedulability of real-time tasks using fog computing," IEEE Transactions on Services Computing, pp. 1–14, Sept. 2019.
[2] Q. He, G. Cui, X. Zhang, F. Chen, S. Deng, H. Jin, Y. Li, and Y. Yang, "A game-theoretical approach for user allocation in edge computing environment," IEEE Trans. Parallel Distrib. Syst., vol. 31, no. 3, pp. 515–529, Mar. 2020.
[3] J. Liu, S. Guo, K. Liu, and L. Feng, "Resource provision and allocation based on microeconomic theory in mobile edge computing," IEEE Trans. Serv. Comput., pp. 1–14, June 2020.
[4] M. Siew, D. W. H. Cai, L. Li, and T. Q. S. Quek, "Dynamic pricing for resource-quota sharing in multi-access edge computing," IEEE Trans. Netw. Sci. Eng., pp. 1–13, June 2020.
Disclosure of the Invention
The invention aims to overcome the defects of the prior art and provides an optimal pricing method and system for soft-hard deadline task offloading in edge computing, so that an edge-cloud computing system can find a pricing strategy that balances cost minimization and revenue maximization.
The invention specifically adopts the following technical scheme: the optimal pricing method for soft-hard deadline task offloading in edge computing comprises the following steps:
step SS 1: designing an edge-cloud computing system model, and dividing tasks in the system into high-priority tasks and low-priority tasks;
step SS 2: establishing queue models for the edge server and the cloud server respectively, the edge server being modeled as an M/M/1 queue and the cloud server as an M/M/∞ queue; calculating the response times of the hard deadline tasks and the soft deadline tasks in the edge server and the cloud server;
step SS 3: respectively calculating the cost of the edge server and the cost of the cloud server;
step SS 4: respectively calculating the profits of the edge server and the cloud server;
step SS 5: modeling the system cost using Wardrop equilibrium;
step SS 6: modeling the system revenue using Nash equilibrium;
step SS 7: the efficiency of the pricing strategy is evaluated.
As a preferred embodiment, the division of the tasks in the system in step SS1 is based on the time deadlines of the tasks; high-priority tasks typically require a hard deadline and low-priority tasks require a soft deadline, so the tasks are also divided into hard deadline and soft deadline tasks; let λ be the number of arriving tasks:
λ = λ_H + λ_S
where λ_H is the number of hard deadline tasks and λ_S is the number of soft deadline tasks.
As a preferred embodiment, step SS2 specifically includes:
the number of hard deadline tasks consists of the number of hard deadline tasks in the edge server, λ_H^E, and the number of hard deadline tasks in the cloud server, λ_H^C; similarly, the number of soft deadline tasks consists of the number of soft deadline tasks in the edge server, λ_S^E, and the number of soft deadline tasks in the cloud server, λ_S^C; then:
λ_H = λ_H^E + λ_H^C,  λ_S = λ_S^E + λ_S^C
the response times of the hard deadline task on the edge server and the cloud server are denoted T_H^E and T_H^C, respectively, and the response times of the soft deadline task on the edge server and the cloud server are denoted T_S^E and T_S^C, respectively (their closed-form expressions are given as formula images in the original filing), where μ_E and μ_C are the service rates, i.e., the number of tasks completed by the edge server and the cloud server per unit time.
As a preferred embodiment, step SS3 specifically includes:
the cost of the hard deadline task in the edge server is denoted C_H^E and the cost of the soft deadline task in the edge server is denoted C_S^E (their expressions are given as formula images in the original filing), where α is a parameter used to balance the relationship between delay and cost, i.e., the longer the task processing time, the higher the total cost, and P_E is the cost of each task on the edge server;
the costs of the hard deadline task and the soft deadline task in the cloud server are denoted C_H^C and C_S^C, respectively, where d_t is the average transmission delay for transferring a single task from the edge server to the cloud server and P_C is the cost of each task on the cloud server.
As a preferred embodiment, step SS4 specifically includes:
the edge server revenue is denoted U_E and the cloud server revenue is denoted U_C (their expressions are given as formula images in the original filing), where λ_H^E is the number of hard deadline tasks in the edge server, λ_H^C is the number of hard deadline tasks in the cloud server, λ_S^E is the number of soft deadline tasks in the edge server, and λ_S^C is the number of soft deadline tasks in the cloud server.
As a preferred embodiment, step SS5 specifically includes:
the total costs of the soft and hard deadline tasks in equilibrium are equal (the equilibrium condition is given as a formula image in the original filing); the proportion of the soft deadline tasks in the total tasks plays an important role in computing the total costs of the cloud server and the edge server, and its equilibrium expression (also given as a formula image in the original filing) involves φ, the offset of the deadline.
As a preferred embodiment, step SS6 specifically includes:
P_E* and P_C* are defined as the Nash prices of the edge server and the cloud server, respectively; P_E* is given by a closed-form expression and, in the same way, P_C* is given by a closed-form expression (both expressions and their auxiliary quantities are provided as formula images in the original filing), among which γ = 2(λ − 2μ_E)².
As a preferred embodiment, step SS7 specifically includes:
in order to evaluate the efficiency of the pricing strategy, the price of anarchy (PoA) is introduced; a social welfare function S is defined (its expression is given as a formula image in the original filing), where x and y are the hard deadline and soft deadline tasks, respectively;
minimizing the social welfare function S yields the minimum social welfare S_min, and T* is the total delay experienced in the network in the equilibrium state (both expressions are given as formula images in the original filing).
As a preferred embodiment, step SS7 further includes: the PoA is calculated from T* and S_min: PoA = T*/S_min.
The invention also provides an optimal pricing system for soft-hard deadline task offloading in edge computing, which comprises:
a task partitioning module to perform: designing an edge-cloud computing system model, and dividing tasks in the system into high-priority tasks and low-priority tasks;
a queue model generation module to perform: establishing queue models for the edge server and the cloud server respectively, the edge server being modeled as an M/M/1 queue and the cloud server as an M/M/∞ queue; calculating the response times of the hard deadline tasks and the soft deadline tasks in the edge server and the cloud server;
a server cost calculation module to perform: respectively calculating the cost of the edge server and the cost of the cloud server;
a server revenue calculation module to perform: respectively calculating the profits of the edge server and the cloud server;
a system cost modeling module to perform: modeling the system cost using Wardrop equilibrium;
a system revenue modeling module to perform: modeling the system revenue using Nash equilibrium;
an efficiency evaluation module to perform: the efficiency of the pricing strategy is evaluated.
The invention achieves the following beneficial effects: the invention provides an optimal pricing method for soft-hard deadline task offloading in edge computing. The method determines the equilibrium prices of the edge server and the cloud server by designing a pricing model: it first constructs an edge-cloud server system and divides the tasks into soft and hard deadline tasks, then establishes queue models for the edge server and the cloud server to determine the response times of the soft and hard deadline tasks, then designs cost and revenue functions for the edge server and the cloud server, and finally models the system cost and revenue with Wardrop equilibrium and Nash equilibrium, so that the system can find a pricing strategy that balances cost minimization and revenue maximization.
Drawings
FIG. 1 is a schematic diagram of an edge-cloud computing system model of the present invention;
FIG. 2 is a schematic diagram of the hard deadline task and the soft deadline task of the present invention;
fig. 3 is a schematic diagram of the soft and hard deadline tasks of the present invention being processed locally in the edge server or offloaded to the cloud server.
Detailed Description
The invention is further described below with reference to the accompanying drawings. The following examples are only for illustrating the technical solutions of the present invention more clearly, and the protection scope of the present invention is not limited thereby.
Example 1: as shown in fig. 1, fig. 2 and fig. 3, the optimal pricing method for soft-hard deadline task offloading in edge computing of the present invention comprises the following seven steps.
1) An edge-cloud computing system model is designed, as shown in fig. 1, and the tasks in the system are divided into high-priority and low-priority tasks.
The edge-cloud server system provided by the invention is assumed to be time-slotted, and its tasks can be divided into high-priority and low-priority tasks according to task priority. The tasks can also be divided into hard deadline tasks and soft deadline tasks according to their different timing requirements, where the high-priority tasks correspond to hard deadline tasks used to ensure correct operation of the system, and soft deadline tasks can tolerate longer delays than hard deadline tasks. As shown in fig. 2, a soft deadline task allows one additional offset in delay compared with a hard deadline task. Let λ be the number of tasks arriving at the system, which consists of the hard deadline task amount and the soft deadline task amount:
λ = λ_H + λ_S
where λ_H is the number of hard deadline tasks and λ_S is the number of soft deadline tasks.
2) Queue models are established for the edge server and the cloud server respectively, and the response times of the hard and soft deadline tasks in the edge server and the cloud server are calculated.
Without loss of generality, the invention models the edge server as an M/M/1 queue and the cloud server as an M/M/∞ queue. Define μ_E and μ_C as the service rates, i.e., the number of tasks completed by the edge server and the cloud server per unit time. The number of hard deadline tasks is composed of the number of hard deadline tasks in the edge server, λ_H^E, and the number of hard deadline tasks in the cloud server, λ_H^C; similarly, the number of soft deadline tasks is composed of the number of soft deadline tasks in the edge server, λ_S^E, and the number of soft deadline tasks in the cloud server, λ_S^C:
λ_H = λ_H^E + λ_H^C,  λ_S = λ_S^E + λ_S^C
The response times of the hard deadline task on the edge server and the cloud server are T_H^E and T_H^C, respectively, and the response times of the soft deadline task on the edge server and the cloud server are T_S^E and T_S^C, respectively; their closed-form expressions are given as formula images in the original filing.
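A minimal numerical sketch (in Python) of the step-2 queue models follows. Because the filing's closed-form response-time expressions are available only as figure images, the sketch assumes the textbook mean response times of an M/M/1 queue, 1/(μ_E − load), and of an M/M/∞ queue, 1/μ_C, plus the edge-to-cloud transmission delay d_t; all function names and numerical values are illustrative assumptions rather than the patented formulas.

# Sketch of step 2: queue models for the edge (M/M/1) and cloud (M/M/infinity) servers.
# Assumption: textbook mean response times stand in for the filing's formula images.

def edge_response_time(load_at_edge: float, mu_e: float) -> float:
    """Mean response time of the M/M/1 edge queue for the load kept at the edge."""
    if load_at_edge >= mu_e:
        raise ValueError("edge queue unstable: load must stay below the service rate mu_e")
    return 1.0 / (mu_e - load_at_edge)

def cloud_response_time(mu_c: float, d_t: float) -> float:
    """Mean response time of the M/M/infinity cloud queue plus the transfer delay d_t."""
    return 1.0 / mu_c + d_t

if __name__ == "__main__":
    lam_h_e, lam_s_e = 2.0, 1.0       # hard/soft deadline arrivals kept at the edge (assumed)
    mu_e, mu_c, d_t = 5.0, 8.0, 0.05  # service rates and transmission delay (assumed)
    t_edge = edge_response_time(lam_h_e + lam_s_e, mu_e)   # stands in for T_H^E and T_S^E
    t_cloud = cloud_response_time(mu_c, d_t)               # stands in for T_H^C and T_S^C
    print(f"edge response time  = {t_edge:.3f}")
    print(f"cloud response time = {t_cloud:.3f}")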
3) respectively calculating the cost of the edge server and the cloud server:
3.1 Soft and hard deadline task cost in edge servers
In order to better balance the relationship between delay and cost, the present invention introduces a parameter α, i.e. the longer the task processing time, the higher the total cost spent.
The total cost of the hard deadline task at the edge server is denoted C_H^E, where P_E is the cost required to process one task at the edge server; the total cost of the soft deadline task at the edge server is denoted C_S^E. Their expressions are given as formula images in the original filing.
3.2 Soft and hard deadline task cost in the cloud server
The total costs of the hard deadline task and the soft deadline task at the cloud server are denoted C_H^C and C_S^C, respectively (their expressions are given as formula images in the original filing), where d_t is the average transmission delay for transferring a single task from the edge server to the cloud server and P_C is the cost of each task on the cloud server.
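The following Python sketch illustrates step 3 under one plausible instantiation of the cost functions: the total cost of a task is taken to be its per-task price plus α times the delay it experiences. This functional form, and every name and value in the sketch, are assumptions standing in for the filing's formula images.

# Sketch of step 3: per-task costs at the edge and cloud servers.
# Assumed form: cost = per-task price + alpha * experienced delay.

def edge_task_cost(p_e: float, alpha: float, response_time: float) -> float:
    """Cost of a task processed at the edge server (hard or soft deadline)."""
    return p_e + alpha * response_time

def cloud_task_cost(p_c: float, alpha: float, response_time_incl_transfer: float) -> float:
    """Cost of a task offloaded to the cloud; the response time already includes d_t."""
    return p_c + alpha * response_time_incl_transfer

# Example with assumed prices, reusing t_edge and t_cloud from the step-2 sketch:
# c_e = edge_task_cost(p_e=0.5, alpha=0.5, response_time=t_edge)
# c_c = cloud_task_cost(p_c=0.9, alpha=0.5, response_time_incl_transfer=t_cloud)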
4) Revenue of edge server and cloud server:
The edge server revenue U_E and the cloud server revenue U_C are defined by formulas given as images in the original filing.
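A short sketch of step 4 follows, assuming that each server's revenue is its per-task price multiplied by the number of tasks it serves; this assumed form stands in for the revenue formulas U_E and U_C, which the filing gives only as figure images.

# Sketch of step 4: server revenues under the assumed form "price x tasks served".

def edge_revenue(p_e: float, lam_h_e: float, lam_s_e: float) -> float:
    """Revenue U_E of the edge server from the hard and soft deadline tasks it keeps."""
    return p_e * (lam_h_e + lam_s_e)

def cloud_revenue(p_c: float, lam_h_c: float, lam_s_c: float) -> float:
    """Revenue U_C of the cloud server from the tasks offloaded to it."""
    return p_c * (lam_h_c + lam_s_c)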
5) The overall cost of the system is modeled with Wardrop equilibrium:
the total cost of soft and hard deadline tasks in Wardrop balancing is equal-that is
Figure BDA0002685682770000097
The proportion of the soft deadline tasks in the total tasks plays an important role in the overall cost of computing the total cost of the cloud server and the edge server, and can therefore be written as follows:
Figure BDA0002685682770000101
where φ is the offset of the expiration date.
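The Wardrop condition of step 5 can be illustrated numerically: find the fraction of arrivals kept at the edge at which the per-task cost of the edge route equals that of the cloud route. The sketch below solves this by bisection using the assumed response-time and cost forms of the earlier sketches; the filing's closed-form split involving φ is not reproduced, so this is only an illustrative approximation.

# Sketch of step 5: numerically locate the Wardrop split, i.e. the fraction f of the
# total arrivals lam kept at the edge at which edge cost equals cloud cost.
# Uses the same assumed cost form as the step-3 sketch (price + alpha * delay).

def cost_gap(f: float, lam: float, mu_e: float, mu_c: float,
             p_e: float, p_c: float, alpha: float, d_t: float) -> float:
    edge_cost = p_e + alpha / (mu_e - f * lam)          # M/M/1 delay at the edge
    cloud_cost = p_c + alpha * (1.0 / mu_c + d_t)       # M/M/infinity delay plus transfer
    return edge_cost - cloud_cost

def wardrop_split(lam, mu_e, mu_c, p_e, p_c, alpha, d_t, iters=60):
    lo, hi = 0.0, min(1.0, 0.999 * mu_e / lam)          # keep the edge queue stable
    for _ in range(iters):                              # bisection on the cost gap
        mid = 0.5 * (lo + hi)
        if cost_gap(mid, lam, mu_e, mu_c, p_e, p_c, alpha, d_t) < 0.0:
            lo = mid                                    # edge still cheaper: keep more tasks
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Example: f = wardrop_split(lam=4.0, mu_e=5.0, mu_c=8.0, p_e=0.5, p_c=0.9, alpha=0.5, d_t=0.05)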
6) Nash equilibrium is employed to model the overall revenue of the system:
Nash equilibrium is used to model the total revenue of the system, so that the system strives to obtain the maximum revenue. We define P_E* and P_C* as the Nash prices of the edge server and the cloud server, respectively; P_E* is given by a closed-form expression and, in the same way, P_C* is given by a closed-form expression. Both expressions and their auxiliary quantities are provided as formula images in the original filing, among which γ = 2(λ − 2μ_E)².
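The filing gives the Nash prices P_E* and P_C* in closed form (as figure images, with γ = 2(λ − 2μ_E)² among the auxiliary quantities). Since those forms are not reproduced here, the sketch below illustrates the underlying idea instead: a best-response iteration in which each server repeatedly picks, on a price grid, the price that maximizes its own revenue given the other server's current price and the Wardrop split induced by the pair of prices. All forms and values are assumptions, and the sketch reuses wardrop_split() from the step-5 sketch.

# Sketch of step 6: best-response iteration toward a Nash price pair (P_E*, P_C*).
# Each server scans a price grid and keeps the price maximizing its own revenue,
# given the other server's price and the resulting Wardrop split of tasks.

def revenues(p_e, p_c, lam, mu_e, mu_c, alpha, d_t):
    f = wardrop_split(lam, mu_e, mu_c, p_e, p_c, alpha, d_t)
    return p_e * f * lam, p_c * (1.0 - f) * lam         # (U_E, U_C) under "price x tasks"

def nash_prices(lam=4.0, mu_e=5.0, mu_c=8.0, alpha=0.5, d_t=0.05, rounds=20):
    grid = [0.05 * k for k in range(1, 61)]             # candidate prices 0.05 .. 3.00
    p_e, p_c = 1.0, 1.0
    for _ in range(rounds):
        p_e = max(grid, key=lambda p: revenues(p, p_c, lam, mu_e, mu_c, alpha, d_t)[0])
        p_c = max(grid, key=lambda p: revenues(p_e, p, lam, mu_e, mu_c, alpha, d_t)[1])
    return p_e, p_c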
7) evaluating the efficiency of the pricing strategy:
In order to better evaluate the efficiency of the pricing strategy, the invention introduces the price of anarchy (PoA). A social welfare function S is defined (its expression is given as a formula image in the original filing), where x and y are the hard deadline and soft deadline tasks, respectively.
Minimizing the social welfare function S yields the minimum social welfare S_min. Further, T* is defined as the total delay experienced in the network in the equilibrium state; both expressions are given as formula images in the original filing.
Finally, the PoA can be calculated from T* and S_min: PoA = T*/S_min.
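Step 7 compares the equilibrium outcome with the social optimum. Since S, S_min, and T* appear only as figure images in the filing, the sketch below computes the ratio PoA = T*/S_min under the assumption that the social welfare function is the total delay as a function of the task split: T* is the total delay under the Wardrop split, and S_min is the minimum total delay over all splits, found by grid search. It reuses wardrop_split() from the step-5 sketch.

# Sketch of step 7: price of anarchy, PoA = T*/S_min, assuming the social welfare
# function S is the total delay induced by a given split of tasks.

def total_delay(f: float, lam: float, mu_e: float, mu_c: float, d_t: float) -> float:
    """Total delay per unit time when a fraction f of the arrivals stays at the edge."""
    if f * lam >= mu_e:
        return float("inf")                              # unstable edge queue
    return f * lam / (mu_e - f * lam) + (1.0 - f) * lam * (1.0 / mu_c + d_t)

def price_of_anarchy(lam, mu_e, mu_c, p_e, p_c, alpha, d_t, steps=1000):
    f_eq = wardrop_split(lam, mu_e, mu_c, p_e, p_c, alpha, d_t)
    t_star = total_delay(f_eq, lam, mu_e, mu_c, d_t)                # equilibrium delay T*
    s_min = min(total_delay(k / steps, lam, mu_e, mu_c, d_t)        # grid search for S_min
                for k in range(steps + 1))
    return t_star / s_min

# Example: poa = price_of_anarchy(lam=4.0, mu_e=5.0, mu_c=8.0, p_e=0.5, p_c=0.9, alpha=0.5, d_t=0.05)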
Example 2: the invention also provides an optimal pricing system for soft-hard deadline task offloading in edge computing, which comprises:
a task partitioning module to perform: designing an edge-cloud computing system model, and dividing tasks in the system into high-priority tasks and low-priority tasks;
a queue model generation module to perform: establishing queue models for the edge server and the cloud server respectively, the edge server being modeled as an M/M/1 queue and the cloud server as an M/M/∞ queue; calculating the response times of the hard deadline tasks and the soft deadline tasks in the edge server and the cloud server;
a server cost calculation module to perform: respectively calculating the cost of the edge server and the cost of the cloud server;
a server revenue calculation module to perform: respectively calculating the profits of the edge server and the cloud server;
a system cost modeling module to perform: modeling the system cost using Wardrop equilibrium;
a system revenue modeling module to perform: modeling the system revenue using Nash equilibrium;
an efficiency evaluation module to perform: the efficiency of the pricing strategy is evaluated.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks. These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks. These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solutions of the present invention and not for limiting the same, and although the present invention is described in detail with reference to the above embodiments, those of ordinary skill in the art should understand that: modifications and equivalents may be made to the embodiments of the invention without departing from the spirit and scope of the invention, which is to be covered by the claims.

Claims (10)

1. The optimal pricing method for soft-hard deadline task offloading in edge computing is characterized by comprising the following steps of:
step SS 1: designing an edge-cloud computing system, wherein the edge-cloud computing system comprises an edge server and a cloud server, and tasks entering the system are divided into high-priority tasks and low-priority tasks according to their deadlines;
step SS 2: establishing queue models for the edge server and the cloud server respectively, the edge server being modeled as an M/M/1 queue and the cloud server as an M/M/∞ queue, where 1 indicates that there is only one server and ∞ indicates that there is an unlimited number of servers; calculating the response times of the hard deadline tasks and the soft deadline tasks in the edge server and the cloud server;
step SS 3: respectively calculating the cost of the edge server and the cost of the cloud server;
step SS 4: respectively calculating the profits of the edge server and the cloud server;
step SS 5: modeling the system cost using Wardrop equilibrium;
step SS 6: modeling the system revenue using Nash equilibrium;
step SS 7: the efficiency of the pricing strategy is evaluated.
2. The optimal pricing method for soft-hard deadline task offloading in edge computing according to claim 1, wherein the division of tasks in the system in step SS1 is based on the time deadlines of the tasks; high-priority tasks typically require a hard deadline and low-priority tasks require a soft deadline, so the tasks are also divided into hard deadline and soft deadline tasks; let λ be the number of arriving tasks:
λ = λ_H + λ_S
where λ_H is the number of hard deadline tasks and λ_S is the number of soft deadline tasks.
3. The optimal pricing method for soft-hard deadline task offloading in edge computing according to claim 2, wherein the step SS2 specifically comprises:
the number of hard deadline tasks consists of the number of hard deadline tasks in the edge server, λ_H^E, and the number of hard deadline tasks in the cloud server, λ_H^C; similarly, the number of soft deadline tasks consists of the number of soft deadline tasks in the edge server, λ_S^E, and the number of soft deadline tasks in the cloud server, λ_S^C; then:
λ_H = λ_H^E + λ_H^C,  λ_S = λ_S^E + λ_S^C
the response times of the hard deadline task on the edge server and the cloud server are denoted T_H^E and T_H^C, respectively, and the response times of the soft deadline task on the edge server and the cloud server are denoted T_S^E and T_S^C, respectively (their closed-form expressions are given as formula images in the original filing), where μ_E and μ_C are the service rates of the edge server and the cloud server, respectively, i.e., the number of tasks each completes per unit time.
4. The optimal pricing method for soft-hard deadline task offloading in edge computing according to claim 3, wherein said step SS3 specifically comprises:
the cost of the hard deadline task in the edge server is denoted C_H^E and the cost of the soft deadline task in the edge server is denoted C_S^E (their expressions are given as formula images in the original filing), where α is a parameter used to balance the relationship between delay and cost, i.e., the longer the task processing time, the higher the total cost, and P_E is the cost of a single task on the edge server;
the costs of the hard deadline task and the soft deadline task in the cloud server are denoted C_H^C and C_S^C, respectively, where d_t is the average transmission delay for transferring a single task from the edge server to the cloud server and P_C is the cost of a single task on the cloud server.
5. The optimal pricing method for soft-hard deadline task offloading in edge computing according to claim 4, wherein the step SS4 specifically comprises:
the edge server revenue is denoted U_E and the cloud server revenue is denoted U_C (their expressions are given as formula images in the original filing), where λ_H^E is the number of hard deadline tasks in the edge server, λ_H^C is the number of hard deadline tasks in the cloud server, λ_S^E is the number of soft deadline tasks in the edge server, and λ_S^C is the number of soft deadline tasks in the cloud server.
6. The optimal pricing method for soft-hard deadline task offloading in edge computing according to claim 5, wherein the step SS5 specifically comprises:
the total costs of the soft and hard deadline tasks in equilibrium are equal (the equilibrium condition is given as a formula image in the original filing); the proportion of the soft deadline tasks in the total tasks plays an important role in computing the total costs of the cloud server and the edge server, and its equilibrium expression is therefore written out (as a formula image in the original filing), where C_S^E represents the cost of the soft deadline task at the edge server, C_S^C represents the cost of the soft deadline task at the cloud server, α > 0 is a parameter for balancing the relation between delay and cost, λ_S denotes the number of soft deadline tasks arriving at the system, λ denotes the total number of tasks arriving at the system and is the sum of the number of soft deadline tasks and the number of hard deadline tasks, μ_C represents the service rate of the cloud server, μ_E represents the service rate of the edge server, and φ is the offset of the deadline.
7. The optimal pricing method for soft-hard deadline task offloading in edge computing according to claim 6, wherein said step SS6 specifically comprises:
P_E* and P_C* are defined as the Nash prices of the edge server and the cloud server, respectively; P_E* is given by a closed-form expression and, in the same way, P_C* is given by a closed-form expression (both expressions and their auxiliary quantities are provided as formula images in the original filing), among which γ = 2(λ − 2μ_E)².
8. The optimal pricing method for soft-hard deadline task offloading in edge computing according to claim 7, wherein the step SS7 specifically comprises:
in order to evaluate the efficiency of the pricing strategy, the price of anarchy (PoA) is introduced; a social welfare function S is defined (its expression is given as a formula image in the original filing), where x and y are the hard deadline and soft deadline tasks, respectively;
minimizing the social welfare function S yields the minimum social welfare S_min, and T* is the total delay experienced in the network in the equilibrium state (both expressions are given as formula images in the original filing).
9. The optimal pricing method for soft-hard deadline task offloading in edge computing according to claim 8, wherein the step SS7 further comprises: the PoA is calculated from T* and S_min: PoA = T*/S_min.
10. An optimal pricing system for soft-hard deadline task offloading in edge computing, comprising:
a task partitioning module to perform: designing an edge-cloud computing system model, and dividing tasks in the system into high-priority tasks and low-priority tasks;
a queue model generation module to perform: establishing queue models for the edge server and the cloud server respectively, the edge server being modeled as an M/M/1 queue and the cloud server as an M/M/∞ queue; calculating the response times of the hard deadline tasks and the soft deadline tasks in the edge server and the cloud server;
a server cost calculation module to perform: respectively calculating the cost of the edge server and the cost of the cloud server;
a server revenue calculation module to perform: respectively calculating the profits of the edge server and the cloud server;
a system cost modeling module to perform: modeling the system cost using Wardrop equilibrium;
a system revenue modeling module to perform: modeling the system revenue using Nash equilibrium;
an efficiency evaluation module to perform: the efficiency of the pricing strategy is evaluated.
CN202010975628.9A 2020-09-16 2020-09-16 Optimal pricing method and system for soft-hard deadline task offloading in edge computing Pending CN112099932A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010975628.9A CN112099932A (en) 2020-09-16 2020-09-16 Optimal pricing method and system for soft-hard deadline task offloading in edge computing


Publications (1)

Publication Number Publication Date
CN112099932A 2020-12-18

Family

ID=73759287

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010975628.9A Pending CN112099932A (en) 2020-09-16 2020-09-16 Optimal pricing method and system for soft-hard deadline task offloading in edge computing

Country Status (1)

Country Link
CN (1) CN112099932A (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107122249A (en) * 2017-05-10 2017-09-01 重庆邮电大学 A kind of task unloading decision-making technique based on edge cloud pricing mechanism
CN110351760A (en) * 2019-07-19 2019-10-18 重庆邮电大学 A kind of mobile edge calculations system dynamic task unloading and resource allocation methods
CN110888687A (en) * 2019-09-27 2020-03-17 华北水利水电大学 Mobile edge computing task unloading optimal contract design method based on contract design
CN110996393A (en) * 2019-12-12 2020-04-10 大连理工大学 Single-edge computing server and multi-user cooperative computing unloading and resource allocation method
CN111163521A (en) * 2020-01-16 2020-05-15 重庆邮电大学 Resource allocation method in distributed heterogeneous environment in mobile edge computing
CN111400001A (en) * 2020-03-09 2020-07-10 清华大学 Online computing task unloading scheduling method facing edge computing environment

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113238839A (en) * 2021-04-26 2021-08-10 深圳微品致远信息科技有限公司 Cloud computing based data management method and device
CN113791878A (en) * 2021-07-21 2021-12-14 南京大学 Distributed task unloading method for deadline perception in edge calculation
CN113791878B (en) * 2021-07-21 2023-11-17 南京大学 Distributed task unloading method for perceiving expiration date in edge calculation


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination