CN110287024B - Multi-server and multi-user oriented scheduling method in industrial intelligent edge computing - Google Patents

Multi-server and multi-user oriented scheduling method in industrial intelligent edge computing

Info

Publication number
CN110287024B
CN110287024B
Authority
CN
China
Prior art keywords
server
user
task
tasks
scheduling
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910506190.7A
Other languages
Chinese (zh)
Other versions
CN110287024A (en)
Inventor
骆淑云
温雨舟
徐伟强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Sci Tech University ZSTU
Original Assignee
Zhejiang Sci Tech University ZSTU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Sci Tech University ZSTU filed Critical Zhejiang Sci Tech University ZSTU
Priority to CN201910506190.7A priority Critical patent/CN110287024B/en
Publication of CN110287024A publication Critical patent/CN110287024A/en
Application granted granted Critical
Publication of CN110287024B publication Critical patent/CN110287024B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061Partitioning or combining of resources
    • G06F9/5072Grid computing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/50Indexing scheme relating to G06F9/50
    • G06F2209/509Offload

Abstract

The invention discloses a scheduling method for multiple servers and multiple users in industrial intelligent edge computing, which comprises the following steps: S1, the client selects the server with the maximum transmission rate and sends a computation offload request; S2, the server selects a scheduling algorithm to schedule the received tasks and sends acceptance or rejection information to the user; if the task is accepted, step S4 is executed, otherwise step S3 is executed; S3, the user reduces its computation offload amount according to the server's schedule and repeatedly sends computation offload requests to the server until a request is accepted by the server or the user stops requesting on its own; S4, the server charges the user a computation offload fee. The scheduling method of the invention is oriented to multi-server, multi-user applications, and the offloaded tasks satisfy their delay requirements. The task scheduling method not only satisfies the users' individual rationality and the truthfulness of their offers, but also, under the limitation of server computing resources, maximizes the computing time saved by the users and the profit obtained by the server.

Description

Multi-server and multi-user oriented scheduling method in industrial intelligent edge computing
Technical Field
The invention relates to the technical field of edge computing, in particular to a scheduling method in edge computing, and particularly relates to a scheduling method for multiple servers and multiple users in industrial intelligent edge computing.
Background
The Industrial Internet of Things (IIoT) is the subset of the Internet of Things (IoT) used in industrial applications; it enables a large number of industrial devices (users) to jointly monitor and analyze industrial big data, thereby improving the production quality and efficiency of enterprises. However, because of their limited computing power, users cannot process tasks with higher computing requirements, such as fault prediction and image analysis. In addition, transmitting the data to a distant data center with strong computing power suffers from excessive delay and privacy risks, and is therefore unsuitable for the IIoT environment. Industrial intelligent edge computing therefore deploys Mobile Edge Computing (MEC) servers with higher computing power near the network edge to provide a fee-based computation offloading service for users. Since the MEC servers themselves have limited resources, are mostly provided by third parties, and have different transmission rates to different users, a reasonable scheduling method needs to be designed so that the MEC servers provide computation offloading that satisfies the users' quality of service (QoS).
In the prior art, Sun Wen et al. propose a resource allocation method based on bilateral auctions for mobile edge computing in the industrial Internet of Things, but do not consider the influence of the transmission rates between the MEC server and different users. Zhang Cheng et al. design a density-based offloading policy for Internet-of-Things devices in a mobile edge system, but assume that the MEC server can accept all tasks uploaded to it, an assumption that is difficult to satisfy given the server's limited computing resources. Li Longjiang et al. design a task-arrival- and load-aware computation offloading model for vehicular mobile edge computing networks that accounts for factors such as the distance between user and server and the server load, but do not design a reasonable incentive mechanism to make the server willing to provide computation offloading. Zhang Tian et al. propose a joint pricing model for the computation offloading problem in edge computing that considers server load and selfishness, but do not consider multi-server application scenarios. Therefore, these methods cannot be applied to multi-user, multi-server application scenarios in which the servers have limited resources and are selfish.
In industrial intelligent edge computing application, most scenes are complex, a plurality of MEC servers and large-scale users are generally included, and due to the reasons of distance, bandwidth and the like, the transmission rates from the servers to the users are different, and the users are required to select proper MEC servers to obtain computing unloading services. The MEC server has limited computational resources and therefore cannot meet the offloading requirements of all the requested tasks while guaranteeing QoS.
Furthermore, since MEC servers are typically provided by third parties and are selfish, they are not willing to provide computation offloading proactively. If the user does not offer a reasonable computation fee, the server will refuse to provide computation offloading and the industrial intelligent edge computing framework will not operate normally.
Therefore, the existing scheduling methods have the following defects:
First, methods oriented to multi-server, multi-user scenarios do not consider that the same user obtains different offloading performance from different servers.
Second, the limitation imposed by the MEC server's finite computing resources is not considered; not all tasks can be accepted while still meeting the users' QoS requirements.
Third, it is not considered that the MEC server is provided by a third party and is selfish, so a reasonable incentive mechanism must be designed to make the MEC server willing to provide the computation offloading service.
Fourth, the truthfulness of users' offers is not considered; a reasonable mechanism must be designed so that users submit their true offers.
In view of this, in a multi-server, multi-user application scenario, how to account for the performance differences of the computation offloading services provided by different servers, how to meet users' QoS requirements under the limitation of the MEC server's computing resources, how to design a reasonable incentive mechanism so that the MEC server is willing to provide computation offloading, and how to ensure that users submit truthful offers are problems that urgently need to be solved by those skilled in the art.
Disclosure of Invention
The invention aims to provide a scheduling method for multiple servers and multiple users in industrial intelligent edge computing that addresses the defects of the prior art. The method enables the MEC server to provide a computation offloading service that satisfies the users' QoS, allows the server to obtain the maximum profit, and reduces the users' computation time while satisfying their individual rationality. The method solves problems of prior-art scheduling methods such as low practicality and poor stability.
In order to achieve the purpose, the invention adopts the following technical scheme:
a scheduling method for multiple servers and multiple users in industrial intelligent edge computing comprises the following steps:
s1, the client selects the server with the maximum transmission rate to send a calculation unloading request;
s2, the server selects a scheduling algorithm to schedule the received tasks and sends acceptance or rejection information to the user; if the task is accepted, step S4 is executed; if not, step S3 is executed;
s3, the user reduces the self-calculation unloading amount according to the server scheduling table, and repeatedly sends calculation unloading requests to the server until the calculation unloading requests are accepted by the server or the user autonomously stops the requests;
s4, the server charges the user a calculation offload fee.
Further, the transmission rate is:
c_{ij} = w_j \log_2\!\left(1 + \frac{pow_i \cdot decay}{dis_{ij} \cdot n_{ij}}\right)

where i denotes a user, j denotes a server, w_j is the server transmission bandwidth, pow_i is the user signal power, dis_{ij} is the distance between the user and the server, decay is the attenuation constant of the signal power with distance, and n_{ij} is the channel noise power.
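As a non-authoritative illustration of step S1, the following Python sketch computes the link rate and picks the highest-rate reachable server. The function and field names (transmission_rate, select_server, 'pos', 'w', 'n') are illustrative, and the rate expression follows the formula as reconstructed above, so it is only a sketch under those assumptions.

```python
import math

def transmission_rate(w_j, pow_i, dis_ij, n_ij, decay):
    # Shannon-type link rate, following the formula as reconstructed above:
    # c_ij = w_j * log2(1 + pow_i * decay / (dis_ij * n_ij))
    return w_j * math.log2(1 + pow_i * decay / (dis_ij * n_ij))

def select_server(user_pos, pow_i, servers, decay):
    """Step S1 sketch: return (index, rate) of the reachable server with the
    largest transmission rate. Each entry of `servers` is a dict with keys
    'pos' (x, y, z coordinates), 'w' (bandwidth) and 'n' (channel noise power)."""
    best_j, best_rate = None, float("-inf")
    for j, srv in enumerate(servers):
        dis = math.dist(user_pos, srv["pos"])   # Euclidean distance, as in the embodiment
        rate = transmission_rate(srv["w"], pow_i, dis, srv["n"], decay)
        if rate > best_rate:
            best_j, best_rate = j, rate
    return best_j, best_rate
```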
Further, the computation offload request is:
[o_i, d_i, b_i, u_i]

where o_i is the amount of computation to be offloaded for the task, d_i is the maximum delay for completing the task, b_i is the offer the user pays the server for computation offloading while satisfying individual rationality, and u_i is the user number;
the initial value of the offload amount is the entire computation amount of the task; the offer for computation offloading is less than the benefit e_i the user gains from the computing time saved by offloading:

b_i \le e_i = k \cdot \frac{o_i}{c_i}

where e_i is the user benefit, c_i is the user's own computing capacity, and k is the conversion factor between unit time and currency, determined by the requirements of the upper-layer application.
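A minimal sketch of how a user could assemble the request [o_i, d_i, b_i, u_i] while keeping the offer at or below e_i = k·o_i/c_i; the OffloadRequest class and the bid_fraction parameter are illustrative assumptions, not something specified by the patent.

```python
from dataclasses import dataclass

@dataclass
class OffloadRequest:
    o: float  # amount of computation to offload (initially the task's full amount)
    d: float  # maximum tolerable completion delay
    b: float  # offer paid to the server, kept at or below e_i = k * o_i / c_i
    u: int    # user number

def make_request(o_i, d_i, c_i, k, u_i, bid_fraction=0.85):
    """Build an individually rational request: offer a fraction of the user's
    benefit e_i = k * o_i / c_i (the fraction is only an illustrative choice)."""
    e_i = k * o_i / c_i
    return OffloadRequest(o=o_i, d=d_i, b=bid_fraction * e_i, u=u_i)
```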
Further, the specific steps of selecting the scheduling algorithm are as follows:
s21, the server determines a user number threshold T for sending a calculation unloading request;
s22, the server predicts the current user number, if the predicted number is larger than T, the maximum unit price algorithm is used, otherwise, the maximum total price algorithm is used.
Further, the prediction number is:
A = \alpha \cdot time + \beta \cdot user_{short} + \gamma \cdot user_{long}

where time characterizes the user-number level of the current time period, user_{short} is the average number of users per time slice over the t_{short} time slices preceding the current one, user_{long} is the average number of users per time slice over the t_{long} time slices preceding the current one, with t_{short} < t_{long}, and \alpha, \beta and \gamma are weight coefficients.
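A small sketch of steps S21–S22, assuming the two algorithms are simply identified by string labels (the labels and function names are illustrative):

```python
def predict_user_count(time_level, user_short, user_long, alpha, beta, gamma):
    # A = alpha * time + beta * user_short + gamma * user_long
    return alpha * time_level + beta * user_short + gamma * user_long

def choose_algorithm(predicted, threshold_T):
    # More requesters than the threshold -> rank tasks by unit price,
    # otherwise rank them by total price.
    return "max_unit_price" if predicted > threshold_T else "max_total_price"
```

With the numbers used in the embodiment below (time = 0, user_short = 3, α = 0.1, β = 0.5, γ = 0.4, T = 3, and user_long taken as 5 so that A = 3.5), this returns the maximum unit price algorithm.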
Further, the scheduling algorithm comprises the following specific steps:
(1) the server deletes the completed tasks from the schedule and modifies the maximum delay d_n of the task currently being executed to its scheduled end time t_n^{end}, where t_n^{end} is the task end time and d_n is the maximum delay of the task;
(2) the existing tasks in the server are rescheduled as late as possible;
(3) if the scheduling algorithm is the maximum total price algorithm, the server sorts all received tasks by b_i in descending order; if the scheduling algorithm is the maximum unit price algorithm, the server sorts all received tasks by b_i·c_j/o_i in descending order, where c_j is the computing capacity of the server;
(4) the sorted tasks are inserted into the schedule one by one from left to right and scheduled as late as possible; if a task can be inserted into the schedule at this point, acceptance information is sent to the user, otherwise rejection information is sent to the user;
(5) for each accepted task t_a, remove it from the current task set and re-execute steps (2)-(4) of the scheduling algorithm until a previously rejected task t_r becomes accepted. If the scheduling algorithm is the maximum total price algorithm, p_a is b_r; if the scheduling algorithm is the maximum unit price algorithm, p_a is (b_r·c_j/o_r)·(o_a/c_j) = b_r·o_a/o_r. If no such t_r is accepted, p_a is b_a. Here p_a is the cost charged for task t_a.
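The sketch below puts steps (2)–(5) together for a single time slice that starts from an empty schedule, treating each task as one contiguous block placed as late as possible before its deadline; this is only one reading of the algorithm, the helper names are illustrative, and OffloadRequest is the request structure sketched earlier. Usage would be `schedule, prices = price_accepted(requests, c_j, mode)` once `mode` has been chosen as in steps S21–S22.

```python
def place_as_late_as_possible(schedule, length, deadline):
    """Find the latest contiguous slot of `length` ending no later than `deadline`
    that does not overlap the [start, end) intervals already in `schedule`.
    Time is measured from 0 within the slice. Returns (start, end) or None."""
    busy = sorted((entry[0], entry[1]) for entry in schedule)
    end = deadline
    for s, e in reversed(busy):
        if end - length >= e:          # fits in the gap after interval (s, e)
            return end - length, end
        end = min(end, s)              # otherwise move before this interval
    return (end - length, end) if end - length >= 0 else None

def schedule_tasks(requests, c_j, mode):
    """Steps (2)-(4): sort by total price b_i or unit price b_i*c_j/o_i,
    then greedily place each task as late as possible before its deadline."""
    key = (lambda r: r.b) if mode == "max_total_price" else (lambda r: r.b * c_j / r.o)
    schedule, accepted, rejected = [], [], []
    for r in sorted(requests, key=key, reverse=True):
        slot = place_as_late_as_possible(schedule, r.o / c_j, r.d)
        if slot is not None:
            schedule.append([slot[0], slot[1], r.d, r.b, r.u])  # [start, end, d, b, u]
            accepted.append(r)
        else:
            rejected.append(r)
    return schedule, accepted, rejected

def price_accepted(requests, c_j, mode):
    """Step (5): for each accepted task t_a, rerun the scheduling without it;
    the first originally rejected task t_r that becomes accepted sets the price
    (b_r in total-price mode, b_r * o_a / o_r in unit-price mode), else p_a = b_a."""
    schedule, accepted, rejected = schedule_tasks(requests, c_j, mode)
    prices = {}
    for a in accepted:
        _, now_accepted, _ = schedule_tasks([r for r in requests if r is not a], c_j, mode)
        t_r = next((r for r in rejected if r in now_accepted), None)
        if t_r is None:
            prices[a.u] = a.b
        elif mode == "max_total_price":
            prices[a.u] = t_r.b
        else:
            prices[a.u] = (t_r.b * c_j / t_r.o) * (a.o / c_j)   # = b_r * o_a / o_r
    return schedule, prices
```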
Further, the schedule table is:
[t_n^{start}, t_n^{end}, d_n, p_n, u_n]

where n is the task, t_n^{start} is the task start time, t_n^{end} is the task end time, d_n is the maximum delay of the task, p_n is the cost of the task, and u_n is the number of the user to which the task belongs.
Further, reducing the computation offload amount and repeatedly sending computation offload requests to the server comprises:
according to the schedule of the selected server, the user places the scheduled tasks whose end times fall before d_i as early as possible and places the remaining tasks as late as possible according to their d_n, and then continuously reduces the offload amount by a suitable step size so that the server can schedule the task, until the user's payment price would fall below 0 or the server is already fully loaded before d_i;
the suitable step size is:

\frac{o_i}{m}

where m is the maximum number of times the user is allowed to submit offload requests within one time slice, determined by the server.
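A sketch of the retry loop of step S3 with step size o_i/m, assuming the offer is scaled down in proportion to the offload amount (as in the embodiment below); server_can_schedule is a hypothetical callback standing in for the server-side check, including whether the server is already full before d_i.

```python
def shrink_and_retry(req, server_can_schedule, m, k, c_i):
    """Step S3 sketch: reduce the offload amount by o_i / m per round and rebid,
    until the request is accepted, the offer drops to 0, or the amount runs out.
    The 'server already full before d_i' condition lives inside the callback."""
    step = req.o / m                        # the suitable step size o_i / m
    while req.o > 0 and req.b > 0:
        if server_can_schedule(req):        # hypothetical server-side check
            return True
        scale = max(req.o - step, 0) / req.o
        req.o *= scale                                  # shrink the offload amount
        req.b = min(req.b * scale, k * req.o / c_i)     # rebid, staying below e_i
    return False
```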
The invention provides a scheduling method for multiple servers and multiple users in industrial intelligent edge computing that maximizes the server's profit and reduces the users' computation offloading time while guaranteeing the users' QoS. The optimal server is selected by computing the distance and the transmission rate between the user and each server, which helps guarantee the user's QoS. When sending a computation offload request, the user can submit the maximum delay requirement of the task, ensuring that the server provides a service meeting that requirement. Two scheduling algorithms are designed, corresponding to the cases of more and fewer predicted users, which ensures the maximum profit of the server. In addition, an incentive mechanism is designed that both satisfies the users' individual rationality and guarantees the truthfulness of their offers, ensuring the soundness of the scheduling method.
Drawings
FIG. 1 is a flow chart illustrating a scheduling method of the present invention;
FIG. 2 is a diagram of a multi-server, multi-user scenario of the present invention.
Figs. 3, 4, 5, 6, 7 and 8 are schematic diagrams of the schedule at different stages of the scheduling process of the present invention.
Detailed Description
The embodiments of the present invention are described below with reference to specific embodiments, and other advantages and effects of the present invention will be easily understood by those skilled in the art from the disclosure of the present specification. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. It is to be noted that the features in the following embodiments and examples may be combined with each other without conflict.
It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present invention, and the components related to the present invention are only shown in the drawings rather than drawn according to the number, shape and size of the components in actual implementation, and the type, quantity and proportion of each component in actual implementation may be changed arbitrarily, and the layout of the components may be more complicated.
In order to achieve the above and other related objects, the present invention provides a scheduling method for multiple servers and multiple users in industrial intelligent edge computing. The method enables the MEC server to provide a computation offloading service that satisfies the users' QoS, allows the server to obtain the maximum profit, and reduces the users' computation time while satisfying their individual rationality.
In the application of the existing industrial internet of things, the condition that a plurality of users and a plurality of servers form a calculation unloading service network is common. For example, various industrial devices collect device parameter setting information, product quality detection information, and the like through various sensors, and have limited computing capabilities, and a computing offloading service needs to be provided through the MEC server. Therefore, the invention is mainly based on multi-server and multi-user application, namely, a user selects the most appropriate server to perform calculation unloading, and the server can receive task unloading demand information submitted by a plurality of users at the same time.
The invention is further described with reference to the following drawings and specific examples, which are not intended to be limiting.
As shown in fig. 1, the present embodiment provides a scheduling method for multiple servers and multiple users in industrial intelligent edge computing, including:
s1, the client selects the server with the maximum transmission rate to send a calculation unloading request;
specifically, the specific step of selecting the server with the maximum transmission rate in step S1 is:
s11, the user side lists the server list which can be served;
s12, calculating the distance between each server and the user;
s13, calculating the link rate between each server and the user based on the distance;
and S14, selecting the server with the maximum link rate.
Specifically, the distance between the server and the user is:
dis_{ij} = \sqrt{(x_i - x_j)^2 + (y_i - y_j)^2 + (z_i - z_j)^2}

where i is a user, j is a server, and (x_i, y_i, z_i), (x_j, y_j, z_j) are the location coordinates of the user and the server, respectively.
Specifically, the link rate between the server and the user is:
c_{ij} = w_j \log_2\!\left(1 + \frac{pow_i \cdot decay}{dis_{ij} \cdot n_{ij}}\right)

where i is a user, j is a server, w_j is the server transmission bandwidth, pow_i is the user signal power, dis_{ij} is the distance between the user and the server, decay is the attenuation constant of the signal power with distance, and n_{ij} is the channel noise power.
Specifically, the computation offload request is:
[o_i, d_i, b_i, u_i]

where o_i is the amount of computation to be offloaded for the task, d_i is the maximum delay for completing the task, b_i is the offer the user pays the server for computation offloading while satisfying individual rationality, and u_i is the user number.
Specifically, the initial value of the offload amount is the entire computation amount of the task.
Specifically, the offer for computation offloading is less than the benefit the user gains from the computing time saved by offloading:

b_i \le e_i = k \cdot \frac{o_i}{c_i}

where e_i is the user benefit, c_i is the user's own computing capacity, and k is the conversion factor between unit time and currency, determined by the requirements of the upper-layer application.
S2, the server selects a scheduling algorithm to schedule the received tasks and sends acceptance or rejection information to the user; if the task is accepted, step S4 is executed; if not, step S3 is executed;
specifically, the specific step of selecting the scheduling algorithm in step S2 is:
s21, the server determines a user number threshold T for sending a calculation unloading request;
s22, the server predicts the current user number, if the predicted number is larger than T, the maximum unit price algorithm is used, otherwise, the maximum total price algorithm is used.
Specifically, the prediction number is:
A = \alpha \cdot time + \beta \cdot user_{short} + \gamma \cdot user_{long}

where time characterizes the user-number level of the current time period, user_{short} is the average number of users per time slice over the t_{short} time slices preceding the current one, user_{long} is the average number of users per time slice over the t_{long} time slices preceding the current one, with t_{short} < t_{long}, and \alpha, \beta and \gamma are weight coefficients.
Specifically, the scheduling algorithm includes the specific steps of:
(1) The server deletes the completed tasks from the schedule and modifies the maximum delay d_n of the task currently being executed to its scheduled end time t_n^{end}, where t_n^{end} is the task end time and d_n is the maximum delay of the task.
(2) The existing tasks in the server are rescheduled as late as possible.
(3) If the scheduling algorithm is the maximum total price algorithm, the server sorts all received tasks by b_i in descending order; if the scheduling algorithm is the maximum unit price algorithm, the server sorts all received tasks by b_i·c_j/o_i in descending order, where c_j is the computing capacity of the server.
(4) The sorted tasks are inserted into the schedule one by one from left to right and scheduled as late as possible; if a task can be inserted into the schedule, acceptance information is sent to the user, otherwise rejection information is sent to the user.
(5) For each accepted task t_a, remove it from the current task set and re-execute steps (2)-(4) of the scheduling algorithm until a previously rejected task t_r becomes accepted. If the scheduling algorithm is the maximum total price algorithm, p_a is b_r; if the scheduling algorithm is the maximum unit price algorithm, p_a is (b_r·c_j/o_r)·(o_a/c_j) = b_r·o_a/o_r. If no such t_r is accepted, p_a is b_a. Here p_a is the cost charged for the task.
S3, the user reduces the self-calculation unloading amount according to the server scheduling table, and repeatedly sends calculation unloading requests to the server until the calculation unloading requests are accepted by the server or the user autonomously stops the requests;
specifically, the schedule table in step S3 is:
[t_n^{start}, t_n^{end}, d_n, p_n, u_n]

where n is the task, t_n^{start} is the task start time, t_n^{end} is the task end time, d_n is the maximum delay of the task, p_n is the cost of the task, and u_n is the number of the user to which the task belongs.
Specifically, the reducing of the computation offload amount and repeated sending of requests to the server in step S3 comprises:
according to the schedule of the selected server, the user places the scheduled tasks whose end times fall before d_i as early as possible and places the remaining tasks as late as possible according to their d_n, and then continuously reduces the offload amount by a suitable step size so that the server can schedule the task, until the user's payment price would fall below 0 or the server is already fully loaded before d_i.
Specifically, the suitable step size is:

\frac{o_i}{m}

where m is the maximum number of times the user is allowed to submit offload requests within one time slice, determined by the server.
S4, the server charges the user a calculation offload fee.
The delay-aware scheduling method for multi-server and multi-user application in industrial intelligent edge computing according to the present invention is described with reference to the multi-server and multi-user task scene diagram shown in fig. 2.
Assume that there are 4 MEC servers in the scene, s_0, s_1, s_2, s_3, with coordinates (1,1,0), (2,2,0), (3,3,0) and (1,1,1) respectively, and transmission bandwidths w_j all equal to 1. There are 4 users, u_0, u_1, u_2, u_3, with coordinates (1,0,0), (0,1,0), (0,0,1), (0,0,0), and signal powers pow_i all equal to 10. Let the channel noise powers n_ij all be 1 and the attenuation constant decay be 0.1.
In the current time slice, all 4 users have a computation offload task to submit to the server. Due to privacy and security constraints, the list of servers that can serve the 4 users is s_0, s_1, s_2. Each user calculates its distance to every server in the list: dis_00=1, dis_01=2.24, dis_02=3.61; dis_10=1, dis_11=2.24, dis_12=3.61; dis_20=1.73, dis_21=3, dis_22=4.36; dis_30=1.41, dis_31=2.83, dis_32=4.24. Each user then calculates its transmission rate to every server in the list: c_00=1, c_01=1.69, c_02=2.2; c_10=1, c_11=1.69, c_12=2.2; c_20=1.45, c_21=2, c_22=2.42; c_30=1.27, c_31=1.94, c_32=2.39. Based on the above calculations, the transmission rates between server s_0 and the 4 users are all the largest, so all 4 users choose to send their computation offload requests to server s_0.
The computing capacities of users u_0, u_1, u_2, u_3 are c_0=5, c_1=5, c_2=1, c_3=2. Their offload amounts in the current time slice are o_0=1000, o_1=3000, o_2=1000, o_3=2000. The maximum delays for completing the tasks are d_0=20, d_1=60, d_2=50, d_3=60. The conversion factor k between unit time and currency is 1. Thus, the offers paid by the users to the server for computation offloading are b_0=170, b_1=530, b_2=940, b_3=200.
From the above information, the computation offload requests submitted by users u_0, u_1, u_2, u_3 are [1000,20,170,0], [3000,60,530,1], [1000,50,940,2], [2000,60,200,3], respectively.
The threshold T on the number of users sending computation offload requests currently set by the server is 3, and the user-number level of the current time period is off-peak: time = 0, user_short = 3, user_long = 5. The weight coefficients are α = 0.1, β = 0.5 and γ = 0.4. Therefore, the predicted number A is 3.5, which is larger than T, so the server selects the maximum unit price scheduling algorithm.
The computing capacity of the server is c_j = 100 and the server has no tasks awaiting computation. The server calculates the unit price of each task as 17, 17.67, 94 and 10, respectively. Therefore, the tasks are ordered as: [1000,50,940,2], [3000,60,530,1], [1000,20,170,0], [2000,60,200,3].
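As a quick check of the unit prices quoted above (with c_j = 100), a few lines of Python reproduce the ranking; the dictionary layout is only for illustration.

```python
# (o_i, b_i) per user, taken from the requests listed above
requests = {0: (1000, 170), 1: (3000, 530), 2: (1000, 940), 3: (2000, 200)}
c_j = 100
unit_price = {u: round(b * c_j / o, 2) for u, (o, b) in requests.items()}
# unit_price == {0: 17.0, 1: 17.67, 2: 94.0, 3: 10.0}
# sorting in descending order of unit price gives the task order u2, u1, u0, u3
```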
First, a first task is scheduled, which can be enqueued into the schedule, so that the task is received and enqueued to the right of the schedule as far as possible, as shown in fig. 3, where the schedule is:
{[40,50,50,940,2]}。
next, a second task is scheduled, which may be enqueued on the schedule, so that the task is received and enqueued to the right of the schedule as far as possible, as shown in FIG. 4, where the schedule is:
{[10,40,40,530,1],[40,50,50,940,2]}。
next, a third task is scheduled, which may be enqueued on the schedule, so that the task is received and enqueued to the right of the schedule as far as possible, as shown in FIG. 5, when the schedule is:
{[0,10,20,170,0],[10,40,40,530,1],[40,50,50,940,2]}。
next, a fourth task is scheduled, which cannot be enqueued to the scheduler, and is therefore rejected.
The first accepted task [1000,50,940,2] is removed from the task list and the scheduling algorithm is executed again, where the task list is: [3000,60,530,1], [1000,20,170,0], [2000,60,200,3 ].
First, the first task is scheduled; it can be placed into the schedule and is a task that was accepted in the first scheduling round, so it is placed as far to the right of the schedule as possible, as shown in fig. 6. The schedule is:
{[30,60,60,530,1]}。
Next, the second task is scheduled; it can be placed into the schedule and is also a task that was accepted in the first scheduling round, so it is placed as far to the right of the schedule as possible, as shown in fig. 7. The schedule is:
{[10,20,20,170,0],[30,60,60,530,1]}。
The third task is then scheduled and can be placed into the schedule; it is the task that was rejected in the first scheduling round, so the cost of task [1000,50,940,2] is set to 100, i.e. the unit price of task [2000,60,200,3] (which is 10) multiplied by the execution time of task [1000,50,940,2] on the server (1000/100 = 10), and the pricing process for task [1000,50,940,2] ends.
Pricing the second and third accepted tasks by the same method gives a price of 300 for task [3000,60,530,1] and a price of 170 for task [1000,20,170,0].
So far, all tasks are priced, and the scheduling table at this time is as follows:
{[0,10,20,170,0],[10,40,40,300,1],[40,50,50,100,2]}
For the rejected task [2000,60,200,3], its user u_3, based on the server's schedule and the maximum of 2 offload-request submissions, reduces the offload amount to 1000, adjusts the offer to 100, and submits the task [1000,60,100,3] to the server.
The server schedules the task, which may be placed in the schedule, so that the task is received and placed as far to the right of the schedule as possible, as shown in fig. 8, where the schedule is:
{[0,10,20,170,0],[10,40,40,300,1],[40,50,50,100,2],[50,60,60,100,3]}.
the server prices tasks [1000,60,100,3] to get a price of 100, and the schedule at this time is:
{[0,10,20,170,0],[10,40,40,300,1],[40,50,50,100,2],[50,60,60,100,3]}
The server charges users u_0, u_1, u_2 and u_3 computation offload fees of 170, 300, 100 and 100, respectively.
At this point, no user submits the task for the time slice, and the scheduling is finished.
In conclusion, the invention provides a delay-aware scheduling method for multi-server, multi-user applications in industrial intelligent edge computing that maximizes the server's benefit and reduces the users' computation offloading time while guaranteeing the users' QoS. The optimal server is selected by computing the distance and the transmission rate between the user and each server, which helps guarantee the user's QoS. When sending a computation offload request, the user can submit the maximum delay requirement of the task, ensuring that the server provides a service meeting that requirement. Two scheduling algorithms are designed, corresponding to the cases of more and fewer predicted users, which ensures the maximum benefit of the server. In addition, an incentive mechanism is designed that both satisfies the users' individual rationality and guarantees the truthfulness of their offers, ensuring the soundness of the scheduling method.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing the relevant hardware; the program may be stored in a computer-readable storage medium, and the storage medium may include: ROM, RAM, magnetic disks, optical disks, and the like.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions without departing from the scope of the invention. Therefore, although the present invention has been described in more detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (8)

1. A scheduling method for multiple servers and multiple users in industrial intelligent edge computing is characterized by comprising the following steps:
s1, the client selects the server with the maximum transmission rate to send a calculation unloading request;
s2, the server selects a scheduling algorithm to schedule the received tasks and sends acceptance or rejection information to the user; if the task is accepted, step S4 is executed; if not, step S3 is executed;
the scheduling algorithm comprises the following specific steps:
(1) the server deletes the completed tasks from the schedule and modifies the maximum delay d_n of the task currently being executed to its scheduled end time t_n^{end}, where t_n^{end} is the task end time and d_n is the maximum delay of the task;
(2) the existing tasks in the server are rescheduled as late as possible;
(3) if the scheduling algorithm is the maximum total price algorithm, the server sorts all received tasks by b_i in descending order; if the scheduling algorithm is the maximum unit price algorithm, the server sorts all received tasks by b_i·c_j/o_i in descending order, where c_j is the computing capacity of the server;
(4) the sorted tasks are inserted into the schedule one by one from left to right and scheduled as late as possible; if a task can be inserted into the schedule at this point, acceptance information is sent to the user, otherwise rejection information is sent to the user;
(5) for each accepted task t_a, remove it from the current task set and re-execute steps (2)-(4) of the scheduling algorithm until a previously rejected task t_r becomes accepted; if the scheduling algorithm is the maximum total price algorithm, p_a is b_r; if the scheduling algorithm is the maximum unit price algorithm, p_a is (b_r·c_j/o_r)·(o_a/c_j) = b_r·o_a/o_r; if no such t_r is accepted, p_a is b_a; where p_a is the cost of the task;
s3, the user reduces the self-calculation unloading amount according to the server scheduling table, and repeatedly sends calculation unloading requests to the server until the calculation unloading requests are accepted by the server or the user autonomously stops the requests;
s4, the server charges the user a calculation offload fee.
2. The scheduling method of claim 1, wherein the transmission rate is:
c_{ij} = w_j \log_2\!\left(1 + \frac{pow_i \cdot decay}{dis_{ij} \cdot n_{ij}}\right)

where i is a user, j is a server, w_j is the server transmission bandwidth, pow_i is the user signal power, dis_{ij} is the distance between the user and the server, decay is the attenuation constant of the signal power with distance, and n_{ij} is the channel noise power.
3. The scheduling method of claim 1, wherein the computation offload request is:
[o_i, d_i, b_i, u_i]

where o_i is the amount of computation to be offloaded for the task, d_i is the maximum delay for completing the task, b_i is the offer the user pays the server for computation offloading while satisfying individual rationality, and u_i is the user number; the initial value of the offload amount is the entire computation amount of the task; the offer for computation offloading is less than the benefit the user gains from the computing time saved by offloading:

b_i \le e_i = k \cdot \frac{o_i}{c_i}

where e_i is the user benefit, c_i is the user's own computing capacity, and k is the conversion factor between unit time and currency, determined by the requirements of the upper-layer application.
4. The scheduling method of claim 1, wherein the specific step of selecting the scheduling algorithm is:
s21, the server determines a user number threshold T for sending a calculation unloading request;
s22, the server predicts the current user number, if the predicted number is larger than T, the maximum unit price algorithm is used, otherwise, the maximum total price algorithm is used.
5. The scheduling method of claim 4, wherein the predicted number used in selecting the scheduling algorithm is:

A = \alpha \cdot time + \beta \cdot user_{short} + \gamma \cdot user_{long}

where time characterizes the user-number level of the current time period, user_{short} is the average number of users per time slice over the t_{short} time slices preceding the current one, user_{long} is the average number of users per time slice over the t_{long} time slices preceding the current one, with t_{short} < t_{long}, and \alpha, \beta and \gamma are weight coefficients.
6. The scheduling method of claim 5, wherein the schedule table is:
[t_n^{start}, t_n^{end}, d_n, p_n, u_n]

where n is the task, t_n^{start} is the task start time, t_n^{end} is the task end time, d_n is the maximum delay of the task, p_n is the cost of the task, and u_n is the number of the user to which the task belongs.
7. The scheduling method of claim 1, wherein the reducing of the computation offload amount and repeated sending of computation offload requests to the server is: according to the schedule of the selected server, the user places the scheduled tasks whose end times fall before d_i as early as possible and places the remaining tasks as late as possible according to their d_n, and then continuously reduces the offload amount by a suitable step size so that the server can schedule the task, until the user's payment price would fall below 0 or the server is already fully loaded before d_i.
8. The scheduling method of claim 7 wherein the appropriate step size for calculating an offload request is:
\frac{o_i}{m}
where m is the maximum number of times the user is allowed to submit an offload request within a time slice determined by the server.
CN201910506190.7A 2019-06-12 2019-06-12 Multi-server and multi-user oriented scheduling method in industrial intelligent edge computing Active CN110287024B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910506190.7A CN110287024B (en) 2019-06-12 2019-06-12 Multi-server and multi-user oriented scheduling method in industrial intelligent edge computing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910506190.7A CN110287024B (en) 2019-06-12 2019-06-12 Multi-server and multi-user oriented scheduling method in industrial intelligent edge computing

Publications (2)

Publication Number Publication Date
CN110287024A CN110287024A (en) 2019-09-27
CN110287024B true CN110287024B (en) 2021-09-28

Family

ID=68004062

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910506190.7A Active CN110287024B (en) 2019-06-12 2019-06-12 Multi-server and multi-user oriented scheduling method in industrial intelligent edge computing

Country Status (1)

Country Link
CN (1) CN110287024B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110928599B (en) * 2019-11-06 2023-04-18 浙江理工大学 Task unloading method and system based on data flow in edge computing network
CN111107153B (en) * 2019-12-23 2022-02-18 国网冀北电力有限公司唐山供电公司 MEC pricing unloading method based on D2D communication in power Internet of things
CN112306696B (en) * 2020-11-26 2023-05-26 湖南大学 Energy-saving and efficient edge computing task unloading method and system
CN113282348B (en) * 2021-05-26 2022-09-16 浙江理工大学 Edge calculation task unloading system and method based on block chain

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108920279A (en) * 2018-07-13 2018-11-30 哈尔滨工业大学 A kind of mobile edge calculations task discharging method under multi-user scene
CN108964817A (en) * 2018-08-20 2018-12-07 重庆邮电大学 A kind of unloading of heterogeneous network combined calculation and resource allocation methods
CN109756912A (en) * 2019-03-25 2019-05-14 重庆邮电大学 A kind of multiple base stations united task unloading of multi-user and resource allocation methods

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10440096B2 (en) * 2016-12-28 2019-10-08 Intel IP Corporation Application computation offloading for mobile edge computing

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108920279A (en) * 2018-07-13 2018-11-30 哈尔滨工业大学 A kind of mobile edge calculations task discharging method under multi-user scene
CN108964817A (en) * 2018-08-20 2018-12-07 重庆邮电大学 A kind of unloading of heterogeneous network combined calculation and resource allocation methods
CN109756912A (en) * 2019-03-25 2019-05-14 重庆邮电大学 A kind of multiple base stations united task unloading of multi-user and resource allocation methods

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Performance guaranteed computation offloading for mobile-edge cloud computing; X. Tao et al.; IEEE Wireless Communications; 2017-12-31; full text *

Also Published As

Publication number Publication date
CN110287024A (en) 2019-09-27

Similar Documents

Publication Publication Date Title
CN110287024B (en) Multi-server and multi-user oriented scheduling method in industrial intelligent edge computing
CN111163519B (en) Wireless body area network resource allocation and task offloading method with maximized system benefit
CN111163521B (en) Resource allocation method in distributed heterogeneous environment in mobile edge computing
CN110109745B (en) Task collaborative online scheduling method for edge computing environment
CN111010434B (en) Optimized task unloading method based on network delay and resource management
CN109656703B (en) Method for assisting vehicle task unloading through mobile edge calculation
CN108600014B (en) Stackelberg game-based storage resource allocation method
JP2011510564A (en) A practical model for high-speed file delivery services that supports delivery time guarantees and segmented service levels
CN109819047B (en) Mobile edge computing resource allocation method based on incentive mechanism
CN110111189B (en) Online combined resource allocation and payment method based on double-sided auction
Yolken et al. Game based capacity allocation for utility computing environments
CN110830390B (en) QoS driven mobile edge network resource allocation method
CN110888687A (en) Mobile edge computing task unloading optimal contract design method based on contract design
CN113377516B (en) Centralized scheduling method and system for unloading vehicle tasks facing edge computing
CN109740870B (en) Resource dynamic scheduling method for Web application in cloud computing environment
CN110647403A (en) Cloud computing resource allocation method in multi-user MEC system
CN111866601A (en) Cooperative game-based video code rate decision method in mobile marginal scene
CN113918240A (en) Task unloading method and device
CN109040193A (en) Based on without the mobile device cloud resource distribution method for relying on subtask
CN113961266B (en) Task unloading method based on bilateral matching under edge cloud cooperation
CN112559171B (en) Multi-user task unloading method based on delayed acceptance in mobile edge computing environment
Farooq et al. Adaptive and resilient revenue maximizing dynamic resource allocation and pricing for cloud-enabled IoT systems
CN111680860B (en) Deterministic cross online matching method in space-time crowdsourcing platform
CN114792167A (en) QLA-based production order full-process quality visual monitoring method and system
CN109673055B (en) Resource allocation method for joint communication and calculation based on two-dimensional region filling

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant