WO2022257631A1 - 面向移动边缘计算的分布式专用保护业务调度方法 (Distributed dedicated protection service scheduling method for mobile edge computing) - Google Patents

面向移动边缘计算的分布式专用保护业务调度方法 (Distributed dedicated protection service scheduling method for mobile edge computing)

Info

Publication number
WO2022257631A1
WO2022257631A1 PCT/CN2022/089422 CN2022089422W WO2022257631A1 WO 2022257631 A1 WO2022257631 A1 WO 2022257631A1 CN 2022089422 W CN2022089422 W CN 2022089422W WO 2022257631 A1 WO2022257631 A1 WO 2022257631A1
Authority
WO
WIPO (PCT)
Prior art keywords: service, sub-services, MEC, protection
Prior art date
Application number
PCT/CN2022/089422
Other languages
English (en)
French (fr)
Inventor
李泳成
宗红梅
沈纲祥
林玠珉
Original Assignee
苏州大学
Priority date
Filing date
Publication date
Application filed by 苏州大学
Publication of WO2022257631A1 publication Critical patent/WO2022257631A1/zh
Priority to US18/195,591 priority Critical patent/US20230283527A1/en

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/083: Configuration setting characterised by the purposes of a change of settings, e.g. optimising configuration for enhancing reliability, for increasing network speed
    • H04L 41/0894: Policy-based network configuration management
    • H04L 41/142: Network analysis or design using statistical or mathematical methods
    • H04L 41/145: Network analysis or design involving simulating, designing, planning or modelling of a network
    • H04L 43/0852: Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters; Delays
    • H04L 47/6225: Queue scheduling characterised by scheduling criteria; Queue service order; Fixed service order, e.g. Round Robin
    • H04L 47/83: Admission control; Resource allocation based on usage prediction
    • H04L 67/10: Protocols in which an application is distributed across nodes in the network
    • H04L 67/60: Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
    • H04L 67/62: Establishing a time schedule for servicing the requests

Definitions

  • The present invention relates to the technical field of mobile communication, and in particular to a distributed dedicated protection service scheduling method for mobile edge computing.
  • Mobile Edge Computing (MEC) uses the radio access network to provide, close to telecom users, the IT services and cloud computing functions they require, thereby creating a carrier-class service environment with high performance, low latency and high bandwidth. It accelerates the download of all kinds of content, services and applications in the network, so that consumers enjoy an uninterrupted, high-quality network experience.
  • At present, more and more resource-intensive and delay-sensitive emerging applications, such as face recognition, virtual/augmented reality and online video, send resource requests to MEC servers. To avoid server overload and sharply increased delay caused by large numbers of concurrent services, it is necessary to study how to split such services into several sub-services with smaller resource demands and deploy them, in a distributed manner, on multiple neighboring MEC servers for parallel processing.
  • MEC-oriented distributed services differ from traditional distributed services: they often concern everyday life and production and place very high requirements on delay and survivability, so research is needed on several fronts, including service segmentation, computation offloading and service protection.
  • Existing research either focuses on edge caching, computation offloading and resource allocation in mobile edge computing, or only on distributed service scheduling in traditional distributed systems; there is little research on the MEC-oriented distributed dedicated protection service scheduling problem.
  • Most existing work studies distributed service scheduling in traditional distributed systems, where the goal is to minimize transmission delay and total system energy consumption; it neither considers distributed service scheduling in an MEC network nor the joint optimization of service segmentation with working computing resources and protection computing resources.
  • The technical problem to be solved by the present invention is to overcome the deficiencies of the prior art and provide a distributed dedicated protection service scheduling method for mobile edge computing that minimizes the total delay of network services while protecting those services.
  • the present invention provides a distributed dedicated protection service scheduling method for mobile edge computing, including:
  • Use the double polling scheduling strategy to select an MEC server as the working server for each sub-service and, at the same time, select an MEC server as the working server for the corresponding protection sub-service, the MEC servers selected for a sub-service and its protection sub-service being different.
  • S2: Divide service u to generate a sub-service list K_u and the corresponding protection sub-service list P_u; select MEC servers from the set E_u in turn as the working server of each sub-service in K_u.
  • S4: Determine whether the sub-service deployed on the m-th MEC server has already established its protection sub-service on another MEC server; if so, directly allocate a protection computing resource, in a polling manner, on the server where that protection sub-service is deployed; if not, determine the protection server in a polling manner and allocate the protection computing resource.
  • S5: Determine whether the i-th computing resource on each of the working servers other than the m-th MEC server is occupied; if so, increment i (i++) and re-check; if not, allocate those i-th computing resources on the working servers other than the m-th MEC server to the sub-services of service u.
  • S6: Determine whether the computing resources available on all working servers in time slot j can satisfy the computing-resource and protection-resource demands of all sub-services of service u; if so, execute S7; if not, increment j (j++) and return to S3.
  • In S1, the M MEC servers closest to service u are determined using the shortest-route algorithm.
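  • As an illustration of this selection step, the short Python sketch below picks the M MEC servers closest to a service's source node with a standard Dijkstra shortest-path computation. The topology, link lengths and the value of M are hypothetical examples, not data from the patent.

```python
import heapq

def dijkstra(graph, src):
    """Shortest path lengths from src over a weighted adjacency dict."""
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue
        for nbr, w in graph[node].items():
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return dist

def nearest_mec_servers(graph, src, m):
    """Return the M MEC nodes closest to the service's source node (the set E_u)."""
    dist = dijkstra(graph, src)
    ranked = sorted((d, n) for n, d in dist.items())
    return [n for _, n in ranked[:m]]

# Hypothetical topology: link weights are physical lengths (km).
topology = {
    "A": {"B": 100, "C": 200},
    "B": {"A": 100, "C": 150, "D": 300},
    "C": {"A": 200, "B": 150, "D": 120},
    "D": {"B": 300, "C": 120},
}
print(nearest_mec_servers(topology, "A", m=3))  # ['A', 'B', 'C']
```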
  • The polling in S4 follows the service scheduling strategy based on polling (round-robin) segmentation.
  • Step C1: For any service in the service set, determine the M MEC servers closest to the current service according to the shortest-route algorithm, and add these M servers to the available MEC server set E_u;
  • Step C2: For the service, check whether the j-th computing resource of an MEC server in the available set is free at time i; if it is free, allocate that computing resource to the service of step C1; if not, terminate the allocation;
  • Step C3: Count the resources allocated to the service on each MEC server of the available set as the size of the corresponding sub-service;
  • Step C4: Determine the start time and end time of the service, and compute the completion time T_u of all services.
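  • A minimal sketch of steps C1 to C4, assuming a fixed per-slot capacity on every server: the service's demand is spread over the available servers in round-robin order, the per-server totals become the sub-service sizes, and the last slot used gives the completion time. Server names, capacity and horizon below are illustrative assumptions.

```python
def round_robin_split(demand, servers, capacity, horizon):
    """Spread `demand` units over `servers`, one unit per server per pass,
    slot by slot. Returns per-server sub-service sizes and the finish slot."""
    free = {s: [capacity] * horizon for s in servers}   # free units per slot
    alloc = {s: 0 for s in servers}                      # sub-service sizes
    remaining = demand
    for slot in range(horizon):
        progress = True
        while remaining > 0 and progress:
            progress = False
            for s in servers:
                if remaining > 0 and free[s][slot] > 0:
                    free[s][slot] -= 1
                    alloc[s] += 1
                    remaining -= 1
                    progress = True
        if remaining == 0:
            return alloc, slot + 1
    raise RuntimeError("not enough capacity within the time horizon")

sizes, t_u = round_robin_split(demand=8, servers=["A", "B", "C", "D"],
                               capacity=4, horizon=10)
print(sizes, t_u)   # {'A': 2, 'B': 2, 'C': 2, 'D': 2} 1
```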
  • The method further includes constructing an integer linear programming (ILP) optimization model whose objective is to minimize the service delay, and establishing the double polling scheduling strategy on the basis of this ILP model.
  • U is defined as a service set in the network
  • E is a set of MEC nodes in the network
  • K u is a sub-service set of service u
  • E u is the set of MEC nodes available for service u
  • TS is the set of available time slots
  • R u is the MEC computing resource required by service u, u ⁇ U
  • V m is the total MEC computing resource that can be provided on the MEC server m
  • Δ is a preset maximum value;
  • one binary variable takes the value 1 when MEC server m is selected at time t as the computing node of sub-service k of service u, and 0 otherwise; another binary variable takes the value 1 when MEC server m is selected as the computing node of sub-service k of service u, and 0 otherwise;
  • an integer variable denotes the computing resources provided by MEC server m at time t for sub-service k of service u; another integer variable denotes, after segmentation, the MEC computing resources required by the k-th sub-service of service u; a further binary variable takes the value 1 when MEC server m is selected at time t as the computing node of the protection service of sub-service k of service u, and 0 otherwise;
  • the MEC protection computing resources required by sub-service k of service u, and the computing resources provided by MEC server m at time t for the protection service of sub-service k of service u, are defined likewise;
  • T_max is an integer variable denoting the completion time of all services;
  • the constraints of the integer linear programming optimization model include business constraints, MEC server capacity constraints, delay constraints, and service protection constraints;
  • The service constraints include: the sum of the computing resources required by the sub-services equals the amount of resources required by the service; the amount of resources each server allocates to a sub-service equals the amount of computing resources that sub-service needs to carry; and the sub-services must be deployed on different servers for processing;
  • the MEC server capacity constraints include: the sum of the computing resources used on each MEC server cannot exceed its maximum available computing resources;
  • the delay constraint includes: the total delay of completing the business cannot exceed the maximum number of time slots;
  • The service protection constraints include: the sum of the computing resources required by the protection sub-services equals the sum of the computing resources required by the protected sub-services, and each protected sub-service and its corresponding protection sub-service are deployed on different MEC servers.
  • When MEC server m provides computing resources for sub-service k of service u, server m is selected as the computing node of sub-service k;
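  • The following is a minimal PuLP sketch of an integer linear program in the spirit of the constraints just listed: one working and one dedicated protection copy per sub-service on different servers, per-slot server capacity, and the completion time T_max as the objective. It is illustrative only; the toy instance, the variable names and the exact linearization are assumptions rather than the patent's own formulation, and the pulp package is assumed to be installed.

```python
from pulp import (LpProblem, LpMinimize, LpVariable, lpSum,
                  LpBinary, LpInteger, PULP_CBC_CMD, value)

# Toy instance (illustrative): one service split into two sub-services,
# three MEC servers, capacity 2 units per server per slot.
servers = ["m1", "m2", "m3"]
demand = {"k1": 3, "k2": 3}            # computing resources each sub-service needs
cap, slots = 2, range(1, 7)

prob = LpProblem("mec_dedicated_protection", LpMinimize)
Tmax = LpVariable("Tmax", lowBound=1, upBound=max(slots), cat=LpInteger)
prob += Tmax                                             # objective: minimize T_max

y = LpVariable.dicts("work", (demand, servers), cat=LpBinary)            # working server
z = LpVariable.dicts("prot", (demand, servers), cat=LpBinary)            # protection server
x = LpVariable.dicts("busy", (demand, servers, slots), cat=LpBinary)     # slot used flag
c = LpVariable.dicts("res",  (demand, servers, slots), lowBound=0, cat=LpInteger)

for k, r in demand.items():
    prob += lpSum(y[k][m] for m in servers) == 1         # exactly one working server
    prob += lpSum(z[k][m] for m in servers) == 1         # exactly one protection server
    for m in servers:
        prob += y[k][m] + z[k][m] <= 1                   # work and protection are disjoint
        # the chosen working server and protection server each supply r units in total
        prob += lpSum(c[k][m][t] for t in slots) == r * (y[k][m] + z[k][m])
        for t in slots:
            prob += c[k][m][t] <= cap * x[k][m][t]       # any resource use raises the flag
            prob += Tmax >= t * x[k][m][t]               # T_max covers every busy slot

for m in servers:
    prob += lpSum(y[k][m] for k in demand) <= 1          # sub-services on different servers
    for t in slots:
        prob += lpSum(c[k][m][t] for k in demand) <= cap # per-slot server capacity

prob.solve(PULP_CBC_CMD(msg=False))
print("T_max =", value(Tmax))   # 3.0 for this toy data
```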
  • The present invention also provides a distributed dedicated protection service scheduling system for mobile edge computing, including a large-scale network composed of MEC servers, on which the distributed dedicated protection service scheduling method for mobile edge computing is used to arrange the services and the dedicated protection services corresponding to them.
  • The distributed dedicated protection service scheduling method for mobile edge computing of the present invention uses a double polling scheduling strategy that takes into account the computing resources actually available on the servers; it minimizes the total service delay in the network while protecting the services, avoids waste or overload of computing resources on the MEC servers, and achieves the joint optimization of the segmentation of mobile-edge-computing-oriented distributed services, sub-service computing resources and protection computing resources.
  • FIG. 1 is a flow chart of the dual polling scheduling strategy in the present invention.
  • Fig. 2 is an illustration diagram of a random scheduling strategy in an embodiment of the present invention.
  • Fig. 3 is an illustrative diagram of a ring scheduling strategy in an embodiment of the present invention.
  • FIG. 4 is an illustration diagram of a dual polling scheduling strategy in an embodiment of the present invention.
  • Fig. 5 is a schematic structural diagram of the n6s9 test network of 6 MEC nodes and 9 network links used in the simulation experiment in the embodiment of the present invention.
  • Fig. 6 is a schematic structural diagram of the NSFNET test network with 14 MEC nodes and 21 network links used in the simulation experiment in the embodiment of the present invention.
  • FIG. 7 compares the total network service delay obtained with the ILP model, the random scheduling strategy, the ring scheduling strategy and the double polling scheduling strategy in the n6s9 test network in the simulation experiments of the embodiment of the present invention.
  • FIG. 8 compares the total network service delay obtained with the random scheduling strategy, the ring scheduling strategy and the double polling scheduling strategy in the NSFNET test network in the simulation experiments of the embodiment of the present invention.
  • FIG. 9 compares, in the n6s9 test network, the total network service delay of the ILP model, the random scheduling strategy, the ring scheduling strategy and the double polling scheduling strategy as the number of segments increases.
  • FIG. 10 compares, in the NSFNET test network, the total network service delay of the random scheduling strategy, the ring scheduling strategy and the double polling scheduling strategy as the number of segments increases.
  • FIG. 11 compares, in the n6s9 test network, the actual time-slot occupancy on each MEC server in the network under the random scheduling strategy, the ring scheduling strategy and the double polling scheduling strategy.
  • FIG. 12 compares, in the NSFNET test network, the actual time-slot occupancy on each MEC server in the network under the random scheduling strategy, the ring scheduling strategy and the double polling scheduling strategy.
  • Step 1 Obtain the list of services to be processed and the list of resources that can be deployed on the MEC server in the network, divide the service into multiple sub-services, and generate protection sub-services corresponding to the multiple sub-services.
  • The protection sub-service can also be understood as a backup of the sub-service.
  • When the original sub-service fails, the protection sub-service can be used to complete the task, ensuring that the service is completed successfully.
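  • For illustration, a small Python sketch of the data involved: each sub-service carries its resource demand plus the (initially empty) working and protection server assignments. The even split used here is a simplifying assumption, since the patent sizes sub-services by the free resources actually found on each server.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SubService:
    """One slice of a distributed service plus its dedicated protection copy."""
    service_id: str
    index: int
    demand: int                                # MEC computing resources this slice needs
    work_server: Optional[str] = None
    protection_server: Optional[str] = None    # must end up different from work_server

def split_service(service_id, total_demand, parts):
    """Naive even split into `parts` sub-services (the remainder is spread)."""
    base, extra = divmod(total_demand, parts)
    return [SubService(service_id, i, base + (1 if i < extra else 0))
            for i in range(parts)]

print([s.demand for s in split_service("A", total_demand=370, parts=4)])
# [93, 93, 92, 92]
```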
  • Step 2 Use the double polling scheduling strategy to select the MEC server as the working server for the sub-service and select the MEC server as the working server for the protection sub-service, and the MEC servers selected for the sub-service and protection sub-service are different.
  • the main idea of the double-polling scheduling strategy is to select the protection server while selecting the working server for the sub-service in a round-robin manner.
  • As in the example of FIG. 4, assume there is a distributed service u on an MEC node; the MEC servers available to this distributed service include A on the local node and B, C and D on adjacent nodes.
  • The computing resource available on each MEC server in one unit of time t is 4 units.
  • The MEC computing resources required by the distributed service u are 8 units, and the available servers are A, B, C and D. As can be seen from FIG. 4, when the double polling scheduling strategy is adopted, the first working computing resource is deployed on server C by polling, and a protection computing resource is then set for that working computing resource.
  • S2 Divide the service u to generate a sub-service list K u , and generate a protection sub-service list P u corresponding to the sub-service list K u ; sequentially select the MEC server from the set E u as the working server of each sub-service in the sub-service list K u .
  • S4: If the i-th computing resource on the m-th MEC server is occupied by the sub-service, further deploy the protection (backup) computing resource for that resource. Determine whether the sub-service deployed on the m-th MEC server has already established its protection sub-service on another MEC server; if so, allocate a protection computing resource, in a round-robin manner, on the server where that protection sub-service is deployed; if not, determine the protection server in a round-robin manner and allocate the protection computing resource.
  • the polling method is a business scheduling strategy based on polling segmentation.
  • the specific process of the business scheduling strategy based on polling segmentation is:
  • Step C1: For any service in the service set, determine the M MEC servers closest to the current service according to the shortest-route algorithm, and add these M servers to the available MEC server set E_u;
  • Step C2: For the service, check whether the j-th computing resource of an MEC server in the available set is free at time i; if it is free, allocate that computing resource to the service of step C1; if not, terminate the allocation;
  • Step C3: Count the resources allocated to the service on each MEC server of the available set as the size of the corresponding sub-service;
  • Step C4: Determine the start time and end time of the service, and compute the completion time T_u of all services.
  • S5: Determine whether the i-th computing resource on each of the working servers other than the m-th MEC server is occupied; if so, increment i (i++) and re-check; if not, allocate those i-th computing resources on the working servers other than the m-th MEC server to the sub-services of service u.
  • S6: Determine whether the computing resources available on all working servers in time slot j can satisfy the computing-resource and protection-resource demands of all sub-services of service u; if so, execute S7; if not, increment j (j++) and return to S3.
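  • A compact sketch of the dual-polling idea behind S1 to S7 is given below: working resources are placed in round-robin order and each placement is paired with a protection placement on a different server, reusing an already established protection server where one exists (S4). The single-service setting, server names and capacities are illustrative assumptions, not values fixed by the patent.

```python
def dual_polling(demand, servers, capacity, horizon):
    """Place `demand` working units and the same number of protection units,
    one pair at a time, walking the servers in round-robin order; a working
    server keeps the same protection server once one has been chosen."""
    free = {s: [capacity] * horizon for s in servers}   # free units per server per slot
    pair = {}                                           # working server -> protection server
    placed = []                                         # (slot, work, protection) per unit
    remaining, finish = demand, 0
    for slot in range(horizon):
        progress = True
        while remaining > 0 and progress:
            progress = False
            for work in servers:
                if remaining == 0 or free[work][slot] == 0:
                    continue
                prot = pair.get(work) or next(
                    (s for s in servers if s != work and free[s][slot] > 0), None)
                if prot is None or free[prot][slot] == 0:
                    continue
                free[work][slot] -= 1
                free[prot][slot] -= 1
                pair[work] = prot
                placed.append((slot, work, prot))
                remaining -= 1
                finish = slot + 1
                progress = True
        if remaining == 0:
            break
    if remaining:
        raise RuntimeError("demand does not fit in the time horizon")
    return placed, finish

plan, t_u = dual_polling(demand=8, servers=["A", "B", "C", "D"], capacity=4, horizon=10)
print("completion slot:", t_u)
print(plan[:4])
```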
  • An embodiment of a distributed dedicated protection service scheduling system for mobile edge computing includes a large-scale network composed of MEC servers; on these MEC servers, the distributed dedicated protection service scheduling method described above arranges the services and the dedicated protection services corresponding to them.
  • The distributed dedicated protection service scheduling method for mobile edge computing in this embodiment further includes constructing an integer linear programming optimization model whose objective is to minimize the service delay; the double polling scheduling strategy is established on the basis of this ILP model.
  • the MEC-oriented distributed service scheduling problem is defined: a physical MEC network topology is known, and the topology includes MEC nodes and physical links.
  • MEC nodes include wireless access points and MEC servers connected to them.
  • Each MEC server provides a certain amount of MEC computing resources, and the network bandwidth resources provided by each physical link are sufficient to deploy the services; the MEC nodes available to a service are obtained with the Dijkstra shortest-route algorithm. The optimization objective of the problem is to minimize the total delay for completing the services.
  • U is the set of services in the network
  • E is the set of MEC nodes in the network
  • K u is the set of sub-services of service u
  • E u is the set of MEC nodes available for service u
  • TS is the set of available time slots
  • R_u is the MEC computing resources required by service u, u ∈ U; V_m is the total MEC computing resources that MEC server m can provide; in this embodiment, a fixed amount of computing resources is reserved on each MEC server specifically for processing distributed services.
  • Δ is a preset maximum value, set to 1,000,000 in this embodiment;
  • the variables are: a binary variable that takes the value 1 when MEC server m is selected at time t as the computing node of sub-service k of service u, and 0 otherwise; a binary variable that takes the value 1 when MEC server m is selected as the computing node of sub-service k of service u, and 0 otherwise; an integer variable denoting the computing resources provided by MEC server m at time t for sub-service k of service u; and an integer variable denoting, after segmentation, the MEC computing resources required by the k-th sub-service of service u.
  • A binary variable takes the value 1 when MEC server m is selected at time t as the computing node of the protection service of sub-service k of service u, and 0 otherwise;
  • the MEC protection computing resources required by sub-service k of service u, and the computing resources provided by MEC server m at time t for the protection service of sub-service k of service u, are defined likewise;
  • T max is an integer variable used to indicate the completion time of all services;
  • The constraints of the integer linear programming optimization model include (1) service constraints, (2) MEC server capacity constraints, (3) delay constraints, and (4) service protection constraints;
  • the business constraints include: the sum of the computing resources required by the sub-services is equal to the amount of resources required by the business, the amount of resources allocated by each server to the sub-services is equal to the amount of computing resources that the sub-services need to carry, and the sub-services Must be deployed on different servers for processing;
  • When MEC server m provides computing resources for sub-service k of service u, server m is selected as the computing node of sub-service k;
  • the MEC server capacity constraints include: the sum of the computing resources used on each MEC server cannot exceed its maximum available computing resources;
  • the delay constraint includes: the total delay of completing the service cannot exceed the maximum number of time slots;
  • The service protection constraints include: the sum of the computing resources required by the protection sub-services equals the sum of the computing resources required by the protected sub-services, and each protected sub-service and its corresponding protection sub-service are deployed on different MEC servers;
  • the double-polling scheduling strategy (DS) in the present invention is compared with the random scheduling strategy (RS) and the ring scheduling strategy (CS).
  • The core idea of the random scheduling strategy is to ensure that a protection service and the service it protects are never scheduled onto the same MEC server.
  • As in the example of FIG. 2, an existing distributed service A requires 370 units of MEC computing resources.
  • After the service scheduling strategy based on round-robin segmentation, the computing resources required by each sub-service are the working computing resources and protection computing resources summing to 370 shown in FIG. 2, and the servers available to each sub-service are N0, N1, N2 and N3 in FIG. 2.
  • After the service is segmented, the protection services are scheduled: an MEC server not associated with the protected sub-service is selected at random and the protection computing resources are placed on it.
  • the specific steps when using the random scheduling policy to schedule the business of the MEC server are as follows:
  • Step A1 Obtain the distributed business and the MEC computing resources required by the distributed business, and complete the business segmentation and node selection through the business scheduling strategy based on round-robin segmentation.
  • the business scheduling policy process based on round robin segmentation is:
  • Step C1 For any service in a service set, according to the shortest route algorithm, determine M MEC servers closest to the current service, and add the determined M servers to the available MEC server set E u ;
  • Step C2: For the service, check whether the j-th computing resource of an MEC server in the available set is free at time i; if it is free, allocate that computing resource to the service of step C1; if it is not free, that is, the total resources required by the service have already been reached, terminate the allocation;
  • Step C3 Count the resources allocated by the deployment service on each MEC server in the set of available MEC servers as the size of the sub-service;
  • Step C4 Determine the business start time and end time, and calculate the completion time T u of all businesses.
  • Step A2: Randomly select an MEC server not associated with the protected sub-service and place the protection computing resources on it, completing the node selection for the protection sub-service.
  • Step A2.1 After the business segmentation is completed, generate a list of protected sub-services for each business;
  • Step A2.2: For each protection sub-service, remove from the set of available MEC servers the MEC server hosting the sub-service it protects, then randomly select the m-th MEC server from the remaining servers; allocate the computing resources Pk_u for the protection sub-service on the selected m-th MEC server, restore the set E_u, and execute step A2.3;
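  • A minimal sketch of the protection-placement step of the random scheduling strategy (step A2), assuming the working assignments of the sub-services are already known; per-slot capacity checks are omitted for brevity and the server names are hypothetical.

```python
import random

def random_protection(sub_assignments, servers, seed=None):
    """For each sub-service, pick the protection server uniformly at random
    among the servers other than the one hosting the protected sub-service."""
    rng = random.Random(seed)
    protection = {}
    for sub, work_server in sub_assignments.items():
        candidates = [s for s in servers if s != work_server]
        protection[sub] = rng.choice(candidates)
    return protection

assign = {"u0": "N0", "u1": "N1", "u2": "N2", "u3": "N3"}
print(random_protection(assign, ["N0", "N1", "N2", "N3"], seed=1))
```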
  • The core idea of the ring scheduling strategy is to schedule the protection sub-services, in sequential order, onto MEC servers different from those of the protected sub-services. As in the example of FIG. 3, assume distributed service A requires 370 units of MEC computing resources, and MEC servers N0, N1, N2 and N3 provide computing resources and protection resources for the four sub-services of service A. After the distributed scheduling strategy based on round-robin segmentation, the computing resources required by each sub-service and the MEC servers on which they are deployed are as shown in FIG. 2.
  • After the service is segmented, the protection sub-service of each sub-service is generated first, and the protection sub-services are then dispatched to MEC servers for processing, i.e. computing resources are deployed for the protection sub-services in ring order: the protection service P0 of sub-service u0 is deployed on server N1, the protection service P1 of sub-service u1 on server N2, and the protection services P2 and P3 of sub-services u2 and u3 on servers N3 and N0, respectively.
  • the specific steps when using the ring scheduling policy to schedule the business of the MEC server are as follows:
  • Step B1 Obtain the distributed business and the MEC computing resources and protection resources required by the distributed business, and complete the business segmentation and node selection through the business scheduling strategy based on round-robin segmentation;
  • Step B2 Generate protection sub-services for each sub-service, and schedule the protection sub-services to be processed on MEC servers different from the protected sub-services in a sequential manner.
  • Step B2.1 After the business segmentation is completed, generate a protection sub-service list for each business;
  • Step B2.2: Traverse the protection sub-service list and determine whether the current protection sub-service p is the last one; if not, select the (p+1)-th MEC server to carry the protection sub-service; if so, deploy the protection sub-service on the first MEC server in the list E_u;
  • Step B2.3: Allocate, on the selected MEC server, the protection computing resources required by the p-th protection sub-service;
  • Step B2.4: Determine whether the i-th computing resource of the selected m-th MEC server is idle in time slot j; if it is idle, service u occupies that computing resource and it is counted towards the computing resources of service u's sub-service on the m-th MEC server, then execute step B2.5; if it is not idle, increment i (i++) and re-check;
  • Step B2.5: Determine whether the computing resources allocated to protection sub-service p have reached the required amount; if so, execute step B2.6; if not, increment j (j++) and return to step B2.4;
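  • A minimal sketch of the ring (circular) protection placement described in steps B2.1 and B2.2, assuming the working assignments are known; the printed result reproduces the mapping of the FIG. 3 example (P0 on N1, P1 on N2, P2 on N3, P3 on N0). Capacity checks are omitted for brevity.

```python
def ring_protection(sub_assignments, servers):
    """Place the protection copy of each sub-service on the next server in a
    fixed circular order (the last server wraps around to the first), so the
    protection and protected copies never share a server."""
    order = list(servers)
    protection = {}
    for sub, work_server in sub_assignments.items():
        nxt = order[(order.index(work_server) + 1) % len(order)]
        protection[sub] = nxt
    return protection

assign = {"u0": "N0", "u1": "N1", "u2": "N2", "u3": "N3"}
print(ring_protection(assign, ["N0", "N1", "N2", "N3"]))
# {'u0': 'N1', 'u1': 'N2', 'u2': 'N3', 'u3': 'N0'}
```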
  • FIG. 5 shows the n6s9 network with 6 MEC nodes and 9 network links, and FIG. 6 shows the NSFNET network with 14 MEC nodes and 21 network links; the numbers on the links in FIG. 5 and FIG. 6 indicate their physical length (km).
  • the maximum available computing resource of each MEC server in the network is 1000 units
  • the average computing resource required by each service is 400 units
  • the number of services generated on each MEC node is known
  • the total number of time slots is set to 200
  • the unit is t
  • the number of distributed services on each node is randomly generated within a certain range. It is evaluated from three perspectives: (1) the total delay of services in the network, (2) the impact of the number of splits on the total delay of distributed dedicated protection services, and (3) the load balancing of MEC servers.
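  • To make the flavor of this setup concrete, a small workload-generator sketch is shown below. Only the averages and ranges stated above come from the description; the distributions, the spread of per-service demand and the random seed are assumptions.

```python
import random

def generate_workload(nodes, avg_services, spread, avg_demand=400, seed=0):
    """Each MEC node gets a random number of distributed services within a
    range around the average, each needing roughly `avg_demand` units."""
    rng = random.Random(seed)
    workload = {}
    for node in nodes:
        n_services = rng.randint(avg_services - spread, avg_services)
        workload[node] = [max(1, int(rng.gauss(avg_demand, avg_demand * 0.1)))
                          for _ in range(n_services)]
    return workload

demands = generate_workload(nodes=[f"N{i}" for i in range(6)],
                            avg_services=30, spread=5)
print({n: len(v) for n, v in demands.items()})   # services generated per node
```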
  • Abbreviations used in the result figures: RS, random scheduling strategy; CS, ring scheduling strategy; DS, double polling scheduling strategy.
  • The random scheduling strategy (RS), the ring scheduling strategy (CS) and the double polling scheduling strategy (DS) are used to compare the total network service delay; the comparison results are shown in FIG. 8.
  • the values on the x-axis in Figures 7 and 8 represent the average traffic volume on each MEC node, and the values on the y-axis represent the total delay after all services in the network are completed.
  • the dual polling scheduling strategy (DS) has very similar performance to the corresponding ILP optimization model, which proves the efficiency of the dual polling scheduling strategy (DS).
  • When the average number of services per MEC node is 30, compared with the random scheduling strategy (RS) and the ring scheduling strategy (CS), the double polling scheduling strategy (DS) effectively reduces the total service completion delay, by 20% and 14% respectively.
  • When allocating protection servers for sub-services, the DS strategy fully considers the computing resources available on each server and avoids placing two sub-services of the same service on one server, which would unbalance the load among servers; it thereby avoids service congestion and an increase in the total service delay.
  • Since the allocation of computing resources is an NP-hard problem, the ILP model is only suitable for solving small-scale instances.
  • As the number of services and the network size grow, the time complexity of the ILP model rises sharply and it becomes difficult to find the optimal solution for large traffic volumes within an acceptable time; the ILP model is therefore not used to represent the theoretical optimum in the NSFNET test network. Nevertheless, as can be seen from FIG. 8, compared with the random scheduling strategy (RS) and the ring scheduling strategy (CS), the total service completion delay of the double polling scheduling strategy (DS) is reduced by 50% and 29% respectively, which again demonstrates the efficiency of DS in minimizing the total service delay.
  • In the n6s9 test network, as the number of segments increases, the total network service delay obtained with the random scheduling strategy (RS), the ring scheduling strategy (CS) and the double polling scheduling strategy (DS) is compared, and the ILP model is used to obtain the theoretical optimum; the results are shown in FIG. 9. In the NSFNET test network, as the number of segments increases, the total network service delay of RS, CS and DS is compared; the results are shown in FIG. 10.
  • the x-axis in Figure 9 and Figure 10 represents the number of splits, and the y-axis represents the total service delay; the number of services on each MEC node in the n6s9 network is randomly generated within the range of [20,25]; each MEC in the NSFNET network The number of services on the node is randomly generated in the range of [20,100].
  • The double polling scheduling strategy (DS) can effectively reduce the total service completion delay.
  • When the number of sub-services is 6, the service delay obtained with the double polling scheduling strategy (DS) is 24% lower than with the random scheduling strategy (RS), because DS considers the actual usage of computing resources on the MEC servers and avoids concentrating a large number of service requests on a few MEC servers.
  • The number of services on each MEC node in the n6s9 network is randomly generated in the range [10,50]; in the NSFNET network it is randomly generated in the range [20,100]; the number of segments is four.
  • The variance is used as the measure of load balance among servers: the smaller the variance S^2, the more balanced the load among servers.
  • Variance S^2 = [(x_1 - M)^2 + (x_2 - M)^2 + (x_3 - M)^2 + ... + (x_n - M)^2] / n, where M is the mean of the data set, n is the number of data points, and x_n is the load value of the n-th MEC server.
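  • A two-line Python version of this balance measure, with hypothetical per-server occupancies, shows how a balanced and an unbalanced allocation compare:

```python
def load_variance(loads):
    """Variance S^2 of per-server loads, as defined above."""
    mean = sum(loads) / len(loads)
    return sum((x - mean) ** 2 for x in loads) / len(loads)

# Hypothetical per-server slot occupancies.
print(load_variance([12, 14, 13, 13]))   # 0.5  (well balanced, small S^2)
print(load_variance([22, 6, 10, 14]))    # 35.0 (unbalanced, large S^2)
```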
  • The results of the double polling scheduling strategy (DS) are always better than those of the random scheduling strategy (RS) and the ring scheduling strategy (CS), which again demonstrates that the proposed DS performs better in terms of load balancing.
  • The double polling scheduling strategy (DS) makes the load on each server more balanced than the random scheduling strategy (RS) and the ring scheduling strategy (CS), and the ring scheduling strategy (CS) achieves a more balanced load than the random scheduling strategy (RS); this result is reasonable.
  • Although the random scheduling strategy (RS) considers service protection, it does not consider the actual load of the MEC servers: the protection resources of several sub-services of one service may have to be deployed on the same available server, which can overload that MEC server while other servers stay idle, and also degrades the protection performance; if the failed server happens to be one carrying many sub-services, the service is interrupted.
  • Compared with the random scheduling strategy (RS), the ring scheduling strategy schedules the sub-services to different MEC servers; when a server cannot finish processing the sub-services it carries on time, they can still be processed on the protection server, which ensures the service is completed.
  • Although the ring scheduling strategy considers service protection, it does not consider the actual load of the MEC servers, which may make the actual load on a given MEC server too high and lead to severe resource competition.
  • Compared with the ring scheduling strategy (CS), the double polling scheduling strategy (DS) not only considers service protection, ensuring that services can be completed in full, but also considers load balancing among servers, thereby avoiding service congestion.
  • The distributed dedicated protection service scheduling method for mobile edge computing of the present invention establishes an integer linear programming optimization model whose objective is to minimize the total service delay in the network, and on this basis establishes a heuristic scheduling strategy for distributed dedicated protection services.
  • By considering the computing resources actually available on the servers, the total service delay in the network is minimized while the services are protected; at the same time, waste or overload of computing resources on the MEC servers is avoided, and the joint optimization of the segmentation of mobile-edge-computing-oriented distributed services, sub-service computing resources and protection computing resources is achieved.
  • the embodiments of the present application may be provided as methods, systems, or computer program products. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
  • These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing apparatus to operate in a specific manner, such that the instructions stored in the computer-readable memory produce an article of manufacture comprising instruction means which implement the functions specified in one or more procedures of the flowchart and/or one or more blocks of the block diagram.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Algebra (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Mathematical Physics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Pure & Applied Mathematics (AREA)
  • Mobile Radio Communication Systems (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention discloses a distributed dedicated protection service scheduling method and system for mobile edge computing. The method includes dividing a service into sub-services and generating the corresponding protection sub-services, and using a double polling scheduling strategy to select a working MEC server for each sub-service while also selecting a working MEC server for its protection sub-service, the servers selected for a sub-service and its protection sub-service being different. The system includes a large-scale network composed of MEC servers, on which the distributed dedicated protection service scheduling method for mobile edge computing arranges the services and the dedicated protection services corresponding to them. By using the double polling scheduling strategy and taking the computing resources available on the servers into account, the invention minimizes the total service delay in the network while protecting the services, avoids waste or overload of computing resources on the MEC servers, and achieves the joint optimization of the segmentation of mobile-edge-computing-oriented distributed services, sub-service computing resources and protection computing resources.

Description

面向移动边缘计算的分布式专用保护业务调度方法 技术领域
本发明涉及移动通信技术领域,具体涉及一种面向移动边缘计算的分布式专用保护业务调度方法。
背景技术
面向移动边缘计算(Mobile Edge Computing,MEC)可利用无线接入网络就近提供电信用户IT所需的服务和云端计算功能,从而创造出一个具备高性能、低延迟与高带宽的电信级服务环境,加速网络中各项内容、服务及应用的下载速度,让消费者享有不间断的高质量网络体验。现阶段,越来越多诸如人脸识别、虚拟/增强现实、在线视频等资源密集和时延敏感型新兴应用向MEC服务器发送资源请求,为了避免出现大量并发业务导致服务器过载和时延大幅增加的情况,需要研究如何将此类资源密集型和时延敏感型的业务切分成多份资源需求较少的子业务、以分布式的方式部署到多个相邻MEC服务器上并行处理。
面向MEC的分布式业务与传统分布式业务不同,往往涉及到日常生活生产,时延和生存性要求非常高,需要在业务切分、计算卸载和业务保护等多方面展开研究。现有的研究要么集中于移动边缘计算的边缘缓存、计算卸载和资源分配问题,要么只关注传统分布式系统中的分布式业务调度问题,很少有针对面向MEC的分布式专用保护业务调度问题的研究。绝大部分研究的是在传统的分布式系统上进行的分布式业务调度,此类业务调度的目的是最小化传输时延和系统总能耗,并没有兼顾考虑MEC网络中的分布式业务调度问题,而且未考虑到业务的切分与计算资源和保护计算资源联合优化的问题。
发明内容
为此,本发明所要解决的技术问题在于克服现有技术中的不足,提供一种面向移动边缘计算的分布式专用保护业务调度方法,可以在保护业务的前提下最小化网络业务的总时延。
为解决上述技术问题,本发明提供了一种面向移动边缘计算的分布式专用保护业务调度方法,包括:
获取网络中MEC服务器上待处理业务列表和可调配资源列表,将业务划分为多个子业务,并生成与多个子业务相对应的保护子业务;
使用双轮询调度策略给子业务选择MEC服务器作为工作服务器的同时给保护子业务也选择MEC服务器作为工作服务器,并且子业务和保护子业务选择的MEC服务器不相同。
进一步地,所述使用双轮询调度策略的具体过程为:
S1:对业务集合中的任意一个业务u,确定与业务u最近的M个MEC服务器加入到可用服务器集合E u中;
S2:划分业务u生成子业务列表K u,生成子业务列表K u对应的保护子业务列表P u;从集合E u中依次选取MEC服务器作为子业务列表K u中的每个子业务的工作服务器;
S3:在第j个时隙上,判断第m个MEC服务器上的第i个计算资源是否空闲,如果空闲则将第m个MEC服务器上的第i个计算资源分配给业务u中的子业务;如果不空闲,则判断第m+1个服务器上的第i个计算资源是否空闲,直到找到空闲计算资源并分配给业务u中的子业务;
S4:判断部署在第m个MEC服务器上的子业务是否在其它MEC服务器上建立保护子业务,如果是,则直接在对应部署保护子业务的服务器上以轮询 的方式分配一个保护计算资源;如果否,则以轮询的方式确定保护服务器,并分配保护计算资源;
S5:判断第m个MEC服务器之外的工作服务器上的i个计算资源是否都被占用,如果是,则令i++,重新判断第m个MEC服务器之外的工作服务器上的i个计算资源有没有都被占用;如果否,则将第m个MEC服务器之外的工作服务器上的i个计算资源分配给业务u中的子业务;
S6:判断在时隙j内所有工作服务器上可用的计算资源是否能满足业务u中所有子业务的计算资源和保护资源需求,如果是,则执行S7;如果否,则令j++,返回执行S3;
S7:停止分配,完成所有业务的计算和保护,此时业务u的时延T u为j,得到整个网络业务的完成时间T max=max{T u}。
进一步地,所述S1中确定与业务u最近的M个MEC服务器时,使用的方法为最短路由算法。
进一步地,所述S4中轮询的方式为基于轮询切分的业务调度策略。
进一步地,所述基于轮询切分的业务调度策略的具体过程为:
步骤C1:对一个业务集合中的任意一个业务,根据最短路由算法确定与当前业务最近的M个MEC服务器,将确定的M个服务器加入到可用MEC服务器集合E u中;
步骤C2:对可用MEC服务器集合中的任意一个业务,判断i时刻下MEC服务器的第j个计算资源是否空闲,如果空闲,则将该计算资源分配给步骤C1中的业务;如果不空闲,则终止分配;
步骤C3:统计可用MEC服务器集合中每个MEC服务器上部署业务分配的资源作为子业务的大小;
步骤C4:确定业务开始时间与结束时间,计算所有业务的完成时间T u
进一步地,还包括构建整数线性规划优化模型,以最小化业务时延的目标建立整数线性规划优化模型,在整数线性规划优化模型的基础上建立所述双轮询调度策略。
进一步地,所述以最小化业务时延的目标建立整数线性规划优化模型时,定义U为网络中的业务集合,E为网络中的MEC节点集合,K u为业务u的子业务集合,E u为业务u可用的MEC节点集合,TS为可用的时隙集合;R u为业务u所需的MEC计算资源,u∈U,V m为MEC服务器m上所能提供的总的MEC计算资源,△为预设的极大值;
Figure PCTCN2022089422-appb-000001
为二进制变量,当MEC服务器m在时刻t被选为业务u的子业务k的计算节点时取值为1,否则为0;
Figure PCTCN2022089422-appb-000002
为二进制变量,当MEC服务器m被选为业务u的子业务k的计算节点时取值为1,否则为0;
Figure PCTCN2022089422-appb-000003
为整型变量,表示MEC服务器m在时刻t处为业务u的子业务k的提供的计算资源;
Figure PCTCN2022089422-appb-000004
为整型变量,完成切分后,业务u第k个子业务所需的MEC计算资源;
Figure PCTCN2022089422-appb-000005
为二进制变量,当MEC服务器m在时刻t被选为业务u的子业务k的保护业务的计算节点时取值为1,否则为0;
Figure PCTCN2022089422-appb-000006
为业务u的子业务k的所需的MEC保护计算资源;
Figure PCTCN2022089422-appb-000007
为MEC服务器m在时刻t为业务u的子业务k的保护业务提供的计算资源;T max为整型变量,用于表示所有业务的完成时间;
得到优化目标最小化业务时延为minimize:T max
进一步地,所述以最小化业务时延的目标建立整数线性规划优化模型时,整数线性规划优化模型的约束条件包括业务约束、MEC服务器容量约束、时延约束和业务保护约束;
所述业务约束包括:子业务所需的计算资源之和等于业务所需的资源量,每个服务器分配给其上子业务的资源量等于子业务需要承载的计算资源量,子业务必须部署于不同的服务器上进行处理;
所述MEC服务器容量约束包括:每个MEC服务器上的使用的计算资源总和不能超过其最大可用计算资源量;
所述时延约束包括:完成业务的总时延不能超过最大时隙数;
所述业务保护约束包括:被保护子业务所需的计算资源总和等于保护子业务所需计算资源总和,被保护子业务与对应的保护子业务分别部署在不同的MEC服务器上。
进一步地,所述业务约束的表达式为:
Figure PCTCN2022089422-appb-000008
表示业务u的每个子业务k只能部署于一个MEC服务器上;
Figure PCTCN2022089422-appb-000009
表示一个MEC服务器不能同时服务业务u的任意两个子业务;
Figure PCTCN2022089422-appb-000010
表示业务u的任意两个子业务都必须部署于不同的服务器上进行处理;
Figure PCTCN2022089422-appb-000011
表示当MEC服务器m为业务u的子业务k提供计算资源后,该服务器m被选为子业务k的计算节点;
Figure PCTCN2022089422-appb-000012
表示服务器m给业务u的子业务k提供的计算资源总和等于子业务k所需的计算资源量,所有子业务k的计算资源量等于业务u的计算资源需求量;
所述MEC服务器容量约束的表达式为:
Figure PCTCN2022089422-appb-000013
表示在任意时刻t处,MEC提供给子业务的计算资源和保护计算资之和不能超过其自身可用计算资源的最大值;
所述时延约束的表达式为:
Figure PCTCN2022089422-appb-000014
表示计算所有业务处理完的时间,该时间不能小于MEC网络中任意业务的结束时间;
Figure PCTCN2022089422-appb-000015
表示计算所有业务处理完的时间,所有业务处理完的时间不小于MEC网络中任意业务的完成时间,包括业务保护的时间;
所述业务保护约束的表达式为:
Figure PCTCN2022089422-appb-000016
Figure PCTCN2022089422-appb-000017
表示所有的保护业务不能和被保护业务调度到同一个服务器上,每个保护子业务只能由同一个MEC服务器给它提供计算资源;
Figure PCTCN2022089422-appb-000018
Figure PCTCN2022089422-appb-000019
表示保护子业务和被保护子业务的切分形态完全一致。
本发明还提供一种面向移动边缘计算的分布式专用保护业务调度系统,包括由MEC服务器组成的大规模网络,所述MEC服务器上使用面向移动边缘计算的分布式专用保护业务调度方法布置业务和业务对应的专用保护业务。
本发明的上述技术方案相比现有技术具有以下优点:
本发明所述的面向移动边缘计算的分布式专用保护业务调度方法,通过使用双轮询调度策略考虑服务器上实际可用的计算资源,在保护业务的前提下最大程度降低网络中业务总时延,避免了MEC服务器上出现计算资源浪费或过载现象,实现了面向移动边缘计算的分布式业务的切分、子业务计算资源和保护计算资源的联合优化。
附图说明
为了使本发明的内容更容易被清楚的理解,下面根据本发明的具体实施例并结合附图,对本发明作进一步详细的说明。
图1是本发明中双轮询调度策略的流程图。
图2是本发明实施例中随机调度策略的举例说明图。
图3是本发明实施例中环调度策略的举例说明图。
图4是本发明实施例中双轮询调度策略的举例说明图。
图5是本发明实施例中仿真实验使用的6个MEC节点和9条网络链路的n6s9测试网络的结构示意图。
图6是本发明实施例中仿真实验使用的14个MEC节点和21条网络链路的NSFNET测试网络的结构示意图。
图7是本发明实施例中仿真实验中在n6s9测试网络的环境下,使用ILP模型、随机调度策略、环调度策略、双轮询调度策略对网络的业务总时延进行比较的结果图。
图8是本发明实施例中仿真实验中在NSFNET测试网络的环境下,使用随机调度策略、环调度策略、双轮询调度策略对网络的业务总时延进行比较的结果图。
图9本发明实施例中仿真实验中在n6s9测试网络的环境下,随着切分数量的增加使用ILP模型、随机调度策略、环调度策略、双轮询调度策略对网络的业务总时延进行比较的结果图。
图10是发明实施例中仿真实验中在NSFNET测试网络的环境下,随着切分数量的增加使用随机调度策略、环调度策略、双轮询调度策略对网络的业务总时延进行比较的结果图。
图11是本发明实施例中仿真实验中在n6s9测试网络的环境下,使用随机 调度策略、环调度策略、双轮询调度策略对网络中每个MEC服务器上实际时隙占用情况进行比较的结果图。
图12是本发明实施例中仿真实验中在NSFNET测试网络的环境下,使用随机调度策略、环调度策略、双轮询调度策略对网络中每个MEC服务器上实际时隙占用情况进行比较的结果图。
具体实施方式
下面结合附图和具体实施例对本发明作进一步说明,以使本领域的技术人员可以更好地理解本发明并能予以实施,但所举实施例不作为对本发明的限定。
在本发明的描述中,需要理解的是,术语“包括”意图在于覆盖不排他的包含,例如包含了一系列步骤或单元的过程、方法、系统、产品或设备,没有限定于已列出的步骤或单元而是可选地还包括没有列出的步骤或单元,或可选地还包括对于这些过程、方法、产品或设备固有的其他步骤或单元。
本发明中一种面向移动边缘计算的分布式专用保护业务调度方法的实施例,包括:
步骤一:获取网络中MEC服务器上待处理业务列表和可调配资源列表,将业务划分为多个子业务,并生成与多个子业务相对应的保护子业务。
将业务划分为多个子业务,并生成与多个子业务相对应的保护子业务;保护子业务也可以理解为子业务的备份,在原子业务发生异常时,可以使用保护子业务来完成任务,保证业务能100%顺利完成。
步骤二:使用双轮询调度策略给子业务选择MEC服务器作为工作服务器的同时给保护子业务也选择MEC服务器作为工作服务器,并且子业务和保护子业务选择的MEC服务器不相同。
双轮询调度策略的主要思想是以轮询的方式在给子业务选择工作服务器的 同时选择保护服务器。如图4中的例子所示,假设一个MEC节点上有一个分布式业务u,该分布式业务的可用MEC服务器包括本地节点的A和相邻节点上的B、C和D。每个MEC服务器在一个单位时间t内可用的计算资源是4units。假设分布式业务u需要的MEC计算资源是8units,可用的服务器是A、B、C和D。由图4可以看出,当采用双轮询调度策略时,首先通过轮询在服务器C上部署了第一个工作计算资源,然后为该工作计算资源设置保护计算资源。如果该业务在MEC服务器C上的子业务尚未在其它MEC服务器上建立保护子业务,则通过轮询的方式,在除C之外的其它MEC服务器上,通过轮询的方式部署相应的保护计算资源,并建立对应的保护子业务,即在图中MEC服务器D上部署相应的保护计算资源。以后如果有工作计算资源部署在MEC服务器C上,则需要在MEC服务器D上部署相应的保护计算资源。类似地,在MEC服务器A上部署了第二个工作计算资源后,又在MEC服务器C上部署其相应的保护计算资源,并建立对应的保护子业。按照上述方式依次轮询,直到在服务器上部署工作计算资源和保护计算资源都满足业务u所需的计算资源总量,则停止部署。此时,任意一个MEC服务器发生故障都不会对业务u的完成造成任何影响。
如图1双轮询调度策略的流程图所示,使用双轮询调度策略的具体过程为:
S1:对业务集合中的任意一个业务u,使用最短路由算法确定与业务u最近的M个MEC服务器加入到可用服务器集合E u中。
S2:划分业务u生成子业务列表K u,生成子业务列表K u对应的保护子业务列表P u;从集合E u中依次选取MEC服务器作为子业务列表K u中的每个子业务的工作服务器。
S3:在第j个时隙上,判断第m个MEC服务器上的第i个计算资源是否空闲,如果空闲则将第m个MEC服务器上的第i个计算资源分配给业务u中的子业务;如果不空闲,则判断第m+1个服务器上的第i个计算资源是否空闲, 直到找到空闲计算资源并分配给业务u中的子业务。
判断第m个MEC服务器上的第i个计算资源是否空闲,具体为:若第i个计算资源既没有已分配给子业务,也没有分配给保护子业务,则第i个计算资源空闲;若第i个计算资源分配给子业务或者分配给保护子业务,则第i个计算资源不空闲。
S4:如果第m个MEC服务器上第i个计算资源被该子业务占用,则进一步部署该计算资源的保护(备份)计算资源。判断部署在第m个MEC服务器上的子业务是否在其它MEC服务器上建立保护子业务,如果是,则直接在对应部署保护子业务的服务器上以轮询的方式分配一个保护计算资源;如果否,则以轮询的方式确定保护服务器,并分配保护计算资源。
轮询的方式为基于轮询切分的业务调度策略,基于轮询切分的业务调度策略的具体过程为:
步骤C1:对一个业务集合中的任意一个业务,根据最短路由算法确定与当前业务最近的M个MEC服务器,将确定的M个服务器加入到可用MEC服务器集合E u中;
步骤C2:对可用MEC服务器集合中的任意一个业务,判断i时刻下MEC服务器的第j个计算资源是否空闲,如果空闲,则将该计算资源分配给步骤C1中的业务;如果不空闲,则终止分配;
步骤C3:统计可用MEC服务器集合中每个MEC服务器上部署业务分配的资源作为子业务的大小;
步骤C4:确定业务开始时间与结束时间,计算所有业务的完成时间T u
S5:判断第m个MEC服务器之外的工作服务器上的i个计算资源是否都被占用,如果是,则令i++,重新判断第m个MEC服务器之外的工作服务器 上的i个计算资源有没有都被占用;如果否,则将第m个MEC服务器之外的工作服务器上的i个计算资源分配给业务u中的子业务。
S6:判断在时隙j内所有工作服务器上可用的计算资源是否能满足业务u中所有子业务的计算资源和保护资源需求,如果是,则执行S7;如果否,则令j++,返回执行S3。
S7:停止分配,完成所有业务的计算和保护,此时业务u的时延T u为j,得到整个网络业务的完成时间T max=max{T u}。
一种面向移动边缘计算的分布式专用保护业务调度系统的实施例,系统包括由MEC服务器组成的大规模网络,所述MEC服务器上使用前述实施例中所述的面向移动边缘计算的分布式专用保护业务调度方法布置业务和业务对应的专用保护业务。
本实施例中的面向移动边缘计算的分布式专用保护业务调度方法还包括构建整数线性规划优化模型,以最小化业务时延的目标建立整数线性规划优化模型,在整数线性规划优化模型的基础上建立所述双轮询调度策略。本实施例中定义了面向MEC的分布式业务调度问题:已知一个物理MEC网络拓扑,该拓扑包含MEC节点和物理链路。在本文中,MEC节点包括无线接入点和与其相连的MEC服务器,每个MEC服务器提供一定数量的MEC计算资源,每条物理链路提供的网络带宽资源能够确保业务的部署;业务可用MEC节点通过Dijkstra最短路由算法获得;本问题的优化目标为:最小化业务完成的总时延。
所述以最小化业务时延的目标建立整数线性规划优化模型时,定义:
集合:U为网络中的业务集合,E为网络中的MEC节点集合,K u为业务u的子业务集合,E u为业务u可用的MEC节点集合,TS为可用的时隙集合;
参数:R u为业务u所需的MEC计算资源,u∈U,V m为MEC服务器m上 所能提供的总的MEC计算资源,本实施例中规定每个MEC上都预留固定的计算资源专门用于处理分布式业务,△为预设的极大值,本实施例中取值为1000000;
变量:
Figure PCTCN2022089422-appb-000020
为二进制变量,当MEC服务器m在时刻t被选为业务u的子业务k的计算节点时取值为1,否则为0;
Figure PCTCN2022089422-appb-000021
为二进制变量,当MEC服务器m被选为业务u的子业务k的计算节点时取值为1,否则为0;
Figure PCTCN2022089422-appb-000022
为整型变量,表示MEC服务器m在时刻t处为业务u的子业务k的提供的计算资源;
Figure PCTCN2022089422-appb-000023
为整型变量,完成切分后,业务u第k个子业务所需的MEC计算资源;
Figure PCTCN2022089422-appb-000024
为二进制变量,当MEC服务器m在时刻t被选为业务u的子业务k的保护业务的计算节点时取值为1,否则为0;
Figure PCTCN2022089422-appb-000025
为业务u的子业务k的所需的MEC保护计算资源;
Figure PCTCN2022089422-appb-000026
为MEC服务器m在时刻t为业务u的子业务k的保护业务提供的计算资源;T max为整型变量,用于表示所有业务的完成时间;
得到优化目标最小化业务时延为minimize:T max
以最小化业务时延的目标建立整数线性规划优化模型时,整数线性规划优化模型的约束条件包括(1)业务约束、(2)MEC服务器容量约束、(3)时延约束和(4)业务保护约束;
(1)所述业务约束包括:子业务所需的计算资源之和等于业务所需的资源量,每个服务器分配给其上子业务的资源量等于子业务需要承载的计算资源量,子业务必须部署于不同的服务器上进行处理;
所述业务约束的表达式为:
Figure PCTCN2022089422-appb-000027
表示业务u的每个子业务k只能部署于一个MEC服务器上;
Figure PCTCN2022089422-appb-000028
表示一个MEC服务器不能同时服务业 务u的任意两个子业务;
Figure PCTCN2022089422-appb-000029
表示业务u的任意两个子业务都必须部署于不同的服务器上进行处理;
Figure PCTCN2022089422-appb-000030
表示当MEC服务器m为业务u的子业务k提供计算资源后,该服务器m被选为子业务k的计算节点;
Figure PCTCN2022089422-appb-000031
表示服务器m给业务u的子业务k提供的计算资源总和等于子业务k所需的计算资源量,所有子业务k的计算资源量等于业务u的计算资源需求量。
(2)所述MEC服务器容量约束包括:每个MEC服务器上的使用的计算资源总和不能超过其最大可用计算资源量;
所述MEC服务器容量约束的表达式为:
Figure PCTCN2022089422-appb-000032
表示在任意时刻t处,MEC提供给子业务的计算资源和保护计算资之和不能超过其自身可用计算资源的最大值。
(3)所述时延约束包括:完成业务的总时延不能超过最大时隙数;
所述时延约束的表达式为:
Figure PCTCN2022089422-appb-000033
表示计算所有业务处理完的时间,所有业务处理完的时间不能小于MEC网络中任意业务的结束时间;
Figure PCTCN2022089422-appb-000034
表示计算所有业务处理完的时间,该时间不小于MEC网络中任意业务的完成时间,包括业务保护的时间。
(4)所述业务保护约束包括:被保护子业务所需的计算资源总和等于保护子业务所需计算资源总和,被保护子业务与对应的保护子业务分别部署在不同 的MEC服务器上;
所述业务保护约束的表达式为:
Figure PCTCN2022089422-appb-000035
Figure PCTCN2022089422-appb-000036
表示所有的保护业务不能和被保护业务调度到同一个服务器上,每个保护子业务只能由同一个MEC服务器给它提供计算资源;
Figure PCTCN2022089422-appb-000037
Figure PCTCN2022089422-appb-000038
Figure PCTCN2022089422-appb-000039
表示保护子业务和被保护子业务的切分形态完全一致。为了进一步说明本发明的有益效果,本实施例中在包括6个MEC节点和9条网络链路的n6s9网络、包括14个MEC节点和21条网络链路的NSFNET网络的两种测试网络情况下,将本发明中的双轮询调度策略(DS)与随机调度策略(RS)、环调度策略(CS)进行对比仿真实验。
随机调度策略的核心思想是确保被保护业务和保护业务不能被调度到同一个MEC服务器上。如图2的例子所示,现有分布式业务A,其需要的MEC计算资源为370units,经过基于轮询切分的业务调度策略后,每个子业务所需的计算资源如图2中总和为370的工作计算资源和保护计算资源所示,每个子业务所需可用的服务器如图2中的N0、N1、N2和N3所示。业务切分后,再调度保护业务,也就是随机选择与被保护业务不相连的MEC服务器并在其上放置保护计算资源。使用随机调度策略对MEC服务器进行业务调度时的具体步骤为:
步骤A1:获取分布式业务和该分布式业务需要的MEC计算资源,通过基于轮询切分的业务调度策略完成业务的切分与节点选择。基于轮询切分的业务 调度策略流程为:
步骤C1:对一个业务集合中的任意一个业务,根据最短路由算法,确定与当前业务最近的M个MEC服务器,将确定的M个服务器加入到可用MEC服务器集合E u中;
步骤C2:对可用MEC服务器集合中的任意一个业务,判断i时刻下MEC服务器的第j个计算资源是否空闲,如果空闲,则将该计算资源分配给步骤C1中的业务;如果不空闲,即已经达到业务所需的总资源,则终止分配;
步骤C3:统计可用MEC服务器集合中每个MEC服务器上部署业务分配的资源作为子业务的大小;
步骤C4:确定业务开始时间与结束时间,计算所有业务的完成时间T u
步骤A2:随机选择与被保护业务不相连的MEC服务器并放置保护计算资源,完成保护子业务的节点的选择。
步骤A2.1:在业务切分完毕后,生成每个业务的保护子业务列表;
步骤A2.2:对于每一个保护子业务,从可用MEC服务器集合中移除该保护子业务对应的子业务所在的MEC服务器,然后从中随机选取第m个MEC服务器,如果选择第m个MEC服务器,则为该保护子业务在选取的第m个MEC服务器上分配计算资源Pk u,恢复集合E u,执行步骤A2.3;
步骤A2.3:选择的第m个MEC服务器在第i个时隙上的可用计算资源为Vi m,判断公式Vi m-Pk u≥0是否成立,如果公式成立,则该保护子业务分配成功,此时保护子业务p的时延T p为i,执行步骤A2.4;如果公式不成立,则令Pk u=Vi m-Pk u和i++即进入i+1个时隙,重新判断公式Vi m-Pk u≥0是否成立;
步骤A2.4:重复步骤A2.3直到所有保护子业务都分配成功,得到完成所 有保护子业务u保护的总时延T pu=max{T p};
步骤A2.5:重复步骤A2.1~步骤A2.4直到完成所有业务的保护,得到整个网络业务的完成时间T max=max{T u+T pu}并输出。
环调度策略的核心思想是按照顺序的方式将保护子业务调度到和被保护的子业务不同的MEC服务器上。如图3的例子所示,假设分布式业务A需要的MEC计算资源为370units,由MEC服务器N0、N1、N2和N3分别给业务A的四个子业务提供计算资源和保护资源。经过基于轮询切分的分布式业务调度策略后,每个子业务所需的计算资源以及各自部署的MEC服务器如图2所示。业务切分后,首先生成每个子业务的保护子业务,接着需要将保护子业务调度到MEC服务器上处理,即按照环顺序给保护子业务部署计算资源,即子业务u0的保护业务P0部署在服务器N1上,子业务u1的保护业务P1部署在服务器N2上,而子业务u2和u3的各自的保护业务P2和P3分别部署在服务器N3和N0上。使用环调度策略对MEC服务器进行业务调度时的具体步骤为:
步骤B1:获取分布式业务和该分布式业务需要的MEC计算资源和保护资源,通过基于轮询切分的业务调度策略完成业务的切分与节点选择;
步骤B2:生成每个子业务的保护子业务,按照顺序的方式将保护子业务调度到和被保护的子业务不同的MEC服务器上处理。
步骤B2.1:在业务切分完毕后,对每个业务生成保护子业务列表;
步骤B2.2:遍历保护子业务列表,判断当前保护子业务p是不是最后一个保护子业务,如果不是,则选取第p+1个MEC服务器承载该保护子业务;如果是,将该保护子业务部署在列表E u中的第一个MEC服务器上;
步骤B2.3:在选取的MEC服务器上分配第p个保护子业务所需的保护计算资源
Figure PCTCN2022089422-appb-000040
步骤B2.4:判断选择的第m个MEC服务器在时隙j上第i个计算资源是不是空闲,如果是空闲,则业务u占用该计算资源,并使得业务u在第m个MEC服务器上的子业务的计算资源
Figure PCTCN2022089422-appb-000041
执行步骤B2.5;如果不是空闲,则令i++,重新判断选择的第m个MEC服务器在时隙j上第i个计算资源是不是空闲;
步骤B2.5:判断分配给保护子业务p的计算资源是否已达到所需的计算资源,如果已达到则执行步骤B2.6;如果没有达到,则另j++,返回执行步骤B2.4;
步骤B2.6:重复步骤B2.4~步骤B2.5直到所有保护子业务分配完成,得到完成所有保护子业务u保护的总时延为T pu=max{T p};
步骤B2.7:重复步骤B2.1~步骤B2.6直到完成所有业务的保护,得到整个网络业务的完成时间T max=max{T u+T pu}并输出。
如图5所示为包括6个MEC节点和9条网络链路的n6s9网络、如图6所示为包括14个MEC节点和21条网络链路的NSFNET网络,其中图5和图6中链路上的数字表示物理长度(km)。同时假设网络中每个MEC服务器最大可用计算资源为1000units,平均每个业务所需的计算资源为400units,每个MEC节点上产生的业务数量已知,总的时隙数设为200、单位为t,每个节点上的分布式业务数在一定范围内随机产生。分别从(1)网络中业务总时延、(2)切分数量对分布式专用保护业务总时延的影响、(3)MEC服务器的负载均衡三个角度进行评估。
(1)从网络中业务总时延的角度进行评估
在n6s9测试网络的环境下,分别使用随机调度策略(RS)、环调度策略(CS)、双轮询调度策略(DS)三种方法对网络的业务总时延进行比较。参数设置:n6s9网络中每个MEC节点上的业务数量在[X-5,X]范围内随机产生,NSFNET网络中每个MEC节点上的业务数量在[X-20,X]范围内随机产生,X是MEC节点上的平均业务量(单位:个),网络中每个节点的分布式业务的子业务切分数量 是四份。比较结果如图7所示,图7中的ILP为使用整数线性规划优化模型表示理论情况下的最佳情况。在NSFNET测试网络的环境下,分别使用随机调度策略(RS)、环调度策略(CS)、双轮询调度策略(DS)三种策略对网络的业务总时延进行比较,比较结果如图8所示。图7和图8中的x轴上的数值代表每个MEC节点上的平均业务量,y轴上的数值代表网络中的所有业务完成后的总时延。
从图7可以看出,在n6s9网络中双轮询调度策略(DS)与相应的ILP优化模型具有十分相近的性能,证明了双轮询调度策略(DS)的高效性。其次,当每个MEC节点上的平均业务数为30时,与随机调度策略(RS)和环调度策略(CS)相比,双轮询调度策略(DS)能有效地减少业务完成的总时延,分别减少了20%和14%。DS策略在分配子业务保护服务器时,充分考虑了服务器自身可用计算资源量,避免了一个服务器上承载同一个业务的两个子业务导致服务器间负载不均衡的情况,从而避免了业务堵塞和业务总时延的增加。由于计算资源的分配是一个NP-hard问题,因而ILP模型只适用于小规模问题的求解,当业务数量逐渐增加、网络规模逐渐增大时,ILP模型的时间复杂度会随之急剧上升,很难在有效的时间范围内找到大规模业务量的最优解,因此在NSFNET测试网络的环境下不使用ILP模型来表示理论情况下的最佳情况,但是从图8可以看出,与随机调度策略(RS)和环调度策略(CS)相比,双轮询调度策略(DS)业务完成的总时延分别减少了50%和29%,也证明了双轮询调度策略(DS)在最小化业务总时延方面的高效性。
(2)从切分数量对分布式专用保护业务总时延的影响的角度进行评估
在n6s9测试网络的环境下,随着切分数量的增加分别使用随机调度策略(RS)、环调度策略(CS)、双轮询调度策略(DS)三种方法对网络的业务总时延进行比较,并使用ILP模型得到理论情况下的最优值,比较结果如图9所示。在NSFNET测试网络的环境下,随着切分数量的增加分别使用随机调度策略(RS)、环调度策略(CS)、双轮询调度策略(DS)三种策略对网络的业务 总时延进行比较,比较结果如图10所示。图9和图10中的x轴表示切分数量,y轴表示业务总时延;n6s9网络中每个MEC节点上的业务数量在[20,25]范围内随机产生;NSFNET网络中每个MEC节点上的业务数量在[20,100]范围内随机产生。
从图9可以看出,随着切分数量的增加双轮询调度策略(DS)的结果更贴近ILP模型的结果,证明了双轮询调度策略(DS)的高效性。此外,与随机调度策略(RS)和环调度策略(CS)相比,双轮询调度策略(DS)能有效地减少业务完成的总时延。同时,当子业务数量为6时,采用双轮询调度策略(DS)得到的业务时延比随机调度策略(RS)减少了24%,这是因为双轮询调度策略(DS)考虑了MEC服务器上实际计算资源的使用情况,避免了出现在某些MEC服务器上集中了大量业务请求的情况。同样地,NSFNET测试网络的环境下不易使用ILP模型来表示理论情况下的最佳情况,但是从图10可以看出,双轮询调度策略(DS)的业务时延小于随机调度策略(RS)和环调度策略(CS);并且当子业务数量为7时,采用双轮询调度策略(DS)得到的业务时延比随机调度策略(RS)减少了36%、比环调度策略(CS)减少了24%,也证明了双轮询调度策略(DS)的高效性。
(3)从MEC服务器的负载均衡的角度进行评估
在n6s9测试网络的环境下,分别使用随机调度策略(RS)、环调度策略(CS)、双轮询调度策略(DS)三种方法对网络中每个MEC服务器上实际时隙占用情况进行比较,比较结果如图11所示。在NSFNET测试网络的环境下,分别使用随机调度策略(RS)、环调度策略(CS)、双轮询调度策略(DS)三种方法对网络中每个MEC服务器上实际时隙占用情况进行比较,比较结果如图12所示。图11和图12中x轴表示网络中的MEC节点,y轴表示服务器上的最大负载,n6s9网络中每个MEC节点上的业务数量在[10,50]范围内随机产生,NSFNET网络中每个MEC节点上的业务数量在[20,100]范围内随机产生,切分 数量是四份。使用方差公式作为衡量服务器间负载均衡的标准,方差S 2值越小,服务器间的负载更均衡。方差S 2=[(x 1-M) 2+(x 2-M) 2+(x 3-M) 2+...+(x n-M) 2]/n;其中M为该组数据的平均值,n为数据个数,x n为MEC服务器。
从图11的仿真实验可以得出,当业务切分成四份时,采用随机调度策略(RS)和环调度策略(CS)得到的服务器间负载方差分别为6.28和4.47,而采用双轮询调度策略(DS)得到的服务器间负载方差是1.48。这一结果再次证明了双轮询调度策略(DS)在负载均衡方面的高效性。图12也给出了在NSFNET网络中两种策略对服务器负载的影响,从图12可以看出,与n6s9网络的结果类似,采用RS策略和CS策略得到的服务器间负载方差分别为19.4和13.2,而采用DS策略得到的服务器间负载方差是5.9。双轮询调度策略(DS)的结果总是优于随机调度策略(RS)和环调度策略(CS)的结果,这一结果再次证明了所提出双轮询调度策略(DS)在负载均衡方面的性能更优。同时,从图11和图12中还可以看出,双轮询调度策略(DS)使得各个服务器上的负载相比于随机调度策略(RS)和环调度策略(CS)更加均衡,环调度策略(CS)获得各个服务器上的负载比随机调度策略(RS)更加均衡。这一结果是合理的,虽然随机调度策略(RS)考虑了业务保护的问题,但是未考虑MEC服务器的实际负载,同一个可用服务器上可能需要部署一个业务中的多个子业务的保护资源,这不仅可能导致该MEC服务器上的实际负载过大,其他服务器处于空闲状态,还会降低业务保护性能。假设发生故障的服务器下恰好是部署了很多子业务的服务器,这就会导致业务中断。相比于随机调度策略(RS),环调度策略(CS)将子业务分别调度到不同的MEC服务器上,当某个服务器不能按时处理完其承载的子业务,还可以在保护服务器上进行处理,确保了业务的完成性。虽然环调度策略(CS)考虑了业务保护的问题,但是未考虑MEC服务器的实际负载,可能导致某个MEC服务器上的实际负载过大,导致资源竞争严峻。相比于环调度策略(CS),双轮询调度策略(DS)不仅考虑了业务保护的问题,确保了业务能够100%完成;同时也考虑了服务器之间的负载均衡,避免了业务堵 塞。
本发明的上述技术方案相比现有技术具有以下优点:
本发明所述的面向移动边缘计算的分布式专用保护业务调度方法,通过以最小化网络中业务总时延为目标建立整数线性规划优化模型,并在此基础上建立分布式专用保护业务的启发式调度策略。通过考虑服务器上实际可用的计算资源,在保护业务的前提下最大程度降低网络中业务总时延;同时避免了MEC服务器上出现计算资源浪费或过载现象,实现了面向移动边缘计算的分布式业务的切分、子业务计算资源和保护计算资源的联合优化。通过考虑业务保护的问题,确保了在网络中单个MEC服务器发生故障的情形下,业务还能100%顺利完成。
本领域内的技术人员应明白,本申请的实施例可提供为方法、系统、或计算机程序产品。因此,本申请可采用完全硬件实施例、完全软件实施例、或结合软件和硬件方面的实施例的形式。而且,本申请可采用在一个或多个其中包含有计算机可用程序代码的计算机可用存储介质(包括但不限于磁盘存储器、CD-ROM、光学存储器等)上实施的计算机程序产品的形式。
本申请是参照根据本申请实施例的方法、设备(系统)、和计算机程序产品的流程图和/或方框图来描述的。应理解可由计算机程序指令实现流程图和/或方框图中的每一流程和/或方框、以及流程图和/或方框图中的流程和/或方框的结合。可提供这些计算机程序指令到通用计算机、专用计算机、嵌入式处理机或其他可编程数据处理设备的处理器以产生一个机器,使得通过计算机或其他可编程数据处理设备的处理器执行的指令产生用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的装置。
这些计算机程序指令也可存储在能引导计算机或其他可编程数据处理设备以特定方式工作的计算机可读存储器中,使得存储在该计算机可读存储器中的指令产生包括指令装置的制造品,该指令装置实现在流程图一个流程或多个流 程和/或方框图一个方框或多个方框中指定的功能。
这些计算机程序指令也可装载到计算机或其他可编程数据处理设备上,使得在计算机或其他可编程设备上执行一系列操作步骤以产生计算机实现的处理,从而在计算机或其他可编程设备上执行的指令提供用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的步骤。
显然,上述实施例仅仅是为清楚地说明所作的举例,并非对实施方式的限定。对于所属领域的普通技术人员来说,在上述说明的基础上还可以做出其它不同形式变化或变动。这里无需也无法对所有的实施方式予以穷举。而由此所引申出的显而易见的变化或变动仍处于本发明创造的保护范围之中。

Claims (10)

  1. 一种面向移动边缘计算的分布式专用保护业务调度方法,其特征在于,包括:
    将业务划分为多个子业务,并生成与多个子业务相对应的保护子业务;
    使用双轮询调度策略给子业务选择MEC服务器作为工作服务器的同时给保护子业务也选择MEC服务器作为工作服务器,并且子业务和保护子业务选择的MEC服务器不相同。
  2. The method for scheduling mobile edge computing-oriented distributed dedicated protection services according to claim 1, wherein the double round-robin scheduling strategy specifically comprises:
    S1: for any service u in a service set, determining the M MEC servers closest to service u and adding them to an available server set E_u;
    S2: splitting service u to generate a sub-service list K_u and generating a protection sub-service list P_u corresponding to the sub-service list K_u; selecting MEC servers from the set E_u in turn as the working server of each sub-service in the sub-service list K_u;
    S3: in the j-th time slot, determining whether the i-th computing resource on the m-th MEC server is idle; if it is idle, allocating the i-th computing resource on the m-th MEC server to a sub-service of service u; if it is not idle, determining whether the i-th computing resource on the (m+1)-th server is idle, until an idle computing resource is found and allocated to a sub-service of service u;
    S4: determining whether the sub-service deployed on the m-th MEC server already has a protection sub-service established on another MEC server; if so, directly allocating a protection computing resource in a round-robin manner on the server where the protection sub-service is deployed; if not, determining a protection server in a round-robin manner and allocating a protection computing resource;
    S5: determining whether the i-th computing resources on the working servers other than the m-th MEC server are all occupied; if so, setting i++ and determining again whether the i-th computing resources on the working servers other than the m-th MEC server are all occupied; if not, allocating the i-th computing resources on the working servers other than the m-th MEC server to the sub-services of service u;
    S6: determining whether the computing resources available on all working servers within time slot j can satisfy the computing-resource and protection-resource requirements of all sub-services of service u; if so, executing S7; if not, setting j++ and returning to S3;
    S7: stopping the allocation once the computation and protection of all services are completed; at this point the delay T_u of service u is j, and the completion time of all services in the network is obtained as T_max = max{T_u}.
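For illustration only (not part of the claims), the following Python sketch shows one simplified way the flow of steps S1-S7 could be realized. The network model is invented for this sketch: each MEC server is assumed to offer a fixed number of computing-resource units per time slot, sub-services are split evenly, and the hypothetical nearest_servers helper stands in for the shortest-path selection.

```python
# Illustrative sketch of the double round-robin scheduling strategy (S1-S7).
# Assumptions (not part of the claims): `capacity` > 0 resource units per server
# per slot; nearest_servers() is a placeholder for shortest-path selection.

def double_round_robin(service_demand, servers, capacity, num_subservices, M):
    e_u = nearest_servers(servers, M)          # S1: available server set E_u
    sub = [service_demand // num_subservices] * num_subservices
    sub[0] += service_demand % num_subservices  # S2: split into sub-services
    usage = {s: [] for s in e_u}               # per-server, per-slot occupancy
    delay = 0
    for k, demand in enumerate(sub):
        remaining, prot_remaining = demand, demand
        work = e_u[k % len(e_u)]               # S3: round-robin working server
        prot = e_u[(k + 1) % len(e_u)]         # S4: protection on a different server
        t = 0
        while remaining > 0 or prot_remaining > 0:   # S5/S6: advance slots as needed
            for server, kind in ((work, "work"), (prot, "prot")):
                while len(usage[server]) <= t:
                    usage[server].append(0)
                free = capacity - usage[server][t]
                if kind == "work" and remaining > 0:
                    used = min(free, remaining)
                    remaining -= used
                elif kind == "prot" and prot_remaining > 0:
                    used = min(free, prot_remaining)
                    prot_remaining -= used
                else:
                    used = 0
                usage[server][t] += used
            t += 1
        delay = max(delay, t)                  # S7: T_u is the last slot used
    return delay                               # completion time of this service


def nearest_servers(servers, M):
    # Hypothetical helper: a real system would rank servers by shortest-path distance.
    return servers[:M]
```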
  3. The method for scheduling mobile edge computing-oriented distributed dedicated protection services according to claim 2, wherein the M MEC servers closest to service u in S1 are determined using a shortest-path routing algorithm.
  4. The method for scheduling mobile edge computing-oriented distributed dedicated protection services according to claim 2, wherein the round-robin manner in S4 is a service scheduling strategy based on round-robin splitting.
  5. The method for scheduling mobile edge computing-oriented distributed dedicated protection services according to claim 4, wherein the service scheduling strategy based on round-robin splitting specifically comprises:
    step C1: for any service in a service set, determining the M MEC servers closest to the current service according to the shortest-path routing algorithm, and adding the determined M servers to an available MEC server set E_u;
    step C2: for each MEC server in the available MEC server set, determining whether the j-th computing resource of the MEC server is idle at time i; if it is idle, allocating the computing resource to the service of step C1; if it is not idle, terminating the allocation;
    step C3: counting, for each MEC server in the available MEC server set, the resources allocated to the service deployed on it, as the size of the corresponding sub-service;
    step C4: determining the start time and end time of the service, and calculating the completion time T_u of all services.
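For illustration only (not part of the claims), a minimal sketch of steps C1-C4 follows, under the same simplifying assumptions as the previous sketch: each server offers a constant, positive number of resource units per slot, and the M-closest-server selection is a placeholder. The per-server allocation it records corresponds to the sub-service sizes of step C3.

```python
# Illustrative sketch of the round-robin-split scheduling strategy (C1-C4).
def round_robin_split(service_demand, servers, capacity, M):
    """capacity: resource units each server can offer per time slot (assumed > 0)."""
    e_u = servers[:M]                  # C1: the M closest servers (placeholder choice)
    allocated = {s: 0 for s in e_u}    # C3: resources each server carries = sub-service size
    remaining = service_demand
    slot = 0
    while remaining > 0:
        for s in e_u:                  # C2: poll each server's free resource in this slot
            used = min(capacity, remaining)
            allocated[s] += used
            remaining -= used
            if remaining == 0:
                break
        slot += 1                      # move to the next time slot
    return allocated, slot             # C4: completion time T_u in time slots


# Example: a service of 100 units split across 4 servers offering 10 units per slot
# completes in ceil(100 / (4 * 10)) = 3 slots.
# print(round_robin_split(100, ["m1", "m2", "m3", "m4", "m5"], 10, 4))
```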
  6. The method for scheduling mobile edge computing-oriented distributed dedicated protection services according to claim 1, further comprising building an integer linear programming optimization model, wherein the integer linear programming optimization model is built with the objective of minimizing the service delay, and the double round-robin scheduling strategy is established on the basis of the integer linear programming optimization model.
  7. The method for scheduling mobile edge computing-oriented distributed dedicated protection services according to claim 6, wherein, when building the integer linear programming optimization model with the objective of minimizing the service delay, U is defined as the set of services in the network, E as the set of MEC nodes in the network, K_u as the set of sub-services of service u, E_u as the set of MEC nodes available to service u, and TS as the set of available time slots; R_u is the MEC computing resource required by service u, u∈U, V_m is the total MEC computing resource that MEC server m can provide, and Δ is a preset very large value;
    [Formula image PCTCN2022089422-appb-100001] is a binary variable that takes the value 1 when MEC server m is selected at time t as the computing node of sub-service k of service u, and 0 otherwise;
    [Formula image PCTCN2022089422-appb-100002] is a binary variable that takes the value 1 when MEC server m is selected as the computing node of sub-service k of service u, and 0 otherwise;
    [Formula image PCTCN2022089422-appb-100003] is an integer variable denoting the computing resource provided by MEC server m at time t for sub-service k of service u;
    [Formula image PCTCN2022089422-appb-100004] is an integer variable denoting the MEC computing resource required by the k-th sub-service of service u after the splitting is completed;
    [Formula image PCTCN2022089422-appb-100005] is a binary variable that takes the value 1 when MEC server m is selected at time t as the computing node of the protection service of sub-service k of service u, and 0 otherwise;
    [Formula image PCTCN2022089422-appb-100006] is the MEC protection computing resource required by sub-service k of service u;
    [Formula image PCTCN2022089422-appb-100007] is the computing resource provided by MEC server m at time t for the protection service of sub-service k of service u; T_max is an integer variable denoting the completion time of all services;
    the optimization objective of minimizing the service delay is obtained as: minimize T_max.
  8. The method for scheduling mobile edge computing-oriented distributed dedicated protection services according to claim 7, wherein, when building the integer linear programming optimization model with the objective of minimizing the service delay, the constraints of the integer linear programming optimization model include service constraints, MEC server capacity constraints, delay constraints, and service protection constraints;
    the service constraints include: the sum of the computing resources required by the sub-services equals the amount of resources required by the service; the amount of resources each server allocates to the sub-services on it equals the amount of computing resources those sub-services need to carry; and the sub-services must be deployed on different servers for processing;
    the MEC server capacity constraints include: the total computing resources used on each MEC server cannot exceed its maximum available computing resources;
    the delay constraints include: the total delay for completing the services cannot exceed the maximum number of time slots;
    the service protection constraints include: the sum of the computing resources required by a protected sub-service equals the sum of the computing resources required by its protection sub-service, and a protected sub-service and its corresponding protection sub-service are deployed on different MEC servers.
  9. The method for scheduling mobile edge computing-oriented distributed dedicated protection services according to claim 8, wherein the service constraints are expressed as:
    [Formula image PCTCN2022089422-appb-100008] indicates that each sub-service k of service u can only be deployed on one MEC server;
    [Formula image PCTCN2022089422-appb-100009] indicates that one MEC server cannot serve any two sub-services of service u at the same time;
    [Formula image PCTCN2022089422-appb-100010] indicates that any two sub-services of service u must be deployed on different servers for processing;
    [Formula image PCTCN2022089422-appb-100011] indicates that once MEC server m provides computing resources for sub-service k of service u, server m is selected as the computing node of sub-service k;
    [Formula image PCTCN2022089422-appb-100012] indicates that the total computing resources provided by server m to sub-service k of service u equal the computing resources required by sub-service k, and that the computing resources of all sub-services k add up to the computing-resource demand of service u;
    the MEC server capacity constraint is expressed as:
    [Formula image PCTCN2022089422-appb-100013] indicates that at any time t, the sum of the computing resources and protection computing resources that an MEC server provides to the sub-services cannot exceed the maximum of its own available computing resources;
    the delay constraints are expressed as:
    [Formula image PCTCN2022089422-appb-100014] computes the time at which all services are processed, which cannot be less than the end time of any service in the MEC network;
    [Formula image PCTCN2022089422-appb-100015] computes the time at which all services are processed, which is not less than the completion time of any service in the MEC network, including the service protection time;
    the service protection constraints are expressed as:
    [Formula image PCTCN2022089422-appb-100016] and [Formula image PCTCN2022089422-appb-100017] indicate that no protection service can be scheduled onto the same server as the service it protects, and that each protection sub-service can only be provided with computing resources by a single MEC server;
    [Formula image PCTCN2022089422-appb-100018] and [Formula image PCTCN2022089422-appb-100019] indicate that the splitting pattern of a protection sub-service is exactly identical to that of the protected sub-service.
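For illustration only (not part of the claims): since the constraint formulas referenced in claims 7 to 9 appear in this text only as image placeholders, the following LaTeX sketch shows one possible way to write the constraint families they describe. All symbol names (x, y, c, r and their primed protection counterparts) are hypothetical and chosen for readability; they are not the notation of the original filing.

```latex
% Hypothetical notation (illustration only):
%   x_{m,t}^{u,k}  binary, 1 if server m serves sub-service k of service u at slot t;
%   y_{m}^{u,k}    binary, 1 if server m is the working server of sub-service k;
%   c_{m,t}^{u,k}  resource given by server m at slot t to sub-service k; r^{u,k} its demand;
%   primed symbols (x', y', c', r') are the corresponding protection quantities.
% All constraints hold for every u in U, k in K_u, m in E_u, t in TS, as applicable.
\begin{align}
  &\sum_{m\in E_u} y_{m}^{u,k}=1,\qquad \sum_{k\in K_u} y_{m}^{u,k}\le 1
     &&\text{(one server per sub-service; distinct servers)}\\
  &\sum_{t\in TS} c_{m,t}^{u,k}=r^{u,k}\,y_{m}^{u,k},\qquad \sum_{k\in K_u} r^{u,k}=R_u
     &&\text{(allocated resources match the demands)}\\
  &\sum_{u\in U}\sum_{k\in K_u}\bigl(c_{m,t}^{u,k}+c_{m,t}^{\prime\,u,k}\bigr)\le V_m
     &&\text{(capacity of server $m$ at slot $t$)}\\
  &T_{\max}\ge t\,x_{m,t}^{u,k},\qquad T_{\max}\ge t\,x_{m,t}^{\prime\,u,k}
     &&\text{(delay covers working and protection slots)}\\
  &y_{m}^{u,k}+y_{m}^{\prime\,u,k}\le 1,\qquad r^{\prime\,u,k}=r^{u,k}
     &&\text{(protection on a different server, same split)}
\end{align}
```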
  10. A system for scheduling mobile edge computing-oriented distributed dedicated protection services, comprising a large-scale network composed of MEC servers, wherein the MEC servers deploy services and the dedicated protection services corresponding to the services using the method for scheduling mobile edge computing-oriented distributed dedicated protection services according to any one of claims 1 to 5.
PCT/CN2022/089422 2021-06-11 2022-04-27 面向移动边缘计算的分布式专用保护业务调度方法 WO2022257631A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US18/195,591 US20230283527A1 (en) 2021-06-11 2023-05-10 Method for scheduling mobile edge computing-oriented distributed dedicated protection services

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110656893.5 2021-06-11
CN202110656893.5A CN113179331B (zh) 2021-06-11 2021-06-11 面向移动边缘计算的分布式专用保护业务调度方法

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/195,591 Continuation US20230283527A1 (en) 2021-06-11 2023-05-10 Method for scheduling mobile edge computing-oriented distributed dedicated protection services

Publications (1)

Publication Number Publication Date
WO2022257631A1 true WO2022257631A1 (zh) 2022-12-15

Family

ID=76928018

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/089422 WO2022257631A1 (zh) 2021-06-11 2022-04-27 面向移动边缘计算的分布式专用保护业务调度方法

Country Status (3)

Country Link
US (1) US20230283527A1 (zh)
CN (1) CN113179331B (zh)
WO (1) WO2022257631A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115809147A (zh) * 2023-01-16 2023-03-17 合肥工业大学智能制造技术研究院 多边缘协作缓存调度优化方法、系统及模型训练方法

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113179331B (zh) * 2021-06-11 2022-02-11 苏州大学 面向移动边缘计算的分布式专用保护业务调度方法
CN113709249B (zh) * 2021-08-30 2023-04-18 北京邮电大学 辅助驾驶业务安全均衡卸载方法及系统
CN113938953B (zh) * 2021-09-28 2022-07-12 苏州大学 一种基于边缘计算的FiWi网络负载均衡方法及系统
CN117527807B (zh) * 2023-11-21 2024-05-31 扬州万方科技股份有限公司 一种多微云任务调度方法、装置及设备

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108304250A (zh) * 2018-03-05 2018-07-20 北京百度网讯科技有限公司 用于确定运行机器学习任务的节点的方法和装置
CN111770477A (zh) * 2020-06-08 2020-10-13 中天通信技术有限公司 一种mec网络的保护资源的部署方法及相关装置
CN112888005A (zh) * 2021-02-26 2021-06-01 中天通信技术有限公司 一种面向mec的分布式业务调度方法
CN113179331A (zh) * 2021-06-11 2021-07-27 苏州大学 面向移动边缘计算的分布式专用保护业务调度方法

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102281201A (zh) * 2011-08-29 2011-12-14 中国联合网络通信集团有限公司 路由选路和资源分配方法及装置
CN103763373A (zh) * 2014-01-23 2014-04-30 浪潮(北京)电子信息产业有限公司 一种基于云计算的调度方法和调度器
US9442760B2 (en) * 2014-10-03 2016-09-13 Microsoft Technology Licensing, Llc Job scheduling using expected server performance information
CN111381950B (zh) * 2020-03-05 2023-07-21 南京大学 一种面向边缘计算环境基于多副本的任务调度方法和系统
CN112148505A (zh) * 2020-09-18 2020-12-29 京东数字科技控股股份有限公司 数据跑批系统、方法、电子设备和存储介质
CN112463535B (zh) * 2020-11-27 2024-05-10 中国工商银行股份有限公司 多集群异常处理方法及装置

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108304250A (zh) * 2018-03-05 2018-07-20 北京百度网讯科技有限公司 用于确定运行机器学习任务的节点的方法和装置
CN111770477A (zh) * 2020-06-08 2020-10-13 中天通信技术有限公司 一种mec网络的保护资源的部署方法及相关装置
CN112888005A (zh) * 2021-02-26 2021-06-01 中天通信技术有限公司 一种面向mec的分布式业务调度方法
CN113179331A (zh) * 2021-06-11 2021-07-27 苏州大学 面向移动边缘计算的分布式专用保护业务调度方法

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WANG TONG: "Protection Strategies for Mobile Edge Computing Network", CHINESE MASTER’S THESES FULL-TEXT DATABASE, INFORMATION SCIENCE AND TECHNOLOGY, no. 2, 15 February 2021 (2021-02-15), XP093014492 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115809147A (zh) * 2023-01-16 2023-03-17 合肥工业大学智能制造技术研究院 多边缘协作缓存调度优化方法、系统及模型训练方法
CN115809147B (zh) * 2023-01-16 2023-04-25 合肥工业大学智能制造技术研究院 多边缘协作缓存调度优化方法、系统及模型训练方法

Also Published As

Publication number Publication date
CN113179331B (zh) 2022-02-11
CN113179331A (zh) 2021-07-27
US20230283527A1 (en) 2023-09-07

Similar Documents

Publication Publication Date Title
WO2022257631A1 (zh) 面向移动边缘计算的分布式专用保护业务调度方法
Dakshayini et al. An optimal model for priority based service scheduling policy for cloud computing environment
CN108268317B (zh) 一种资源分配方法及装置
CN108965014A (zh) QoS感知的服务链备份方法及系统
Fei et al. Towards load-balanced VNF assignment in geo-distributed NFV infrastructure
CN110308967B (zh) 一种基于混合云的工作流成本-延迟最优化任务分配方法
CN113784373B (zh) 云边协同网络中时延和频谱占用联合优化方法及系统
EP3451727A1 (en) Access scheduling method and device for terminal, and computer storage medium
EP3218807A1 (en) Method and system for real-time resource consumption control in a distributed computing environment
CN114071582A (zh) 面向云边协同物联网的服务链部署方法及装置
Zhang et al. Reservation-based resource scheduling and code partition in mobile cloud computing
CN111159859B (zh) 一种云容器集群的部署方法及系统
CN106998340B (zh) 一种板卡资源的负载均衡方法及装置
Tseng et al. An mec-based vnf placement and scheduling scheme for ar application topology
Turner et al. Meeting users’ QoS in a edge-to-cloud platform via optimally placing services and scheduling tasks
Yue et al. Cloud server job selection and scheduling in mobile computation offloading
Xu et al. Mroco: A novel approach to structured application scheduling with a hybrid vehicular cloud-edge environment
CN112685167A (zh) 资源使用方法、电子设备和计算机程序产品
CN104010374B (zh) 一种进行业务调度的方法及装置
CN113986511A (zh) 任务管理方法及相关装置
Yuan et al. A novel algorithm for embedding dynamic virtual network request
He et al. QoS-Aware and Resource-Efficient Dynamic Slicing Mechanism for Internet of Things.
CN113645077A (zh) 一种多专线非对称带宽规划方法、装置、设备及介质
Laredo et al. Designing a self-organized approach for scheduling bag-of-tasks
Turner et al. Meeting QoS of users in a edge to cloud platform via optimally placing services and scheduling tasks

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22819233

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22819233

Country of ref document: EP

Kind code of ref document: A1