CN114661445A - Scheduling method, device and equipment

Info

Publication number: CN114661445A
Application number: CN202210356760.0A
Authority: CN (China)
Prior art keywords: request, scheduled, target, stream, scheduling
Legal status: Pending (assumed; not a legal conclusion)
Other languages: Chinese (zh)
Inventor: 童鑫
Current assignee: Alibaba China Co Ltd
Original assignee: Alibaba China Co Ltd
Application filed by Alibaba China Co Ltd

Classifications

    • Hierarchy: G (PHYSICS) › G06 (COMPUTING; CALCULATING OR COUNTING) › G06F (ELECTRIC DIGITAL DATA PROCESSING) › G06F 9/00 (Arrangements for program control, e.g. control units) › G06F 9/06 (using stored programs, i.e. using an internal store of processing equipment to receive or retain programs) › G06F 9/46 (Multiprogramming arrangements)
    • G06F 9/4881: Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues (under G06F 9/48, Program initiating; program switching, e.g. by interrupt; G06F 9/4806, Task transfer initiation or dispatching; G06F 9/4843, by program, e.g. task dispatcher, supervisor, operating system)
    • G06F 9/5016: Allocation of resources to service a request, the resource being the memory (under G06F 9/50, Allocation of resources, e.g. of the central processing unit [CPU]; G06F 9/5005, to service a request; G06F 9/5011, the resources being hardware resources other than CPUs, servers and terminals)
    • G06F 9/544: Buffers; shared memory; pipes (under G06F 9/54, Interprogram communication)
    • G06F 2209/548: Queue (indexing scheme relating to G06F 9/54)


Abstract

The embodiments of the present application provide a scheduling method, apparatus, and device. The method comprises the following steps: acquiring a to-be-scheduled request of an object; placing the request at the corresponding position of the corresponding request stream in a queue, so that it becomes a to-be-scheduled request of that stream, where request streams are abstracted from objects and the weight of a request stream is the weight of its object; and, according to the weights of the request streams, selecting a target request stream from among the streams that have to-be-scheduled requests, selecting a target to-be-scheduled request of the target stream from the queue, and taking that request out of the queue for execution. The scheme takes the weight of the object (such as a user) to which a request belongs into account during scheduling, meeting the scheduling requirement, while keeping scheduling complexity low; it is thus a scheduling approach that balances weight fairness with scheduling efficiency.

Description

Scheduling method, device and equipment
Technical Field
The present application relates to the field of computer technologies, and in particular, to a scheduling method, apparatus, and device.
Background
A distributed storage system may comprise data nodes. A data node is the process that actually stores user data in the distributed storage system; it can interact with the system's client processes and provides data read/write capability.
Generally, a user's Input/Output (IO) request first arrives at a scheduler in the data node and queues there; the scheduler then selects an IO request to dequeue, according to its scheduling policy, for subsequent service.
Disclosure of Invention
The embodiments of the present application provide a scheduling method, apparatus, and device, which solve the problem in the prior art that scheduling requirements, such as scheduling according to weight, cannot be met.
In a first aspect, an embodiment of the present application provides a scheduling method, including:
acquiring a request to be scheduled of an object;
putting the to-be-scheduled request at the corresponding position of the corresponding request stream in a queue, to serve as a to-be-scheduled request of that request stream, wherein request streams are abstracted from objects and the weight of a request stream is the weight of the corresponding object;
selecting, according to the weights of the request streams, a target request stream from among the request streams that have to-be-scheduled requests, selecting a target to-be-scheduled request of the target request stream from the queue, and taking the target to-be-scheduled request out of the queue for execution.
In a second aspect, an embodiment of the present application provides a scheduling apparatus, including:
an acquisition module, configured to acquire a to-be-scheduled request of an object;
an enqueuing module, configured to put the to-be-scheduled request at the corresponding position of the corresponding request stream in a queue, to serve as a to-be-scheduled request of that request stream, wherein request streams are abstracted from objects and the weight of a request stream is the weight of the corresponding object;
and a dequeuing module, configured to select, according to the weights of the request streams, a target request stream from among the request streams that have to-be-scheduled requests, select a target to-be-scheduled request of the target request stream from the queue, and take the target to-be-scheduled request out of the queue for execution.
In a third aspect, an embodiment of the present application provides a computer device, comprising a memory and a processor, wherein the memory stores one or more computer instructions which, when executed by the processor, implement the method of any one of the first aspects.
In a fourth aspect, the present application provides a computer-readable storage medium, on which a computer program is stored, which, when executed, implements the method according to any one of the first aspect.
Embodiments of the present application also provide a computer program, which is used to implement the method according to any one of the first aspect when the computer program is executed by a computer.
In the embodiments of the present application, abstract request streams serve as the scheduling granularity: request streams are abstracted from objects, and the weight of a request stream is the weight of the corresponding object. A target request stream is selected according to the weights of the request streams, a target to-be-scheduled request of the target stream is then selected from the queue, and that request is taken out of the queue for execution.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present application, and other drawings can be obtained by those skilled in the art without creative efforts.
Fig. 1 is a schematic view of an application scenario of a scheduling method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of the role of a scheduler in the prior art;
FIG. 3 is a diagram illustrating a prior art scheduling method using a RoundRobin policy;
fig. 4 is a schematic flowchart of a scheduling method according to an embodiment of the present application;
fig. 5 is a schematic diagram illustrating scheduling with request streams as scheduling granularity according to an embodiment of the present application;
fig. 6 is a schematic diagram of scheduling with request streams as scheduling granularity according to another embodiment of the present application;
fig. 7 is a schematic structural diagram of a scheduling apparatus according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terminology used in the embodiments of the present application is for the purpose of describing particular embodiments only and is not intended to limit the application. As used in the examples of this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise; "a plurality of" typically means at least two, but does not exclude the case of at least one.
It should be understood that the term "and/or" as used herein merely describes an association between objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" herein generally indicates that the objects before and after it are in an "or" relationship.
The word "if" as used herein may be interpreted as "when", "upon", "in response to determining", or "in response to detecting", depending on the context. Similarly, the phrases "if it is determined" or "if (a stated condition or event) is detected" may be interpreted as "when it is determined", "in response to determining", "when (a stated condition or event) is detected", or "in response to detecting (a stated condition or event)", depending on the context.
It is also noted that the terms "comprises", "comprising", or any other variation thereof, are intended to cover a non-exclusive inclusion, such that an article or system that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such article or system. Without further limitation, an element preceded by "comprising a ..." does not exclude the presence of other like elements in the article or system that comprises the element.
In addition, the sequence of steps in each method embodiment described below is only an example and is not strictly limited.
Fig. 1 is a schematic view of an application scenario of a scheduling method provided in an embodiment of the present application. As shown in fig. 1, the scenario may include a plurality of first devices 11 and a second device 12. A first device 11 may send a to-be-scheduled request to the second device 12; a scheduler X may run in the second device 12 and schedule the to-be-scheduled requests sent by the plurality of first devices 11, thereby implementing resource sharing among them. It should be understood that the resource shared by the plurality of first devices may be a resource of the second device 12, or a resource of a device other than the second device 12.
It should be noted that the scheduling method provided by the embodiments of the present application may be applied to any type of scenario in which to-be-scheduled requests need to be scheduled according to weight. A to-be-scheduled request may be, for example, an IO request or a network request; in other embodiments it may be another type of request. Taking a distributed storage system as an example, the to-be-scheduled request may be an IO request, and the method provided in the embodiments of the present application may be executed by a data node in the system; that is, the second device 12 in fig. 1 may be the device where the data node resides.
The scheduler X may be understood as a queuing system, and a to-be-scheduled request as a queuing unit: queuing units from the first devices 11 enter the queuing system to queue, and the queuing system determines the order in which requests leave it, thereby determining the order in which they occupy resources.
In one embodiment, a first device 11 may be a user's device on which a client process runs. The client process may execute a user task and, in the course of doing so, generate to-be-scheduled requests that need to be handed to the scheduler X for scheduling. As shown in fig. 2, a to-be-scheduled request generated while executing a user task may enter the queuing system as a queuing unit; the queuing system determines the dequeue order, and a dequeued request may then actually occupy resources. In addition, the resources may feed information back to the scheduling logic during use: for example, process priority may be dynamically adjusted according to the occupied CPU time slice, and scheduling may be back-pressured according to the overall occupancy of network/IO bandwidth, corresponding to the FeedBack arrow in fig. 2.
In practical applications, the scheduler X may put the acquired to-be-scheduled requests into a queue and schedule them: at each scheduling, it selects one request from the queue and takes it out for execution. It should be understood that a to-be-scheduled request may be executed by the second device 12 or by a device other than the second device.
Generally, a scheduler schedules the requests in the queue using a per-user polling (RoundRobin) policy. Taking IO requests as an example, as shown in fig. 3, suppose the queue contains three data structures (denoted slots) for storing requests, slot 1, slot 2, and slot 3 from front to back: slot 1 stores IO requests 1 and 2 of user 1 (Uid1), IO requests 3, 4, and 5 of user 2 (Uid2), and IO requests 1 and 2 of user 3 (Uid3); slot 2 stores no requests; slot 3 stores IO requests 1 and 2 of user 4 (Uid4) and IO requests 1 and 2 of user 5 (Uid5). The scheduling sequence, from front to back, is then: Uid1.IO request 1 → Uid2.IO request 3 → Uid3.IO request 1 → Uid4.IO request 1 → Uid5.IO request 1 → Uid1.IO request 2 → Uid2.IO request 4 → Uid3.IO request 2 → Uid4.IO request 2 → Uid5.IO request 2 → Uid2.IO request 5.
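The per-user RoundRobin dequeue just described can be sketched in a few lines of Python; the function name and the dict-of-deques layout are illustrative choices, not part of the patent:

```python
from collections import deque

def round_robin(queues):
    """Dequeue one request per user per pass, cycling until every queue drains.

    `queues` maps a user id to a deque of that user's pending IO requests,
    in the front-to-back order in which they sit in the queue's slots.
    """
    order = []
    while any(queues.values()):
        for uid, q in queues.items():
            if q:  # skip users with no pending requests
                order.append((uid, q.popleft()))
    return order

# The fig. 3 example: slot 1 holds Uid1-Uid3's requests, slot 3 holds Uid4-Uid5's.
queues = {
    "Uid1": deque(["IO1", "IO2"]),
    "Uid2": deque(["IO3", "IO4", "IO5"]),
    "Uid3": deque(["IO1", "IO2"]),
    "Uid4": deque(["IO1", "IO2"]),
    "Uid5": deque(["IO1", "IO2"]),
}
print(round_robin(queues))  # same order as the fig. 3 sequence
```

Note that each user gets one slot per pass regardless of weight, which is exactly the limitation discussed next.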
The RoundRobin policy has low scheduling complexity, but it cannot exploit the weight attribute of requests and cannot guarantee weight-fair service between the requests of users with different weights. With the introduction of QoS functions and characteristics such as priority, however, scheduling needs to take the user's weight into account.
To solve the technical problem of scheduling to-be-scheduled requests according to weight, the embodiments of the present application take abstract request streams as the scheduling granularity: request streams are abstracted from objects, the weight of a request stream is the weight of the corresponding object, a target request stream is selected according to the weights of the request streams, a target to-be-scheduled request of the target stream is then selected from the queue, and that request is taken out of the queue for execution.
Some embodiments of the present application will be described in detail below with reference to the accompanying drawings. The embodiments described below and the features of the embodiments can be combined with each other without conflict.
Fig. 4 is a flowchart illustrating a scheduling method according to an embodiment of the present application, where an execution subject of the embodiment may be the second device 12 in fig. 1. As shown in fig. 4, the method of this embodiment may include:
step 41, obtaining a to-be-scheduled request of an object;
step 42, putting the request to be scheduled into a corresponding position of a corresponding request stream in a queue to serve as the request to be scheduled of the corresponding request stream, wherein the request stream is obtained according to object abstraction, and the weight of the request stream is the weight of the corresponding object;
step 43, selecting a target request stream from the request streams that have to-be-scheduled requests according to the weights of the request streams, selecting a target to-be-scheduled request of the target request stream from the queue, and taking the target to-be-scheduled request out of the queue for execution.
In the embodiments of the present application, the meaning of an object depends on the application scenario. For example, where scheduling lets different users share resources, the object may be a user; where it lets different programs share resources, the object may be a program. Different objects may have different weights, which can be taken into account during scheduling.
In the embodiments of the present application, request streams are abstracted from objects: the to-be-scheduled requests of the same object are abstracted into the same request stream, and those of different objects into different request streams. A request stream has a weight, namely the weight of its object; a higher weight may indicate greater importance. The weights of the abstracted request streams may all differ, or some may coincide.
For example, suppose the objects sending to-be-scheduled requests are object A, object B, and object C, with weights 10, 10, and 30 respectively. The requests of object A may be abstracted into one request stream (denoted request stream x) with weight 10; those of object B into another stream (request stream y) with weight 10; and those of object C into a third stream (request stream z) with weight 30.
A request stream has a corresponding position in the queue for storing its to-be-scheduled requests. The queue may include a plurality of storage structures, and one storage structure may serve as the corresponding position of one or more request streams. The position of a given request stream stores the to-be-scheduled requests of the object corresponding to that stream; for example, the position of request stream x stores the to-be-scheduled requests of object A.
After a to-be-scheduled request of an object is acquired, it may be placed at the corresponding position of the corresponding request stream in the queue, becoming a to-be-scheduled request of that stream. The specific way the request is acquired is not limited in this application; it may, for example, be received from the object. If a to-be-scheduled request of object A is acquired, for instance, it is placed at the position of request stream x in the queue and becomes a to-be-scheduled request of stream x.
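The enqueue step can be pictured as one queue with a FIFO position per request stream. Below is a minimal Python sketch; the names (`StreamQueue`, `enqueue`, `dequeue`) are hypothetical, not an API from the patent:

```python
from collections import deque

class StreamQueue:
    """One queue with a dedicated FIFO position per request stream (one per object)."""

    def __init__(self):
        self.streams = {}  # object id -> deque of that stream's pending requests

    def enqueue(self, obj_id, request):
        # The object's request becomes a to-be-scheduled request of its stream.
        self.streams.setdefault(obj_id, deque()).append(request)

    def dequeue(self, obj_id):
        # FIFO within a stream: the earliest-enqueued request leaves first.
        return self.streams[obj_id].popleft()

q = StreamQueue()
q.enqueue("A", "req1")
q.enqueue("A", "req2")
q.enqueue("B", "req3")
print(q.dequeue("A"))  # req1, since req1 entered stream A's position first
```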
In the embodiments of the present application, the request stream serves as the scheduling granularity: the target request stream is selected according to the weights of the request streams, and a to-be-scheduled request of the target stream is then selected for scheduling. Requests are thereby scheduled according to stream weight (i.e., the weight of the corresponding object), so the streams of different objects share resources in proportion to their weights. Using request streams as the scheduling granularity also reduces scheduling complexity to O(log k), where k is the number of request streams with pending requests, improving scheduling efficiency.
Specifically, according to the weights of the request streams, a target request stream may be selected from the streams that have to-be-scheduled requests, a target to-be-scheduled request of the target stream may be selected from the queue, and that request may be taken out of the queue for execution. A request stream that has pending requests may also be called a non-empty request stream.
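A binary min-heap is a common way to reach the O(log k) selection cost mentioned above. The sketch below (illustrative names, not from the patent) keys the heap on a per-stream scheduling value, as maintained in the paragraphs that follow, plus a sequence counter for deterministic tie-breaking:

```python
import heapq

# Min-heap of (scheduling value, sequence, stream name): selecting the
# non-empty stream with the smallest value costs O(log k), where k is the
# number of streams currently holding pending requests.
heap = []
counter = 0

def push(value, stream):
    global counter
    heapq.heappush(heap, (value, counter, stream))  # counter breaks ties
    counter += 1

def pop_min():
    value, _, stream = heapq.heappop(heap)
    return stream, value

push(1.0, "x")
push(0.5, "z")
push(1.0, "y")
print(pop_min())  # ('z', 0.5): the stream with the smallest value leaves first
```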
In an embodiment, selecting a target to-be-scheduled request of the target request stream from the queue may specifically comprise: selecting it according to the first-in-first-out principle. Thus, within the same request stream, the request that entered the queue first is taken out and executed first.
In the embodiments of the present application, a scheduling value may be maintained for each request stream, related to the stream's scheduled count and weight. The scheduled count of a request stream is the total number of its to-be-scheduled requests that have been scheduled since the stream began participating in scheduling. Optionally, the scheduling value may be a scheduling accumulation value that is positively correlated with the stream's scheduled count and negatively correlated with its weight.
Based on this, in an embodiment, selecting a target request stream according to the weights of the request streams specifically comprises: from the request streams that have to-be-scheduled requests, selecting the stream with the smallest scheduling accumulation value as the target request stream. Correspondingly, the method may further comprise: updating the scheduling accumulation value of the target request stream, for example once for every request of the target stream that is scheduled.
Because the growth of the accumulation value per scheduled request is inversely related to the weight, the larger a request stream's weight, the more slowly its accumulation value grows. And since the stream with the smallest accumulation value is selected, a request stream with a higher weight is scheduled with higher priority than one with a lower weight.
Further, the scheduling accumulation value of the target request stream may be updated according to an accumulation factor, where the accumulation factor of a request stream is inversely related to its weight. For example, the updated accumulation value may be the sum of the current accumulation value and the accumulation factor of the target stream.
Optionally, to make it easy to determine the accumulation factor of a newly added request stream in scenarios where the number of streams may grow, accumulation factors may be determined relative to a reference weight. In one embodiment, the accumulation factor of a request stream equals the ratio of the reference weight to the stream's weight. For request streams x, y, and z, for instance, if the reference weight is 10, the accumulation factors are 1, 1, and 1/3 respectively.
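Combining minimum-accumulation-value selection with the accumulation factor gives a compact Python sketch (illustrative names; the tie-break toward the heavier stream is an assumption made only to keep the example deterministic). With weights 10, 10, and 30 and reference weight 10, five schedulings serve stream z three times and streams x and y once each, matching the 3:1:1 weight ratio:

```python
from fractions import Fraction

def schedule(streams, rounds):
    """Pick, `rounds` times, the stream with the smallest scheduling
    accumulation value, then bump that value by the stream's factor.

    `streams` maps a stream name to its weight; ties are broken toward the
    heavier stream purely to make the sketch deterministic.
    """
    ref_weight = 10  # reference weight from the example above
    factor = {s: Fraction(ref_weight, w) for s, w in streams.items()}
    accum = {s: Fraction(0) for s in streams}
    picks = []
    for _ in range(rounds):
        target = min(accum, key=lambda s: (accum[s], -streams[s]))
        picks.append(target)
        accum[target] += factor[target]  # larger weight -> slower growth
    return picks

# Streams x, y, z with weights 10, 10, 30 -> accumulation factors 1, 1, 1/3.
print(schedule({"x": 10, "y": 10, "z": 30}, 5))  # ['z', 'x', 'y', 'z', 'z']
```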
Alternatively, the target request stream may be re-determined at every scheduling. Based on this, in an embodiment, step 43 may specifically comprise: at each scheduling, selecting a target request stream from the streams that have to-be-scheduled requests, selecting a target to-be-scheduled request of that stream from the queue, and taking it out. The stream scheduled each time is then the one with the smallest scheduling accumulation value.
Or, alternatively, a determined target request stream may be reused across consecutive schedulings. Based on this, in another embodiment, step 43 may specifically comprise: at each scheduling, if the number of consecutive schedulings of the first target request stream (the one used in the previous scheduling) has reached that stream's single-round scheduling number threshold in the current round, or the first target stream currently has no pending request, then selecting a second target request stream from the streams that have pending requests, selecting a target to-be-scheduled request of the second target stream from the queue, and taking it out; otherwise, selecting a target to-be-scheduled request of the first target stream from the queue and taking it out. In this way, several requests of the same stream are output consecutively before switching to the next stream, further amortizing the per-request scheduling overhead.
The number of consecutive schedulings of a request stream in the current round is the number of its requests that have been scheduled back-to-back in that round. The single-round scheduling number threshold of a request stream is positively correlated with its weight; in one embodiment, it equals the ratio of the stream's weight to a reference weight, multiplied by a reference number threshold. The reference number threshold corresponds to the reference weight and can be understood as the single-round threshold of a stream whose weight equals the reference weight.
In the embodiments of the present application, from the start of scheduling, every N scheduled requests form one round, where N is a positive integer called the total single-round scheduling number; the time one round occupies can be understood as a scheduling period. The total single-round scheduling number may equal the sum of the single-round thresholds of all request streams. Since each stream's threshold is positively correlated with its weight, each stream's share of the total is likewise positively correlated with its weight.
Within a single round, the number of times a request stream is scheduled may equal its single-round threshold; alternatively, a first request stream may be scheduled fewer times than its threshold while a second request stream is scheduled more times than its threshold (for example, when the first stream runs out of pending requests).
For example, assuming that the reference weight equals 10 and the reference number threshold equals 1, the single-round scheduling number threshold of request stream x is 1, that of request stream y is 1, and that of request stream z is 3; the scheduling order of the multi-round scheduling for request stream x (i.e., Flowx), request stream y (i.e., Flowy), and request stream z (i.e., Flowz) may be as shown in fig. 5. In the first scheduling period (i.e., schedule epoch1), 1+1+3=5 IO requests are scheduled in total: IO request z1 of stream z, IO request x1 of stream x, IO request y1 of stream y, and IO requests z2 and z3 of stream z. In the second scheduling period (i.e., schedule epoch2), the 5 scheduled IO requests are IO request z4 of stream z, IO request x2 of stream x, IO request y2 of stream y, and IO requests z5 and z6 of stream z. In the third scheduling period (i.e., schedule epoch3), since request stream z has no request to be scheduled, the 5 scheduled IO requests are IO requests x3 and x4 of stream x, IO requests y3 and y4 of stream y, and IO request x5 of stream x.
It can be seen that, in the first round of scheduling and the second round of scheduling, the scheduling number of each request flow is equal to the single-round scheduling number threshold of the request flow; in the third round of scheduling, the scheduling number of the request stream x is greater than the single round scheduling number threshold of the request stream x, the scheduling number of the request stream y is greater than the single round scheduling number threshold of the request stream y, and the scheduling number of the request stream z is less than the single round scheduling number threshold of the request stream z.
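The round-based behavior described above can be sketched in a few lines of Python. This is only an illustrative sketch, not the patent's implementation: the two-pass quota handling and the redistribution order for unused slots are assumptions, so the exact interleaving within a round may differ from fig. 5.

```python
from collections import deque

def schedule_epoch(flows, quotas, total):
    """Dispatch up to `total` requests in one scheduling period.

    flows:  dict name -> deque of pending requests (a request stream)
    quotas: dict name -> single-round scheduling number threshold
    First each stream is given up to its own quota; if some stream runs
    empty, its unused share is consumed by the remaining non-empty
    streams, so the round still dispatches `total` requests when enough
    work is available (as in the third scheduling period above).
    """
    dispatched = []
    # first pass: each stream takes up to its single-round threshold
    for name, q in flows.items():
        for _ in range(min(quotas[name], len(q))):
            dispatched.append(q.popleft())
    # second pass: redistribute leftover slots to non-empty streams
    while len(dispatched) < total and any(flows.values()):
        for q in flows.values():
            if len(dispatched) >= total:
                break
            if q:
                dispatched.append(q.popleft())
    return dispatched
```

With streams x, y, z holding 5, 4, and 6 requests and thresholds 1, 1, 3, three successive calls dispatch 5 requests each; in the third round stream z is empty and streams x and y exceed their thresholds, mirroring the example.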
In an embodiment, taking the request to be scheduled as an IO request as an example, the request stream may be denoted as an IO stream, for example, the ith IO stream may be denoted as an IO stream (i), and the scheduling attribute of the IO stream (i) may include a single-round scheduling number threshold of the IO stream (i) and a scheduling accumulated value of the IO stream (i).
The single-round scheduling number threshold of IO flow (i) may satisfy the following formula (1).
DispatchSlice(i) = (Weight(i) / DefaultWeight) × DefaultDispatchSlice ----- formula (1)
Wherein DispatchSlice(i) represents the single-round scheduling number threshold of the IO stream (i), Weight(i) represents the weight of the IO stream (i), DefaultWeight represents the reference weight, and DefaultDispatchSlice represents the reference number threshold.
The scheduling accumulated value of the IO stream (i) may satisfy the following formula (2):
VirtualDispatchSlice(i) = VirtualDispatchSlice(i) + (DefaultWeight / Weight(i)) × CurrentSlice(i) ----- formula (2)
Wherein VirtualDispatchSlice(i) represents the scheduling accumulated value of the IO stream (i), DefaultWeight represents the reference weight, Weight(i) represents the weight of the IO stream (i), and CurrentSlice(i) represents the continuous scheduling number of the IO stream (i) in the current scheduling (i.e., in the current scheduling period), whose range is [0, DispatchSlice(i)].
It should be noted that formula (2) takes as an example the case where the determined target request stream is used for multiple consecutive schedulings, and VirtualDispatchSlice(i) of the IO stream (i) is updated once after every CurrentSlice(i) consecutive schedulings of the IO stream (i); in this case, CurrentSlice(i) of the IO stream (i) may be reset to 0 after VirtualDispatchSlice(i) of the IO stream (i) is updated. The scheduler may initialize VirtualDispatchSlice(i) to 0 during initialization.
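Formulas (1) and (2) can be expressed as two small helper functions. This is a sketch; the function names are invented here, and the default arguments simply mirror the reference values used in the examples (DefaultWeight = 10, DefaultDispatchSlice = 1).

```python
def dispatch_slice(weight, default_weight=10, default_dispatch_slice=1):
    """Formula (1): single-round scheduling number threshold of a stream."""
    return weight / default_weight * default_dispatch_slice

def update_vd_slice(vd_slice, weight, current_slice, default_weight=10):
    """Formula (2): new scheduling accumulated value after `current_slice`
    consecutive dispatches from a stream of the given weight."""
    return vd_slice + default_weight / weight * current_slice
```

For instance, a stream of weight 30 gets a threshold of 3, and dispatching 3 of its requests consecutively raises its accumulated value by 10/30 × 3 = 1, the same increment a weight-10 stream incurs for a single dispatch.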
The IO stream scheduling policy may be to select the non-empty IO stream corresponding to MinVirtualDispatchSlice.
MinVirtualDispatchSlice may satisfy the following formula (3).
MinVirtualDispatchSlice = Min(VirtualDispatchSlice(i)) ----- formula (3)
During the initialization process, the scheduler may initialize MinVirtualDispatchSlice to 0.
It can be seen from the above formulas (1) to (3) that the larger the Weight(i) of the IO stream (i), the larger its DispatchSlice(i) and the lower the rate at which its VirtualDispatchSlice(i) grows, so a high-weight IO stream is scheduled preferentially over a low-weight IO stream.
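The selection rule of formula (3) can be sketched as follows: pick a non-empty stream whose scheduling accumulated value is minimal. This is an illustrative sketch with invented names; ties are broken here by dictionary order, whereas the text allows any non-empty tied stream to be chosen.

```python
def pick_target_flow(vd_slices, queues):
    """Return the name of a non-empty stream whose scheduling
    accumulated value equals MinVirtualDispatchSlice (formula (3)),
    or None when every stream is empty.

    vd_slices: dict name -> VirtualDispatchSlice value
    queues:    dict name -> pending request count (or list)
    """
    candidates = [name for name, pending in queues.items() if pending]
    if not candidates:
        return None
    return min(candidates, key=lambda name: vd_slices[name])
```

For example, with accumulated values {Flow1: 1, Flow2: 1.5, Flow3: 1.66}, Flow1 is picked while it has pending requests; once it is empty, Flow2 becomes the target.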
Assuming that the weight of Flow1 is 10 (i.e., Weight(Flow1) = 10), the weight of Flow2 is 20 (i.e., Weight(Flow2) = 20), the weight of Flow3 is 30 (i.e., Weight(Flow3) = 30), the reference weight is 10 (i.e., DefaultWeight = 10), and the reference number threshold is 1 (i.e., DefaultDispatchSlice = 1), then the single-round scheduling number threshold of Flow1 is 1 (i.e., DispatchSlice(Flow1) = 1), that of Flow2 is 2 (i.e., DispatchSlice(Flow2) = 2), that of Flow3 is 3 (i.e., DispatchSlice(Flow3) = 3), and the total single-round scheduling number is 6 (i.e., 1+2+3 = 6). As shown in fig. 6:
Initially, VirtualDispatchSlice (i.e., VDSlice) of Flow1, Flow2, and Flow3 is initialized to 0. In the case where VDSlice of Flow1, Flow2, and Flow3 equals 0, any one of them that is not empty may be selected as the target flow (i.e., the target request stream). Assuming that Flow1 is selected as the target flow, IO request a1 of Flow1 in the queue may be dequeued and VDSlice of Flow1 updated to 1 (i.e., 0 + 10/10 × 1 = 1), as shown in fig. 6.
Thereafter, in the case where VDSlice of Flow1 equals 1 and VDSlice of Flow2 and Flow3 equals 0, either of Flow2 and Flow3 that is not empty may be selected as the target flow. Assuming that Flow3 is selected as the target flow, IO requests c1, c2, and c3 of Flow3 in the queue may be dequeued in sequence and VDSlice of Flow3 updated to 1 (i.e., 0 + 10/30 × 3 = 1), as shown in fig. 6.
Thereafter, in the case where VDSlice of Flow1 and Flow3 equals 1 and VDSlice of Flow2 equals 0, if Flow2 is a non-empty flow, then as shown in fig. 6, Flow2 may be selected as the target flow, IO requests b1 and b2 of Flow2 in the queue dequeued, and VDSlice of Flow2 updated to 1 (i.e., 0 + 10/20 × 2 = 1).
Thereafter, in the case where VDSlice of Flow1, Flow2, and Flow3 all equal 1, any one of them that is not empty may be selected as the target flow. Assuming that Flow2 is selected as the target flow, IO request b3 of Flow2 in the queue may be dequeued and VDSlice of Flow2 updated to 1.5 (i.e., 1 + 10/20 × 1 = 1.5).
Thereafter, in the case where VDSlice of Flow1 and Flow3 equals 1 and VDSlice of Flow2 equals 1.5, either of Flow1 and Flow3 that is not empty may be selected as the target flow. Assuming that Flow3 is selected as the target flow, IO requests c4 and c5 of Flow3 in the queue may be dequeued in sequence and VDSlice of Flow3 updated to about 1.66 (i.e., 1 + 10/30 × 2 ≈ 1.66), as shown in fig. 6.
Thereafter, in the case where VDSlice of Flow1 equals 1, VDSlice of Flow2 equals 1.5, and VDSlice of Flow3 equals 1.66, if Flow1 is a non-empty flow, Flow1 may be selected as the target flow, IO request a2 of Flow1 in the queue dequeued, and VDSlice of Flow1 updated to 2 (i.e., 1 + 10/10 × 1 = 2), as shown in fig. 6.
……
It should be noted that the scheduling process shown in fig. 6 is only an example. "Schedule Epoch1" in fig. 6 represents the first scheduling period and "Schedule Epoch2" the second; only the first 4 schedulings of the second scheduling period are shown in fig. 6.
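The VDSlice trajectory of the fig. 6 walk-through can be replayed numerically. The sequence of (stream, weight, consecutive dispatch count) steps below is taken directly from the narrative above; note that 1 + 10/30 × 2 rounds to 1.67, which the text truncates to 1.66.

```python
DEFAULT_WEIGHT = 10
# (stream, weight, consecutive dispatches), following the fig. 6 narrative
steps = [("Flow1", 10, 1), ("Flow3", 30, 3), ("Flow2", 20, 2),
         ("Flow2", 20, 1), ("Flow3", 30, 2), ("Flow1", 10, 1)]
vd = {"Flow1": 0.0, "Flow2": 0.0, "Flow3": 0.0}
trace = []
for name, weight, count in steps:
    vd[name] += DEFAULT_WEIGHT / weight * count   # formula (2)
    trace.append((name, round(vd[name], 2)))
print(trace)
# → [('Flow1', 1.0), ('Flow3', 1.0), ('Flow2', 1.0),
#    ('Flow2', 1.5), ('Flow3', 1.67), ('Flow1', 2.0)]
```

The replay confirms that the three streams' accumulated values stay close together even though Flow3 dispatches three times as many requests as Flow1, which is exactly the weighted fairness the accumulated value is designed to enforce.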
According to the scheduling method provided by the embodiment of the application, the abstract request stream serves as the scheduling granularity: request streams are abstracted from objects, the weight of a request stream is the weight of its corresponding object, a target request stream is selected according to the weights of the request streams, and the target request to be scheduled of the target request stream is then selected and taken out of the queue for execution.
Fig. 7 is a schematic structural diagram of a scheduling apparatus according to an embodiment of the present application; referring to fig. 7, the present embodiment provides an apparatus, which may perform the method described above, and in particular, the apparatus may include:
an obtaining module 71, configured to obtain a to-be-scheduled request of an object;
an enqueuing module 72, configured to put the request to be scheduled in a queue at a position corresponding to a request stream, where the request stream is obtained according to object abstraction, and the weight of the request stream is the weight of the corresponding object;
a dequeuing module 73, configured to select a target request stream from the request streams with the requests to be scheduled according to the weight of the request stream, select a target request to be scheduled of the target request stream from the queue, and take out the target request to be scheduled from the queue, so as to execute the target request to be scheduled.
In an embodiment, the dequeuing module 73 is specifically configured to select a target to-be-scheduled request of the target request stream from the queue according to a first-in first-out principle.
In an embodiment, the dequeuing module 73 is specifically configured to select, as a target request stream, a request stream with a smallest scheduling accumulation value from request streams with requests to be scheduled; the scheduling accumulated value of the request flow is positively correlated with the scheduling quantity of the request flow and negatively correlated with the weight of the request flow;
and the dequeue module 73 is further configured to update the scheduling accumulation value of the target request stream.
In an embodiment, the dequeuing module 73 is specifically configured to update the scheduling cumulative value of the target request stream according to the cumulative factor of the target request stream; the magnitude of the cumulative factor for a request stream is inversely related to the weight of the request stream.
In one embodiment, the cumulative factor of the request stream is equal to the ratio of the reference weight to the weight of the request stream.
In an embodiment, the dequeuing module 73 is specifically configured to, at each scheduling, select a target request stream from request streams in which requests to be scheduled exist, select a target request to be scheduled of the target request stream from the queue, and take out the request to be scheduled from the queue.
In an embodiment, the dequeuing module 73 is specifically configured to, during each scheduling, if the continuous scheduling number of the first target request stream corresponding to the previous scheduling in the current round of scheduling reaches the single-round scheduling number threshold of the first target request stream, or if there is no to-be-scheduled request in the first target request stream, select a second target request stream from the request streams in which there are to-be-scheduled requests, select a target to-be-scheduled request of the second target request stream from the queue, and take the to-be-scheduled request out of the queue, otherwise select a target to-be-scheduled request of the first target request stream from the queue, and take the to-be-scheduled request out of the queue.
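The per-dispatch decision the dequeuing module makes can be sketched as follows. This is an illustrative sketch under stated assumptions: the function, its arguments, and the `pick_new` callback (standing in for the selection of a second target request stream) are all invented for this example.

```python
def next_target(prev, consecutive, queues, thresholds, pick_new):
    """Decide which stream the next dispatch should come from.

    prev:        name of the first target request stream (or None)
    consecutive: how many times `prev` was dispatched consecutively
                 in the current round of scheduling
    queues:      dict name -> pending request count
    thresholds:  dict name -> single-round scheduling number threshold
    pick_new:    callback selecting a second target request stream
                 among streams that still have requests to be scheduled
    """
    if (prev is not None and queues.get(prev, 0) > 0
            and consecutive < thresholds[prev]):
        return prev          # keep scheduling the first target stream
    return pick_new()        # threshold reached or stream empty: switch
```

Keeping the previous target until its threshold is exhausted avoids re-running the stream selection on every dispatch, which is the point of tracking the continuous scheduling number.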
In one embodiment, the sum of the single-round scheduling number thresholds of all the request flows is equal to the total single-round scheduling number, and the single-round scheduling number threshold of the request flow is positively correlated with the weight of the request flow.
In one embodiment, the single-round scheduling number threshold for a request stream is equal to the ratio of the weight of the request stream to the reference weight, multiplied by the reference number threshold.
In one embodiment, the scheduling number of request streams in the single-round scheduling is equal to the single-round scheduling number threshold of the request streams; and/or in single-round scheduling, the scheduling number of the first request flow is smaller than the threshold value of the single-round scheduling number of the first request flow, and the scheduling number of the second request flow is larger than the threshold value of the single-round scheduling number of the second request flow.
The apparatus shown in fig. 7 can perform the method of the embodiment shown in fig. 4, and reference may be made to the related description of the embodiment shown in fig. 4 for a part of this embodiment that is not described in detail. The implementation process and technical effect of the technical solution refer to the description in the embodiment shown in fig. 4, and are not described herein again.
In one possible implementation, the structure of the apparatus shown in FIG. 7 may be implemented as a computer device. As shown in fig. 8, the computer apparatus may include: a processor 81 and a memory 82. Wherein the memory 82 is used for storing programs that support the computer device to execute the methods provided in the above-described embodiment shown in fig. 4, and the processor 81 is configured for executing the programs stored in the memory 82.
The program comprises one or more computer instructions which, when executed by the processor 81, are capable of performing the steps of:
acquiring a request to be scheduled of an object;
putting the request to be scheduled into a corresponding position of a corresponding request stream in a queue to serve as the request to be scheduled of the corresponding request stream, wherein the request stream is obtained according to object abstraction, and the weight of the request stream is the weight of the corresponding object;
according to the weight of the request flow, selecting a target request flow from the request flows with the requests to be scheduled, selecting a target request to be scheduled of the target request flow from the queue, and taking the target request to be scheduled out of the queue for executing the target request to be scheduled.
Optionally, the processor 81 is further configured to perform all or part of the steps of the foregoing embodiment shown in fig. 4.
The computer device may further include a communication interface 83 for the computer device to communicate with other devices or a communication network.
In addition, the present application provides a computer-readable storage medium, on which a computer program is stored, and when the computer program is executed, the method described in the above method embodiment is implemented.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement such a technique without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by adding a necessary general hardware platform, and of course can also be implemented by a combination of hardware and software. With this understanding in mind, the above technical solutions, or the part thereof contributing to the prior art, may be embodied in the form of a computer program product, which may be stored on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
The present application is described with reference to flowcharts and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, special-purpose computer, embedded processor, or other programmable apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable apparatus, create means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the block or blocks of the flowchart and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer-implemented process, such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include forms of volatile memory in a computer readable medium, Random Access Memory (RAM) and/or non-volatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
Finally, it should be noted that: the above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.

Claims (13)

1. A method of scheduling, comprising:
acquiring a to-be-scheduled request of an object;
putting the request to be scheduled into a corresponding position of a corresponding request stream in a queue to serve as the request to be scheduled of the corresponding request stream, wherein the request stream is obtained according to object abstraction, and the weight of the request stream is the weight of the corresponding object;
according to the weight of the request flow, selecting a target request flow from the request flows with the requests to be scheduled, selecting a target request to be scheduled of the target request flow from the queue, and taking the target request to be scheduled out of the queue for executing the target request to be scheduled.
2. The method of claim 1, wherein selecting the target to-be-scheduled request of the target request flow from the queue comprises: and selecting the target to-be-scheduled request of the target request stream from the queue according to a first-in first-out principle.
3. The method of claim 1, wherein selecting a target request stream from the request streams with the requests to be scheduled according to the weights of the request streams comprises:
selecting a request flow with the minimum scheduling accumulation value from the request flows with the requests to be scheduled as a target request flow; the scheduling accumulated value of the request flow is positively correlated with the scheduling quantity of the request flow and negatively correlated with the weight of the request flow;
the method further comprises the following steps: and updating the scheduling accumulated value of the target request flow.
4. The method of claim 3, wherein the updating the scheduling accumulation value of the target request stream comprises: updating the scheduling accumulated value of the target request stream according to the accumulated factor of the target request stream; the magnitude of the cumulative factor for a request stream is inversely related to the weight of the request stream.
5. The method of claim 4, wherein the cumulative factor for the request stream is equal to a ratio of the reference weight to the weight of the request stream.
6. The method according to claim 1, wherein the selecting a target request stream from request streams having requests to be scheduled according to the weight of the request stream, selecting a target request to be scheduled of the target request stream from the queue, and fetching the target request to be scheduled from the queue comprises:
at each scheduling, selecting a target request stream from the request streams with the requests to be scheduled, selecting the target request to be scheduled of the target request stream from the queue, and taking the request to be scheduled out of the queue.
7. The method according to claim 1, wherein the selecting a target request stream from request streams having requests to be scheduled according to the weight of the request stream, selecting a target request to be scheduled of the target request stream from the queue, and fetching the target request to be scheduled from the queue comprises:
and during each scheduling, if the continuous scheduling quantity of a first target request flow corresponding to the previous scheduling in the current scheduling reaches the single-round scheduling quantity threshold of the first target request flow, or the first target request flow does not have a request to be scheduled currently, selecting a second target request flow from the request flows with the requests to be scheduled, selecting a target request to be scheduled of the second target request flow from the queue, and taking out the request to be scheduled, otherwise, selecting a target request to be scheduled of the first target request flow from the queue, and taking out the request to be scheduled.
8. The method of claim 1, wherein a sum of the single-round scheduling number thresholds for all request flows is equal to a total single-round scheduling number, and the single-round scheduling number threshold for a request flow is positively correlated to the weight of the request flow.
9. The method of claim 8, wherein the single-round scheduling number threshold of a request stream is equal to the ratio of the weight of the request stream to a reference weight, multiplied by a reference number threshold.
10. The method of claim 8, wherein the scheduled number of request streams in a single round of scheduling is equal to a single round of scheduling number threshold of request streams; and/or in the single-round scheduling, the scheduling number of the first request flow is smaller than the single-round scheduling number threshold of the first request flow, and the scheduling number of the second request flow is larger than the single-round scheduling number threshold of the second request flow.
11. A scheduling apparatus, comprising:
the acquisition module is used for acquiring a to-be-scheduled request of an object;
the enqueuing module is used for putting the request to be scheduled into a corresponding position of a corresponding request stream in a queue to serve as the request to be scheduled of the corresponding request stream, the request stream is obtained according to object abstraction, and the weight of the request stream is the weight of the corresponding object;
and the dequeuing module is used for selecting a target request stream from the request streams with the requests to be scheduled according to the weight of the request stream, selecting the target requests to be scheduled of the target request stream from the queue, and taking the target requests to be scheduled out of the queue for executing the target requests to be scheduled.
12. A computer device, comprising: a memory, a processor; wherein the memory is to store one or more computer instructions, wherein the one or more computer instructions, when executed by the processor, implement the method of any of claims 1 to 10.
13. A computer-readable storage medium, having stored thereon a computer program which, when executed, implements the method of any one of claims 1 to 10.