CN106484536B - IO scheduling method, device and equipment - Google Patents

IO scheduling method, device and equipment

Info

Publication number
CN106484536B
CN106484536B (application CN201610873332.XA)
Authority
CN
China
Prior art keywords
priority
low
scheduling
condition
wake
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610873332.XA
Other languages
Chinese (zh)
Other versions
CN106484536A (en)
Inventor
李明
邱似峰
余利华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Netease Shuzhifan Technology Co ltd
Original Assignee
Hangzhou Langhe Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Langhe Technology Co Ltd filed Critical Hangzhou Langhe Technology Co Ltd
Priority to CN201610873332.XA priority Critical patent/CN106484536B/en
Publication of CN106484536A publication Critical patent/CN106484536A/en
Application granted granted Critical
Publication of CN106484536B publication Critical patent/CN106484536B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/5038Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Bus Control (AREA)
  • Computer And Data Communications (AREA)

Abstract

The embodiment of the invention provides an IO scheduling method. The method comprises the following steps. For high-priority traffic: receiving a high-priority IO execution request; in response to a first low-priority IO wake-up condition currently being met, triggering issuance of an execution pass for the low-priority IO and sending a low-priority IO wake-up signal; and scheduling the high-priority IO for execution. For low-priority traffic: monitoring the low-priority IO wake-up signal; in response to receiving the low-priority IO wake-up signal, triggering evaluation of the low-priority IO scheduling condition; and in response to the low-priority IO scheduling condition being satisfied, scheduling the low-priority IO for execution. In this way, the high-priority IO is scheduled in time and is not blocked by the low-priority IO, while the low-priority IO is not starved. In addition, the embodiment of the invention provides an IO scheduling device and equipment.

Description

IO scheduling method, device and equipment
Technical Field
The embodiment of the invention relates to the technical field of file systems, in particular to an IO scheduling method, device and equipment.
Background
This section is intended to provide a background or context to the embodiments of the invention that are recited in the claims. The description herein is not admitted to be prior art by inclusion in this section.
A Distributed File System is, as the name implies, a "distributed" file system: to its users it still behaves as a standard file system and provides the usual standard APIs, but it is not only responsible for managing local disks; it can also store file contents and directory structures on remote nodes connected to the local machine through a network.
Hadoop is used as an example for the following description. Hadoop is a distributed system infrastructure. It implements a distributed file system, the Hadoop Distributed File System (HDFS). HDFS is highly fault tolerant and can run on general-purpose, inexpensive machines. HDFS may periodically perform data checks in the background, for example a Cyclic Redundancy Check (CRC). If a check fails, data recovery is initiated to restore the data to the specified number of replicas. As can be seen, on a storage node in HDFS there are multiple types of service Input and Output (IO), mainly including:
Normal service: reading, writing, deleting files, etc.;
Data verification service: reading file data on disk to perform a CRC check, and determining from the check result whether the file data is damaged;
Data recovery service: data recovery in a multi-copy setup, that is, when one disk is damaged, the original file can be copied back from a replica of the damaged file.
Among the above types of service IO, the normal service IO is generally more important than the data verification service IO and the data recovery service IO. In order to ensure that the normal service runs properly, HDFS reduces the impact of the data verification service and the data recovery service on the normal service by limiting their IO operation speed. That is, assuming the system can perform 1000 IO operations per second, HDFS limits the IO operations of the various types of traffic to, for example, 200 IO operations per second for the data verification service or the data recovery service and 800 IO operations per second for the normal service. However, because the normal service is unaware of the data verification service or the data recovery service, it occupies system resources as if it could perform 1000 IO operations per second, not knowing that 200 of those 1000 operations per second have been allocated to the data verification service or the data recovery service. Thus, although HDFS limits the IO operation speed of the data verification service and the data recovery service, it cannot eliminate their impact on the normal service, and they still block the IO operations of the normal service.
Disclosure of Invention
Because normal service IO and other types of service IO in existing file systems differ in importance, the prior art reduces the influence of the other service IO on the normal service IO by limiting the operation speed of the other service IO.
Therefore, although existing file systems limit the IO operation speed of the other services, those services still block the IO operations of the normal service, which is a frustrating situation.
For this reason, an improved method for scheduling IO is highly needed, so that normal service IO can be executed normally without being blocked by other types of service IO.
In this context, embodiments of the present invention are intended to provide an IO scheduling method, apparatus, and device.
In a first aspect of an embodiment of the present invention, an IO scheduling method is provided, including:
receiving a high-priority IO execution request;
responding to the current condition that a first low-priority IO awakening condition is met, and triggering to issue an execution pass for the low-priority IO; and sending a low-priority IO wake-up signal; and
scheduling the high priority IO execution;
wherein the first low priority IO wake-up condition comprises: the number of high priority IOs currently scheduled meets a preset number condition, and there are currently low priority IOs waiting to be executed.
In a possible implementation manner, in the foregoing method provided in this embodiment of the present invention, after scheduling the high-priority IO to execute, the method further includes: triggering and sending a low-priority IO awakening signal in response to the second low-priority IO awakening condition being met currently; wherein the second low priority IO wake-up condition comprises: there are currently no high priority IOs waiting to be executed and there are currently low priority IOs waiting to be executed.
In a second aspect of the embodiments of the present invention, there is provided a second IO scheduling method, including:
monitoring a low-priority IO wake-up signal;
triggering the judgment of the low-priority IO scheduling condition in response to receiving the low-priority IO wake-up signal;
in response to the establishment of the low-priority IO scheduling condition, scheduling the low-priority IO to execute;
wherein the low priority IO scheduling condition includes: currently, there is no high-priority IO waiting for execution, or the number of currently executed passes is nonzero; and
the number of low priority IO currently being executed does not reach the preset parallel number.
In a possible implementation manner, the method provided by an embodiment of the present invention further includes: and triggering the judgment of the low-priority IO scheduling condition in response to receiving the low-priority IO execution request.
In a possible implementation manner, in the foregoing method provided in this embodiment of the present invention, after scheduling the low-priority IO to execute, the method further includes: and responding to the current existence of the low-priority IO waiting to be executed, and triggering to send a low-priority IO awakening signal.
In a possible implementation manner, the method provided by an embodiment of the present invention further includes: and responding to the fact that the low-priority IO scheduling condition is established, and if the number of the currently executed passes is larger than zero, reducing the number of the currently executed passes by one.
In a possible implementation manner, the method provided by an embodiment of the present invention further includes: and responding to the condition that the low-priority IO dispatching condition is not established, and continuing monitoring the low-priority IO wake-up signal.
In a third aspect of the embodiments of the present invention, there is provided an IO scheduling apparatus, including:
the receiving module is used for receiving a high-priority IO execution request;
the wake-up module is used for responding to the current wake-up condition meeting the first low-priority IO and triggering to issue an execution pass for the low-priority IO; and sending a low-priority IO wake-up signal;
the scheduling module is used for scheduling the high-priority IO execution; wherein the first low priority IO wake-up condition comprises: the number of high priority IOs currently scheduled meets a preset number condition, and there are currently low priority IOs waiting to be executed.
In a possible implementation manner, in the apparatus provided in this embodiment of the present invention, the wakeup module is further configured to trigger sending of a low priority IO wakeup signal in response to that a second low priority IO wakeup condition is currently met after the scheduling module schedules the high priority IO to execute; wherein the second low priority IO wake-up condition comprises: there are currently no high priority IOs waiting to be executed and there are currently low priority IOs waiting to be executed.
In a fourth aspect of the embodiments of the present invention, there is provided a second IO scheduling apparatus, including:
the monitoring module is used for monitoring the low-priority IO wake-up signal;
the judging module is used for responding to the received low-priority IO wake-up signal and triggering the judgment of the low-priority IO scheduling condition;
the scheduling module is used for responding to the establishment of the low-priority IO scheduling condition and scheduling the low-priority IO to be executed; wherein the low priority IO scheduling condition includes: currently, there is no high-priority IO waiting for execution, or the number of currently executed passes is nonzero; and the number of the currently executed low-priority IOs does not reach the preset parallel number.
In a possible implementation manner, in the apparatus provided in this embodiment of the present invention, the determining module is further configured to trigger the determination of the low-priority IO scheduling condition in response to receiving a low-priority IO execution request.
In a possible implementation manner, the above apparatus provided in an embodiment of the present invention further includes: a wake-up module; and the awakening module is used for triggering and sending a low-priority IO awakening signal in response to the current low-priority IO waiting to be executed after the scheduling module schedules the low-priority IO for execution.
In a possible implementation manner, the above apparatus provided in an embodiment of the present invention further includes: a statistical module; and the counting module is used for responding to the fact that the low-priority IO dispatching condition is established, and if the number of the currently executed pass is larger than zero, reducing the number of the currently executed pass by one.
In a possible implementation manner, in the apparatus provided in this embodiment of the present invention, the monitoring module is further configured to continue monitoring the low-priority IO wakeup signal in response to that the low-priority IO scheduling condition is not satisfied.
In a fifth aspect of the embodiments of the present invention, there is provided an IO scheduling apparatus, including:
the high-priority IO scheduling apparatus provided in any one of the above possible embodiments and the low-priority IO scheduling apparatus provided in any one of the above possible embodiments.
According to the IO scheduling method, apparatus, and device provided by the embodiments of the present invention, priorities can be assigned to the IOs of different types of services in a file system. When a high-priority IO execution request is received, the high-priority IO is directly scheduled for execution. Each time a preset number of high-priority IOs have been executed, if there is currently a low-priority IO waiting to be executed, an execution pass is issued for the low-priority IO and a low-priority IO wake-up signal is sent to trigger evaluation of the low-priority IO scheduling condition; if the low-priority IO scheduling condition is met, the low-priority IO is scheduled for execution. In this way, the high-priority IO is scheduled in time and the low-priority IO is not starved. In the prior art, HDFS reduces the influence of the data verification service and data recovery service IO on the normal service by limiting their operating speed. In the embodiments provided by the present application, by contrast, the normal service IO can be regarded as high-priority IO and the data verification service and data recovery service IO as low-priority IO: the normal service IO is executed directly when a normal service IO request is received, and when the number of scheduled normal service IOs meets the preset quantity condition and there is data verification or data recovery service IO waiting to be executed, an execution pass is issued for that IO, so that the data verification and data recovery services are not starved. Moreover, when no normal service IO is waiting to be executed, the data verification and data recovery services can occupy the system resources exclusively for their IO operations, which increases their execution speed and brings a better experience to the user.
Drawings
The above and other objects, features and advantages of exemplary embodiments of the present invention will become readily apparent from the following detailed description read in conjunction with the accompanying drawings. Several embodiments of the invention are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which:
FIGS. 1 a-1 b schematically illustrate application scenarios according to embodiments of the present invention;
FIG. 2 is a schematic flow chart illustrating an IO scheduling method for high priority IO according to an embodiment of the present invention;
FIG. 3 is a schematic flow chart illustrating an IO scheduling method for low priority IO according to an embodiment of the present invention;
FIG. 4 is a schematic flow chart illustrating an IO scheduling method for high priority IOs according to another embodiment of the present invention;
FIG. 5 is a flow chart of an IO scheduling method for low priority IO according to another embodiment of the present invention;
fig. 6 is a schematic structural diagram of one of IO scheduling apparatuses according to an embodiment of the present invention;
FIG. 7 is a schematic diagram illustrating a second IO scheduler according to an embodiment of the invention;
fig. 8 is a schematic structural diagram illustrating an IO scheduling device according to an embodiment of the present invention;
fig. 9 is a schematic structural diagram illustrating an IO scheduling device according to another embodiment of the present invention;
fig. 10 schematically shows a program product diagram of an IO scheduling device according to an embodiment of the present invention.
In the drawings, the same or corresponding reference numerals indicate the same or corresponding parts.
Detailed Description
The principles and spirit of the present invention will be described with reference to a number of exemplary embodiments. It is understood that these embodiments are given solely for the purpose of enabling those skilled in the art to better understand and to practice the invention, and are not intended to limit the scope of the invention in any way. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
As will be appreciated by one skilled in the art, embodiments of the present invention may be embodied as a system, apparatus, device, method, or computer program product. Accordingly, the present disclosure may be embodied in the form of: entirely hardware, entirely software (including firmware, resident software, micro-code, etc.), or a combination of hardware and software.
According to the embodiment of the invention, an IO scheduling method, an IO scheduling device and IO scheduling equipment are provided.
Moreover, any number of elements in the drawings are by way of example and not by way of limitation, and any nomenclature is used solely for differentiation and not by way of limitation.
The principles and spirit of the present invention are explained in detail below with reference to several representative embodiments of the invention.
Summary of The Invention
The inventor has found that the prior art reduces the influence of non-main service IO on main service IO by limiting the operation speed of the non-main service IO. Although the operation speed of the non-main service IO is limited to a certain extent, the operation of the main service IO is still blocked. The prior art lacks an improved IO scheduling method that allows the main service IO to execute normally without being blocked by the non-main service IO.
Therefore, the invention provides an IO scheduling method, an IO scheduling device and an IO scheduling device, wherein the IO scheduling method comprises the following steps: for high priority traffic: receiving a high-priority IO execution request; responding to the current condition that a first low-priority IO awakening condition is met, and triggering to issue an execution pass for the low-priority IO; and sending a low-priority IO wake-up signal; and scheduling the high priority IO execution; for low priority traffic: monitoring a low-priority IO wake-up signal; triggering the judgment of the low-priority IO scheduling condition in response to receiving the low-priority IO wake-up signal; in response to the establishment of the low-priority IO scheduling condition, scheduling the low-priority IO to execute; wherein the first low priority IO wake-up condition comprises: the number of the high-priority IO which is scheduled currently is integral multiple of the preset execution number, and the low-priority IO which is waiting to be executed currently exists; the low priority IO scheduling conditions include: currently, there is no high-priority IO waiting for execution, or the number of currently executed passes is nonzero; and the number of the currently executed low-priority IOs does not reach the preset parallel number.
Having described the general principles of the invention, various non-limiting embodiments of the invention are described in detail below.
Application scene overview
Referring to fig. 1a to 1b, in fig. 1a a file system runs on a storage device 101, and the storage device 101 can perform IO scheduling for a service according to the priority of the service executing the IO operation; in fig. 1b the storage devices 102 to 104 are connected via a network to form a distributed file system (this is only an example and does not limit the number of storage devices forming the file system), and the storage devices 102 to 104 can schedule the services running on them according to the priorities of those services. The network can be a local area network, a wide area network, the mobile internet, and the like; the storage devices 101 to 104 may be portable devices (e.g., mobile phones, tablet computers, notebook computers, etc.) or Personal Computers (PCs).
Exemplary method
The following describes a method for IO scheduling according to an exemplary embodiment of the present invention with reference to fig. 2 to 5 in conjunction with the application scenario of fig. 1. It should be noted that the above application scenarios are merely illustrated for the convenience of understanding the spirit and principles of the present invention, and the embodiments of the present invention are not limited in this respect. Rather, embodiments of the present invention may be applied to any scenario where applicable.
Fig. 2 is a flowchart of an embodiment of an IO scheduling method provided by the present invention; this embodiment performs IO scheduling for a high-priority service. The execution subject may be any of the storage devices 101 to 104 in the application scenario. The flow of the IO scheduling method according to this embodiment of the present invention is described below with reference to the drawing.
As shown in fig. 2, an IO scheduling method provided in the embodiment of the present invention includes, for a high priority service, the following steps:
s201, receiving a high-priority IO execution request.
S202, responding to the fact that a first low-priority IO awakening condition is met at present, and triggering to issue an execution pass for the low-priority IO; and sends a low priority IO wake-up signal.
Wherein the first low priority IO wake-up condition comprises: the number of high priority IOs currently scheduled meets a preset number condition, and there are currently low priority IOs waiting to be executed.
S203, scheduling the high-priority IO of S201 for execution.
Further, the execution subject of this embodiment may be the high-priority service itself running in the storage device 101 to the storage device 104, or a module dedicated to scheduling.
Step S202 and step S203 are not performed in strict order.
In this embodiment, when a high-priority IO execution request is received, the high-priority IO is directly scheduled for execution, the first low-priority IO wake-up condition is evaluated, and if the first low-priority IO wake-up condition is met, a wake-up operation for the low-priority IO is triggered. This both ensures that the high-priority IO is scheduled in time and prevents the low-priority IO from being starved.
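For illustration only, the following is a minimal Java sketch of this high-priority path (S201 to S203). The class and member names (HighPriorityEntrySketch, presetExecutionNumber, scheduledHighIO, lowIOWaiting, executionPasses) are assumptions introduced for the sketch rather than names used by the embodiment; the low-priority side, which waits on the signal and consumes the execution passes, is sketched after the function flows later in this description.

import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// Sketch of the high-priority path only; the fields lowIOWaiting and
// executionPasses would be updated by the low-priority side, not shown here.
class HighPriorityEntrySketch {
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition lowIOWakeUp = lock.newCondition(); // low-priority IO wake-up signal
    private final int presetExecutionNumber = 100; // assumed preset execution number
    private long scheduledHighIO;  // high-priority IOs scheduled so far
    private int lowIOWaiting;      // low-priority IOs currently waiting to be executed
    private int executionPasses;   // execution passes issued for low-priority IO

    // S201 to S203: handle one high-priority IO execution request.
    void onHighPriorityRequest(Runnable highPriorityIO) {
        lock.lock();
        try {
            scheduledHighIO++;
            // S202: first low-priority IO wake-up condition
            if (scheduledHighIO % presetExecutionNumber == 0 && lowIOWaiting > 0) {
                executionPasses++;    // issue an execution pass for the low-priority IO
                lowIOWakeUp.signal(); // send the low-priority IO wake-up signal
            }
        } finally {
            lock.unlock();
        }
        highPriorityIO.run();         // S203: schedule the high-priority IO for execution
    }
}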
Corresponding to the method shown in fig. 2, an IO scheduling method is further provided in an embodiment of the present invention for a low priority service waiting for a low priority IO wakeup signal. The execution subject may be the storage device 101 to the storage device 104 in the application scenario. A flow of an IO scheduling method according to an embodiment of the present invention is described below with reference to the drawing.
As shown in fig. 3, an IO scheduling method provided in an embodiment of the present invention for a low-priority service (the execution subject may be any of the storage devices 101 to 104 in the application scenario) includes the following steps:
s301, monitoring the low-priority IO wake-up signal.
S302, in response to receiving the low-priority IO wake-up signal, triggering judgment of the low-priority IO scheduling condition.
And S303, responding to the establishment of the low-priority IO scheduling condition, and scheduling the low-priority IO to execute.
Wherein, the low priority IO scheduling condition comprises: currently, there is no high-priority IO waiting for execution, or the number of currently executed passes is nonzero; and the number of the currently executed low-priority IOs does not reach the preset parallel number.
In the embodiment of the invention, an independent IO scheduling thread can be started to schedule the IO of the high-priority service and the low-priority service, or the high-priority service and the low-priority service can each schedule their own IO. In the case where the high-priority service and the low-priority service schedule their own IO, the high-priority service can execute an IO when it generates an IO execution request and decide whether to wake up the low-priority IO; the low-priority service can monitor the low-priority wake-up signal, evaluate the low-priority IO scheduling condition when the wake-up signal is received, and execute the low-priority IO according to the result. This implementation does not require a separate IO scheduling thread and has low resource overhead.
Fig. 4 is a flowchart illustrating an IO scheduling method according to another embodiment of the present invention, where the IO scheduling method according to another embodiment of the present invention is directed to IO scheduling for a high priority service. The execution subject may be the storage device 101 to the storage device 104 in the application scenario. A flow of an IO scheduling method according to an embodiment of the present invention is described below with reference to the drawing.
As shown in fig. 4, an IO scheduling method provided in the embodiment of the present invention includes, for a high priority service, the following steps:
s401, receiving a high-priority IO execution request.
S402, responding to the fact that a first low-priority IO awakening condition is met at present, and triggering to issue an execution pass for the low-priority IO; and sends a low priority IO wake-up signal.
Wherein the first low priority IO wake-up condition comprises: the number of high priority IOs currently scheduled meets a preset number condition, and there are currently low priority IOs waiting to be executed.
In this step, when a high-priority IO execution request is received, the first low-priority IO wake-up condition may be evaluated, that is, whether the number of currently scheduled high-priority IOs meets the preset number condition and whether there is currently a low-priority IO waiting to be executed. Preferably, the preset number condition may be that the number of scheduled high-priority IOs is an integer multiple of the preset execution number; that is, to prevent the low-priority IO from being starved, each time the preset number of high-priority IOs has been executed, if there is currently a low-priority IO waiting to be executed, the low-priority IO is woken up.
In this embodiment, the high priority IO may have multiple states: a scheduled state, an executing state, and a wait for execution state. Low priority IO is similarly described and not further described herein.
In this step, if the first low-priority IO wakeup condition is not satisfied currently, the execution pass is not issued for the low-priority IO, and the low-priority IO wakeup signal is not sent, and step S403 is directly executed.
And S403, scheduling the high-priority IO execution in S401.
The execution of step S402 and step S403 is not in strict order of precedence.
S404, responding to the fact that a second low-priority IO awakening condition is met currently, and triggering and sending a low-priority IO awakening signal.
Wherein the second low priority IO wake-up condition comprises: there are currently no high priority IOs waiting to be executed and there are currently low priority IOs waiting to be executed.
In this step, after the high-priority IO has been executed, the second low-priority IO wake-up condition may further be evaluated, and if the second low-priority IO wake-up condition is met, the low-priority IO wake-up signal is sent again. In this way, the corresponding conditions are evaluated both before and after the high-priority IO is executed in order to trigger wake-up of the low-priority IO. This increases the chance that the low-priority IO is scheduled while the high-priority IO is still executed in time, further prevents the low-priority IO from being starved, and makes the scheduling of high-priority and low-priority IO more reasonable.
In this step, if the second low-priority IO wakeup condition is not satisfied currently, the low-priority IO wakeup signal is not sent.
Corresponding to the method shown in fig. 4, another embodiment of the present invention further provides an IO scheduling method for a low-priority service. The execution subject may be the storage device 101 to the storage device 104 in the application scenario. A flow of an IO scheduling method according to an embodiment of the present invention is described below with reference to the drawing.
As shown in fig. 5, an IO scheduling method provided in an embodiment of the present invention for a low-priority service (the execution subject may be any of the storage devices 101 to 104 in the application scenario) includes the following steps:
s501, monitoring the low-priority IO wake-up signal.
S502, judging whether a low-priority IO wake-up signal is received, if so, entering a step S503; otherwise, go to step S501;
s503, judging whether a low-priority IO scheduling condition is met, and if so, entering a step S504; otherwise, the process proceeds to step S501.
In this step, in response to receiving the low-priority IO wakeup signal, the judgment of the low-priority IO scheduling condition is triggered.
Wherein, the low priority IO scheduling condition comprises: currently, there is no high-priority IO waiting for execution, or the number of currently executed passes is nonzero; and
the number of low priority IO currently being executed does not reach the preset parallel number.
As can be seen from the previous embodiments and this embodiment, the low-priority IO wake-up signal may be sent at multiple points, but after receiving the low-priority IO wake-up signal, execution of the low-priority IO is not triggered immediately; the low-priority IO scheduling condition is evaluated first. The low-priority IO scheduling condition involves the execution pass: when a high-priority IO execution request is received and the first low-priority IO wake-up condition is currently met, issuance of an execution pass for the low-priority IO is triggered. In other words, although the low-priority IO may be woken up many times, if it is to be scheduled while there are still high-priority IOs waiting to be executed, it must hold an execution pass; based on the number of execution passes, a low-priority IO can be executed only after a certain number of high-priority IOs have been executed, thereby ensuring normal execution of the high-priority IO.
In this embodiment, the parallel execution number of the low-priority IO may be set, and the parallel execution of the low-priority IO is limited within the preset parallel number, so as to prevent a large number of low-priority IO from being accumulated on the disk device and affecting the execution of the high-priority IO.
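For illustration only, the low-priority IO scheduling condition described above can be written as a single predicate, as in the following Java sketch; the class, method, and parameter names are assumptions for the sketch and simply mirror the quantities discussed above.

final class LowIOSchedulingCondition {
    private LowIOSchedulingCondition() {}

    // Returns true when a low-priority IO may be scheduled for execution.
    static boolean holds(int highIOPending,       // high-priority IOs currently waiting to be executed
                         int executionPasses,     // execution passes currently issued
                         int lowIOExecuting,      // low-priority IOs currently being executed
                         int maxParallelLowIO) {  // preset parallel number
        boolean notBlockedByHighIO = (highIOPending == 0) || (executionPasses != 0);
        boolean underParallelLimit = lowIOExecuting < maxParallelLowIO;
        return notBlockedByHighIO && underParallelLimit;
    }
}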
S504, if the number of the currently executed passes is larger than zero, the number of the currently executed passes is reduced by one.
And S505, scheduling the low-priority IO to execute.
Steps S504 to S505 are executed in response to the satisfaction of the low-priority IO scheduling condition. The two are not performed in strict sequence.
S506, in response to there currently being a low-priority IO waiting to be executed, triggering and sending a low-priority IO wake-up signal. The process then returns to step S501.
In this step, after the low priority IO is scheduled to be executed, the low priority IO wakeup signal is sent again by determining whether there is a low priority IO waiting to be executed currently.
S507, upon receiving a low-priority IO execution request, the process proceeds to step S503.
In this embodiment, two preconditions exist for judging the low-priority IO scheduling condition, one is to receive a low-priority IO wakeup signal, and the other is to trigger the judgment of the low-priority IO scheduling condition in response to receiving a low-priority IO execution request.
The steps are not executed in a strict order of execution.
In the embodiment of the present invention, unlike the prior art, low-priority IO is not rate-limited while it executes and can use all available resources, thereby speeding up its execution.
The following provides a program execution flow according to the IO scheduling method provided in an embodiment of the present invention; this program execution flow is only one implementation of the embodiment of the present invention and is not intended to limit the present invention:
Steps S401 to S402 are implemented by function 1, HighIOIn(), which is executed before the high-priority IO is executed;
Step S404 is implemented by function 2, HighIOOut(), which is executed after the high-priority IO is executed;
Steps S501 to S504 are implemented by function 3, LowIOIn(), which is executed before the low-priority IO is executed;
Step S506 is implemented by function 4, LowIOOut(), which is executed after the low-priority IO is executed.
Table one defines the parameters involved in the program flow. (The original table is provided as an image in the patent publication; the parameters used in the flows below are nrHighIOPending, sn, highIONumber, nrLowIOWaiting, ticket, nrLowIO, maxLowIONumber, and hasAdd.)
the following describes the implementation flow of the functions 1 to 4:
function 1: the HighIOIn () flow is as follows:
1. locking device
nrHighIOPending plus 1
Sn plus 1
4. If the result of the sn's complementation with highIONumber is 0, and nrLowIOWaiting
If not, the following steps are executed:
a) ticket plus 1
b) Signal sending Low priority Wake-Up Signal
5. Unlocking of
Note: function 1 implements the following behavior: after a high-priority IO execution request is received and before the high-priority IO is executed, the first low-priority IO wake-up condition is evaluated; if it holds, issuance of an execution pass for the low-priority IO is triggered and the low-priority IO wake-up signal is sent. Here, the preset number condition is that the number of currently scheduled high-priority IOs is an integer multiple of the preset execution number.
Function 2: the HighIOOut() flow is as follows:
1. Acquire the lock
2. Decrease nrHighIOPending by 1
3. If nrHighIOPending is 0 and nrLowIOWaiting is not 0, perform the following step:
a) Signal: send the low-priority wake-up signal
4. Release the lock
Note: function 2 implements the following behavior: after the high-priority IO has been executed, the second low-priority IO wake-up condition is evaluated and, if it holds, sending the low-priority IO wake-up signal is triggered.
Function 3: the LowIOIn() flow is as follows:
1. Acquire the lock
2. Increase nrLowIO by 1
3. hasAdd = False
4. If the low-priority IO scheduling condition, namely (nrHighIOPending is 0 or ticket is not 0) and (nrLowIO - nrLowIOWaiting) < maxLowIONumber, is not satisfied, perform steps a) and b); if the condition is satisfied, go to steps 5 to 7:
a) If hasAdd = False, then perform:
i. Increase nrLowIOWaiting by 1
ii. hasAdd = True
b) Wait for the low-priority wake-up signal, then return to step 4
5. If hasAdd = True, decrease nrLowIOWaiting by 1
6. If ticket is not 0, decrease ticket by 1
7. Release the lock
Note: function 3 waits for the low-priority wake-up signal, evaluates the low-priority IO scheduling condition when the signal is received, determines from the result whether to execute the low-priority IO, and continues waiting for the low-priority wake-up signal if the scheduling condition is not met.
Function 4: the LowIOOut() flow is as follows:
1. Acquire the lock
2. Decrease nrLowIO by 1
3. If nrLowIOWaiting is not 0, perform the following step:
a) Signal: send the low-priority wake-up signal
4. Release the lock
Note: function 4 sends a low-priority IO wake-up signal if, after the low-priority IO has been executed, there is still a low-priority IO waiting to be executed.
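For illustration only, the following Java sketch puts the four flows above together, using a mutual-exclusion lock and a condition variable in the role of the low-priority IO wake-up signal. The class name IOScheduler and its constructor are assumptions introduced for the sketch; the field names follow the parameters of Table one, and the logic follows the numbered steps above.

import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// Sketch of functions 1 to 4; not the authoritative implementation of the embodiment.
public class IOScheduler {
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition lowIOWakeUp = lock.newCondition(); // low-priority IO wake-up signal

    private final int highIONumber;   // preset execution number
    private final int maxLowIONumber; // preset parallel number

    private long sn;             // high-priority IO requests received so far
    private int nrHighIOPending; // high-priority IOs scheduled but not yet finished
    private int nrLowIO;         // low-priority IOs between lowIOIn() and lowIOOut()
    private int nrLowIOWaiting;  // low-priority IOs waiting for the wake-up signal
    private int ticket;          // execution passes issued for low-priority IO

    public IOScheduler(int highIONumber, int maxLowIONumber) {
        this.highIONumber = highIONumber;
        this.maxLowIONumber = maxLowIONumber;
    }

    // Function 1: called before a high-priority IO is executed (steps S401 to S402).
    public void highIOIn() {
        lock.lock();
        try {
            nrHighIOPending++;
            sn++;
            // first low-priority IO wake-up condition
            if (sn % highIONumber == 0 && nrLowIOWaiting != 0) {
                ticket++;             // issue an execution pass
                lowIOWakeUp.signal(); // send the low-priority IO wake-up signal
            }
        } finally {
            lock.unlock();
        }
    }

    // Function 2: called after a high-priority IO has been executed (step S404).
    public void highIOOut() {
        lock.lock();
        try {
            nrHighIOPending--;
            // second low-priority IO wake-up condition
            if (nrHighIOPending == 0 && nrLowIOWaiting != 0) {
                lowIOWakeUp.signal();
            }
        } finally {
            lock.unlock();
        }
    }

    // Function 3: called before a low-priority IO is executed (steps S501 to S504).
    public void lowIOIn() throws InterruptedException {
        lock.lock();
        try {
            nrLowIO++;
            boolean hasAdd = false;
            // wait until the low-priority IO scheduling condition is satisfied
            while (!((nrHighIOPending == 0 || ticket != 0)
                     && (nrLowIO - nrLowIOWaiting) < maxLowIONumber)) {
                if (!hasAdd) {
                    nrLowIOWaiting++;
                    hasAdd = true;
                }
                lowIOWakeUp.await(); // wait for the low-priority wake-up signal
            }
            if (hasAdd) {
                nrLowIOWaiting--;
            }
            if (ticket != 0) {
                ticket--; // consume one execution pass
            }
        } finally {
            lock.unlock();
        }
    }

    // Function 4: called after a low-priority IO has been executed (step S506).
    public void lowIOOut() {
        lock.lock();
        try {
            nrLowIO--;
            if (nrLowIOWaiting != 0) {
                lowIOWakeUp.signal();
            }
        } finally {
            lock.unlock();
        }
    }
}

Under this sketch, a high-priority service would call highIOIn() immediately before each of its IO operations and highIOOut() immediately after, while a low-priority service would bracket its IO operations with lowIOIn() and lowIOOut(); as noted above, no separate IO scheduling thread is required.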
Exemplary device
Having introduced the method of an exemplary embodiment of the present invention, one of the apparatuses for IO scheduling of an exemplary embodiment of the present invention is described next with reference to fig. 6.
Fig. 6 is a schematic structural diagram of an IO scheduling apparatus according to an embodiment of the present invention, and as shown in fig. 6, the IO scheduling apparatus may include the following modules:
a receiving module 601, configured to receive a high-priority IO execution request;
a wakeup module 602, configured to trigger issuing of an execution pass for a low priority IO in response to a first low priority IO wakeup condition being currently met; and sending a low-priority IO wake-up signal;
a scheduling module 603, configured to schedule the high-priority IO for execution; wherein the first low priority IO wake-up condition comprises: the number of high priority IOs currently scheduled meets a preset number condition, and there are currently low priority IOs waiting to be executed.
In some embodiments of this embodiment, optionally, the wakeup module 602 is further configured to trigger sending of a low priority IO wakeup signal in response to that a second low priority IO wakeup condition is currently met after the scheduling module 603 schedules the high priority IO to execute; wherein the second low priority IO wake-up condition comprises: there are currently no high priority IOs waiting to be executed and there are currently low priority IOs waiting to be executed.
Next, a second apparatus for IO scheduling according to an exemplary embodiment of the present invention will be described with reference to fig. 7.
Fig. 7 is a schematic structural diagram of a second IO scheduling apparatus according to an embodiment of the present invention, as shown in fig. 7, the second IO scheduling apparatus may include the following modules:
a monitoring module 701, configured to monitor a low-priority IO wake-up signal;
a judging module 702, configured to trigger a judgment on a low-priority IO scheduling condition in response to receiving a low-priority IO wake-up signal;
a scheduling module 703, configured to schedule the low-priority IO to execute in response to that the low-priority IO scheduling condition is satisfied; wherein the low priority IO scheduling condition includes: currently, there is no high-priority IO waiting for execution, or the number of currently executed passes is nonzero; and the number of the currently executed low-priority IOs does not reach the preset parallel number.
In some embodiments of this embodiment, optionally, the determining module 702 is further configured to trigger the determination of the low-priority IO scheduling condition in response to receiving a low-priority IO execution request.
In other embodiments of this embodiment, optionally, the apparatus further includes: a wake-up module 704;
the wakeup module 704 is configured to trigger sending of a low priority IO wakeup signal in response to a current low priority IO waiting to be executed after the scheduling module 703 schedules the low priority IO for execution.
In some further embodiments of this embodiment, optionally, the apparatus further includes: a statistics module 705;
the counting module 705 is configured to respond to that the low-priority IO scheduling condition is satisfied, and if the number of currently executed passes is greater than zero, reduce the number of currently executed passes by one.
In still other embodiments of this embodiment, optionally, the monitoring module 701 is further configured to continue monitoring the low-priority IO wakeup signal in response to that the low-priority IO scheduling condition is not satisfied.
Next, a device 80 for IO scheduling according to an exemplary embodiment of the present invention is described with reference to fig. 8. As shown in fig. 8, an IO scheduling apparatus 80 according to an embodiment of the present invention includes one of the IO scheduling devices 801 according to any of the foregoing embodiments, and a second IO scheduling device 802 according to any of the foregoing embodiments.
Having described the method and apparatus of exemplary embodiments of the present invention, an apparatus for IO scheduling according to yet another exemplary embodiment of the present invention is described next.
As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a system, method or program product. Thus, various aspects of the invention may be embodied in the form of: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.) or an embodiment combining hardware and software aspects that may all generally be referred to herein as a "circuit," "module," or "system."
In some possible embodiments, an apparatus for IO scheduling according to the present invention may include at least one processing unit, and at least one storage unit. Wherein the storage unit stores program code which, when executed by the processing unit, causes the processing unit to perform the steps for use in the IO scheduling method according to various exemplary embodiments of the present invention described in the above section "exemplary method" of this specification. For example, the processing unit may execute step S201 as shown in fig. 2, receive a high priority IO execution request; step S202, responding to the fact that a first low-priority IO awakening condition is met currently, and triggering to issue an execution pass for the low-priority IO; and sending a low-priority IO wake-up signal; step S203, high priority IO execution is scheduled in S201. And monitoring a low priority IO wake-up signal as shown in step S301 in fig. 3; step S302, responding to the received low-priority IO wake-up signal, and triggering the judgment of the low-priority IO scheduling condition; and step S303, responding to the establishment of the low-priority IO scheduling condition, and scheduling the low-priority IO to be executed.
The apparatus 90 for IO scheduling according to this embodiment of the present invention is described below with reference to fig. 9. The device 90 for IO scheduling shown in fig. 9 is only an example, and should not bring any limitation to the function and the scope of the embodiments of the present invention.
As shown in fig. 9, the device 90 for IO scheduling is in the form of a general purpose computing device. The components of device 90 for IO scheduling may include, but are not limited to: the at least one processing unit 901, the at least one memory unit 902, and the bus 903 connecting the various system components (including the processing unit 901 and the memory unit 902).
Bus 903 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, a processor, or a local bus using any of a variety of bus architectures.
The storage unit 902 may include readable media in the form of volatile memory, such as Random Access Memory (RAM)9021 and/or cache memory 9022, and may further include Read Only Memory (ROM) 9023.
Storage unit 902 may also include a program/utility 900 having a set (at least one) of program modules 9024, such program modules 9024 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
The device for IO scheduling 90 may also communicate with one or more external devices 904 (e.g., keyboard, pointing device, bluetooth device, etc.), with one or more devices that enable a user to interact with the device for IO scheduling 90, and/or with any devices (e.g., router, modem, etc.) that enable the device for IO scheduling 90 to communicate with one or more other computing devices. Such communication may occur via input/output (I/O) interfaces 905. Also, device for IO scheduling 90 may also communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the internet) through network adapter 906. As shown, network adapter 906 communicates with the other modules of device 90 for IO scheduling via bus 903. The device 90 for IO scheduling may also display the scheduling result to the user through the display unit 907. It should be appreciated that although not shown in the figures, other hardware and/or software modules may be used in conjunction with the device 90 for IO scheduling, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
Exemplary program product
In some possible embodiments, aspects of the present invention may also be implemented in the form of a program product including program code for causing a terminal device to perform steps in the method for IO scheduling according to various exemplary embodiments of the present invention described in the section "exemplary method" above in this specification when the program product is run on the terminal device, for example, the terminal device may perform step S201, receiving a high priority IO execution request as shown in fig. 2; step S202, responding to the fact that a first low-priority IO awakening condition is met currently, and triggering to issue an execution pass for the low-priority IO; and sending a low-priority IO wake-up signal; step S203, high priority IO execution is scheduled in S201. And monitoring a low priority IO wake-up signal as shown in step S301 in fig. 3; step S302, responding to the received low-priority IO wake-up signal, and triggering the judgment of the low-priority IO scheduling condition; and step S303, responding to the establishment of the low-priority IO scheduling condition, and scheduling the low-priority IO to be executed.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
As shown in fig. 10, a program product 100 for IO scheduling according to an embodiment of the present invention is described, which may employ a portable compact disc read only memory (CD-ROM) and include program code, and may be run on a terminal device, such as a personal computer. However, the program product of the present invention is not limited in this regard and, in the present document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device over any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., over the internet using an internet service provider).
It should be noted that although several means or sub-means of the IO scheduling device are mentioned in the above detailed description, this division is not mandatory. Indeed, the features and functions of two or more of the means described above may be embodied in one means, according to embodiments of the invention. Conversely, the features and functions of one means described above may be further divided so as to be embodied by a plurality of means.
Moreover, while the operations of the method of the invention are depicted in the drawings in a particular order, this does not require or imply that the operations must be performed in this particular order, or that all of the illustrated operations must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions.
While the spirit and principles of the invention have been described with reference to several particular embodiments, it is to be understood that the invention is not limited to the disclosed embodiments; nor does the division into aspects imply that features in these aspects cannot be combined to advantage, this division being for convenience of presentation only. The invention is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims.

Claims (11)

1. An Input Output (IO) scheduling method, comprising:
receiving a high-priority IO execution request;
responding to the current condition that a first low-priority IO awakening condition is met, and triggering to issue an execution pass for the low-priority IO; and sending a low-priority IO wake-up signal; and
scheduling the high priority IO execution;
wherein the first low priority IO wake-up condition comprises: the number of high-priority IOs which are scheduled currently meets a preset number condition, and low-priority IOs which are waiting to be executed currently exist;
after scheduling the high priority IO to execute, further comprising:
triggering and sending a low-priority IO awakening signal in response to the second low-priority IO awakening condition being met currently;
wherein the second low priority IO wake-up condition comprises: there are currently no high priority IOs waiting to be executed and there are currently low priority IOs waiting to be executed.
2. An Input Output (IO) scheduling method, comprising:
monitoring a low-priority IO wake-up signal;
triggering the judgment of the low-priority IO scheduling condition in response to receiving the low-priority IO wake-up signal;
in response to the establishment of the low-priority IO scheduling condition, scheduling the low-priority IO to execute;
wherein the low priority IO scheduling condition includes: currently, there is no high-priority IO waiting for execution, or the number of currently executed passes is nonzero; and
the number of the low-priority IO currently being executed does not reach the preset parallel number;
after scheduling the low-priority IO to execute, the method further includes:
and responding to the current existence of the low-priority IO waiting to be executed, and triggering to send a low-priority IO awakening signal.
3. The method of claim 2, further comprising:
and triggering the judgment of the low-priority IO scheduling condition in response to receiving the low-priority IO execution request.
4. The method of claim 2 or 3, further comprising:
and responding to the fact that the low-priority IO scheduling condition is established, and if the number of the currently executed passes is larger than zero, reducing the number of the currently executed passes by one.
5. The method of claim 2 or 3, further comprising:
and responding to the condition that the low-priority IO dispatching condition is not established, and continuing monitoring the low-priority IO wake-up signal.
6. An Input Output (IO) scheduling apparatus, comprising:
the receiving module is used for receiving a high-priority IO execution request;
the wake-up module is used for responding to the current wake-up condition meeting the first low-priority IO and triggering to issue an execution pass for the low-priority IO; and sending a low-priority IO wake-up signal;
the scheduling module is used for scheduling the high-priority IO execution; wherein the first low priority IO wake-up condition comprises: the number of high-priority IOs which are scheduled currently meets a preset number condition, and low-priority IOs which are waiting to be executed currently exist;
the wake-up module is further configured to:
after the scheduling module schedules the high-priority IO to execute, responding to the current condition of meeting a second low-priority IO awakening condition, and triggering and sending a low-priority IO awakening signal; wherein the second low priority IO wake-up condition comprises: there are currently no high priority IOs waiting to be executed and there are currently low priority IOs waiting to be executed.
7. An input/output (IO) scheduling apparatus, comprising:
a monitoring module, configured to monitor a low-priority IO wake-up signal;
a judging module, configured to, in response to receiving the low-priority IO wake-up signal, trigger a judgment of a low-priority IO scheduling condition; and
a scheduling module, configured to, in response to the low-priority IO scheduling condition being established, schedule a low-priority IO for execution; wherein the low-priority IO scheduling condition comprises: there is currently no high-priority IO waiting to be executed, or the number of current execution passes is nonzero; and the number of low-priority IOs currently being executed has not reached a preset parallel number;
wherein the apparatus further comprises a wake-up module,
the wake-up module being configured to, after the scheduling module schedules the low-priority IO for execution, in response to there currently being low-priority IOs waiting to be executed, trigger the sending of a low-priority IO wake-up signal.
8. The apparatus of claim 7, wherein the judging module is further configured to:
in response to receiving a low-priority IO execution request, trigger the judgment of the low-priority IO scheduling condition.
9. The apparatus of claim 7 or 8, further comprising a statistics module,
the statistics module being configured to, in response to the low-priority IO scheduling condition being established, if the number of current execution passes is greater than zero, reduce the number of current execution passes by one.
10. The apparatus of claim 7 or 8, wherein the monitoring module is further configured to:
in response to the low-priority IO scheduling condition not being established, continue to monitor the low-priority IO wake-up signal.
11. An input/output (IO) scheduling device, comprising: a high-priority IO scheduling apparatus according to claim 6 and a low-priority IO scheduling apparatus according to any one of claims 7 to 10.
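Claim 11 combines the two apparatuses. Under the same assumptions as the sketches above, the wiring could look like the following, with one shared state object and a background thread for the low-priority side; the setup and the example calls are hypothetical.

def build_io_scheduler():
    """Wire the two schedulers (claim 11) around one shared state; setup is illustrative only."""
    state = SharedState()
    high = HighPriorityScheduler(state)
    low = LowPrioritySchedulerExt(state)
    low_queue = []
    threading.Thread(target=low.worker, args=(low_queue,), daemon=True).start()
    return high, low, low_queue

# Example use (hypothetical): submit a low-priority IO, then a burst of high-priority IOs.
# high, low, q = build_io_scheduler()
# low.on_low_request(q, lambda: print("low-priority IO done"))
# for _ in range(PRESET_COUNT):
#     high.on_request(lambda: None)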
CN201610873332.XA 2016-09-30 2016-09-30 IO scheduling method, device and equipment Active CN106484536B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610873332.XA CN106484536B (en) 2016-09-30 2016-09-30 IO scheduling method, device and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610873332.XA CN106484536B (en) 2016-09-30 2016-09-30 IO scheduling method, device and equipment

Publications (2)

Publication Number Publication Date
CN106484536A CN106484536A (en) 2017-03-08
CN106484536B true CN106484536B (en) 2020-04-03

Family

ID=58268367

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610873332.XA Active CN106484536B (en) 2016-09-30 2016-09-30 IO scheduling method, device and equipment

Country Status (1)

Country Link
CN (1) CN106484536B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110955522B (en) * 2019-11-12 2022-10-14 华中科技大学 Resource management method and system for coordination performance isolation and data recovery optimization


Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103348314A (en) * 2010-09-15 2013-10-09 净睿存储股份有限公司 Scheduling of I/O in SSD environment

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Exploiting Parallelism in I/O Scheduling for Access Conflict Minimization in Flash-based Solid State Drives; Congming Gao et al.; 2014 30th Symposium on Mass Storage Systems and Technologies (MSST); 2014-06-06; pp. 1-11 *
Research and Design of a Disk Array System Based on Fibre Channel; Li Ming; China Master's Theses Full-text Database, Information Science and Technology; 2008-03-15 (No. 03); p. I137-14: abstract, section 4.5 *
Research on Dynamic Replica Consistency Strategies in Massive-Data Environments; Cen Wenfeng; China Master's Theses Full-text Database, Information Science and Technology; 2013-07-15 (No. 07); p. I138-1471: abstract, section 2.2 *

Also Published As

Publication number Publication date
CN106484536A (en) 2017-03-08

Similar Documents

Publication Publication Date Title
US11294714B2 (en) Method and apparatus for scheduling task, device and medium
US8756613B2 (en) Scalable, parallel processing of messages while enforcing custom sequencing criteria
CN111950988B (en) Distributed workflow scheduling method and device, storage medium and electronic equipment
EP2701074A1 (en) Method, device, and system for performing scheduling in multi-processor core system
CN103593234A (en) Adaptive process importance
CN110413822B (en) Offline image structured analysis method, device and system and storage medium
CN110851276A (en) Service request processing method, device, server and storage medium
CN112346834A (en) Database request processing method and device, electronic equipment and medium
CN110633046A (en) Storage method and device of distributed system, storage equipment and storage medium
US10318456B2 (en) Validation of correctness of interrupt triggers and delivery
CN114928579A (en) Data processing method and device, computer equipment and storage medium
CN116627333A (en) Log caching method and device, electronic equipment and computer readable storage medium
CN111666167A (en) Input event reading processing optimization method, nonvolatile memory and terminal equipment
CN112395097A (en) Message processing method, device, equipment and storage medium
US20100269119A1 (en) Event-based dynamic resource provisioning
CN113806097A (en) Data processing method and device, electronic equipment and storage medium
CN106484536B (en) IO scheduling method, device and equipment
CN116521639A (en) Log data processing method, electronic equipment and computer readable medium
CN116089049B (en) Asynchronous parallel I/O request-based process synchronous scheduling method, device and equipment
US20230096015A1 (en) Method, electronic deviice, and computer program product for task scheduling
CN117093335A (en) Task scheduling method and device for distributed storage system
US20220276901A1 (en) Batch processing management
CN112596761B (en) Service update release method and device and related equipment
CN111459653B (en) Cluster scheduling method, device and system and electronic equipment
US11210089B2 (en) Vector send operation for message-based communication

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 310052 Room 301, Building No. 599, Changhe Street Network Business Road, Binjiang District, Hangzhou City, Zhejiang Province

Patentee after: Hangzhou NetEase Shuzhifan Technology Co.,Ltd.

Address before: 310052 Room 301, Building No. 599, Changhe Street Network Business Road, Binjiang District, Hangzhou City, Zhejiang Province

Patentee before: HANGZHOU LANGHE TECHNOLOGY Ltd.