WO2018049899A1 - Queue management method and apparatus - Google Patents

Queue management method and apparatus

Info

Publication number: WO2018049899A1
Authority: WIPO (PCT)
Prior art keywords: nvme, queue, preset, equal, host
Application number: PCT/CN2017/092817
Other languages: English (en), French (fr)
Inventors: 陈俊杰, 周超, 许利霞
Original Assignee: 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Application filed by 华为技术有限公司
Publication of WO2018049899A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F13/00: Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F13/14: Handling requests for interconnection or transfer
    • G06F13/16: Handling requests for interconnection or transfer for access to memory bus
    • G06F13/1605: Handling requests for interconnection or transfer for access to memory bus based on arbitration
    • G06F13/1642: Handling requests for interconnection or transfer for access to memory bus based on arbitration with request queuing

Description

  • The present invention relates to the field of computer technologies, and in particular, to a queue management method and apparatus.
  • With the rapid development of cloud computing and big data, high-speed Peripheral Component Interconnect Express (PCIe) solid state disks (SSDs) adopting the Non-Volatile Memory Express (NVMe) protocol, referred to as NVMe SSDs, are favored in many application fields for their low latency, low power consumption, and high read/write speed. NVMe is an extensible host-side control interface.
  • Figure 1 shows the structure of the NVMe SSD hardware module. In Figure 1, the central processing unit (CPU) in the host uses its root port to expand multiple PCIe downstream ports through a PCIe switch. Each downstream port can be connected to a PCIe SSD disk that uses the NVMe protocol (an NVMe SSD), thereby expanding the storage space of the CPU. Inside each NVMe SSD there is an SSD controller for parsing the NVMe protocol and processing input/output (I/O).
  • In the existing NVMe protocol, NVMe queues are divided into two types: management queues and I/O queues. A management queue is mainly used for device control and management, such as the creation and deletion of I/O queues; an I/O queue is mainly used for disk access. The Submission Queue (SQ) and the Completion Queue (CQ) of an I/O queue do not have a fixed one-to-one correspondence; they can be flexibly configured when an SQ is created. One SQ can correspond to one CQ, or multiple SQs can correspond to one CQ.
  • At present, the NVMe driver has been integrated into the standard Linux kernel. To avoid locking queues for mutual exclusion between CPUs in a Symmetrical Multi-Processing (SMP) environment, and to improve the utilization of each CPU core's cache, the driver creates one I/O SQ and one I/O CQ on each CPU core (that is, one SQ corresponds to one CQ). Figure 2 shows the NVMe queue model in the NVMe driver. A host may include multiple CPUs, and a CPU may include multiple CPU cores (Figure 2 takes one host containing one CPU with three CPU cores as an example). In Figure 2, the information interaction between the host and the NVMe SSD controller is implemented based on the SQs and CQs in the NVMe queues: the host sends commands (such as I/O requests) through an SQ, and the NVMe SSD controller returns command responses (such as I/O request responses) through a CQ. That is, the I/O data on each CPU core is delivered to the SQ corresponding to that core, the NVMe SSD controller extracts the I/O data from all the SQs, and after processing is completed it writes the processing results into the corresponding CQs. Each CQ is bound to an interrupt; when a processing result is written to the CQ, an interrupt is generated to the host to remind the host to read the result. As a result, once the number of CPU cores in the host is fixed, the corresponding SQs and CQs are also fixed. If the I/O requests on some CPU cores become excessive, the fixed number of SQs and CQs is likely to cause the SQ queues to overflow, so the requests can only wait; at the same time, the NVMe SSD controller cannot exert its maximum concurrency capability, which ultimately degrades the overall data processing performance of the host.
  • The embodiments of the present invention provide a queue management method and apparatus, which can solve the problem in the prior art that the data processing performance of a server system using the NVMe protocol is degraded because the number of NVMe queues in the system is fixed.
  • According to a first aspect, an embodiment of the present invention provides a queue management method, applied to a server system adopting the Non-Volatile Memory Express (NVMe) protocol, where the server system includes a host and an NVMe solid state drive (SSD) controller. The queue management method may include: monitoring the occupancy rate of the NVMe queues of the host, where the NVMe queues include a submission queue SQ or a completion queue CQ, the SQ is used to deliver the I/O requests of the host to the NVMe SSD controller, and the CQ is used to feed back the responses of the NVMe SSD controller to those I/O requests to the host; and, if the occupancy rate of the NVMe queues is greater than or equal to a preset upper threshold, increasing the number of NVMe queues and processing the I/O data of the host through the added NVMe queues.
  • In a possible implementation, the method further includes: reducing the number of NVMe queues if the occupancy rate of the NVMe queues is less than or equal to a preset lower threshold.
  • In a possible implementation, the NVMe queues currently include M SQs, where M is an integer greater than 0, and the preset upper threshold is a first preset threshold. Increasing the number of NVMe queues and processing the I/O data of the host through the added NVMe queues if the occupancy rate of the NVMe queues is greater than or equal to the preset upper threshold includes: adding at least one SQ when the average occupancy rate of the M SQs is greater than or equal to the first preset threshold, and delivering the I/O requests of the host to the NVMe SSD controller through the added at least one SQ.
  • In a possible implementation, the method further includes: binding the added at least one SQ to an existing CQ.
  • In a possible implementation, the NVMe queues include M SQs, where M is an integer greater than 0, and the preset lower threshold is a second preset threshold. Reducing the number of NVMe queues when the occupancy rate of the NVMe queues is less than or equal to the preset lower threshold includes: deleting at least one SQ when the average occupancy rate of the M SQs is less than or equal to the second preset threshold.
  • In a possible implementation, before the at least one SQ is deleted, the method further includes: waiting for the occupancy rate of the at least one SQ to be deleted to drop to 0.
  • In a possible implementation, the method further includes: prohibiting the delivery of I/O requests through any one of the M SQs if the occupancy rate of that SQ is greater than or equal to a third preset threshold.
  • In a possible implementation, the method further includes: resuming the delivery of I/O requests through the SQ when the occupancy rate of an SQ that has been prohibited from delivering I/O requests is less than or equal to a fourth preset threshold.
  • In a possible implementation, the NVMe queues include N CQs, where N is an integer greater than 0, and the preset upper threshold is a fifth preset threshold. Increasing the number of NVMe queues and processing the I/O data of the host through the added NVMe queues if the occupancy rate of the NVMe queues is greater than or equal to the preset upper threshold includes: adding at least one CQ if the average occupancy rate of the N CQs is greater than or equal to the fifth preset threshold, and feeding back the responses of the I/O requests to the host through the added at least one CQ.
  • In a possible implementation, the NVMe queues include N CQs, where N is an integer greater than 0, and the preset lower threshold is a sixth preset threshold. Reducing the number of NVMe queues if the occupancy rate of the NVMe queues is less than or equal to the preset lower threshold includes: deleting at least one CQ if the average occupancy rate of the N CQs is less than or equal to the sixth preset threshold.
  • In a possible implementation, before the at least one CQ is deleted, the method further includes: waiting for the occupancy rate of the at least one CQ to be deleted to drop to 0.
  • In a possible implementation, the method further includes: deleting all the SQs bound to the deleted at least one CQ, and waiting for the occupancy rates of all those SQs to drop to 0 before deleting them.
  • In a possible implementation, the host currently includes M SQs and N CQs, each of the M SQs has established a corresponding binding relationship with one of the N CQs, M and N are both positive integers, and M is greater than or equal to N. The method further includes: receiving an I/O request of the host; selecting a target SQ from the M SQs according to a preset rule to deliver the I/O request; and feeding back the response of the I/O request through the CQ bound to the target SQ, where the preset rule includes a round-robin rule or a rule that gives priority to the queue with the lowest occupancy rate.
  • According to a second aspect, an embodiment of the present invention provides a queue management apparatus, applied to a server system adopting the NVMe protocol, where the server system includes a host and an NVMe SSD controller. The apparatus may include a storage unit and a processing unit, where the storage unit is configured to store program code, and the processing unit is configured to invoke the program code stored in the storage unit to perform the following steps: monitoring the occupancy rate of the NVMe queues of the host, where the NVMe queues include a submission queue SQ or a completion queue CQ, the SQ is used to deliver the I/O requests of the host to the NVMe SSD controller, and the CQ is used to feed back the responses of the NVMe SSD controller to those I/O requests to the host; and, if the occupancy rate of the NVMe queues is greater than or equal to a preset upper threshold, increasing the number of NVMe queues and processing the I/O data of the host through the added NVMe queues.
  • In a possible implementation, the processing unit is further configured to: reduce the number of NVMe queues when the occupancy rate of the NVMe queues is less than or equal to a preset lower threshold.
  • In a possible implementation, the NVMe queues currently include M SQs, where M is an integer greater than 0, and the preset upper threshold is a first preset threshold. The processing unit is configured to increase the number of NVMe queues and process the I/O data of the host through the added NVMe queues when the occupancy rate of the NVMe queues is greater than or equal to the preset upper threshold, specifically by: adding at least one SQ if the average occupancy rate of the M SQs is greater than or equal to the first preset threshold, and delivering the I/O requests of the host to the NVMe SSD controller through the added at least one SQ.
  • In a possible implementation, the processing unit is further configured to: bind the added at least one SQ to an existing CQ.
  • In a possible implementation, the NVMe queues include M SQs, where M is an integer greater than 0, and the preset lower threshold is a second preset threshold. The processing unit is configured to reduce the number of NVMe queues when the occupancy rate of the NVMe queues is less than or equal to the preset lower threshold, specifically by: deleting at least one SQ if the average occupancy rate of the M SQs is less than or equal to the second preset threshold.
  • In a possible implementation, before deleting the at least one SQ, the processing unit is further configured to: wait for the occupancy rate of the at least one SQ to be deleted to drop to 0.
  • In a possible implementation, the processing unit is further configured to: prohibit the delivery of I/O requests through any one of the M SQs when the occupancy rate of that SQ is greater than or equal to a third preset threshold.
  • In a possible implementation, the processing unit is further configured to: resume the delivery of I/O requests through the SQ when the occupancy rate of an SQ that has been prohibited from delivering I/O requests is less than or equal to a fourth preset threshold.
  • In a possible implementation, the NVMe queues include N CQs, where N is an integer greater than 0, and the preset upper threshold is a fifth preset threshold. The processing unit is configured to increase the number of NVMe queues and process the I/O data of the host through the added NVMe queues when the occupancy rate of the NVMe queues is greater than or equal to the preset upper threshold, specifically by: adding at least one CQ if the average occupancy rate of the N CQs is greater than or equal to the fifth preset threshold, and feeding back the responses of the I/O requests to the host through the added at least one CQ.
  • In a possible implementation, the NVMe queues include N CQs, where N is an integer greater than 0, and the preset lower threshold is a sixth preset threshold. The processing unit is configured to reduce the number of NVMe queues when the occupancy rate of the NVMe queues is less than or equal to the preset lower threshold, specifically by: deleting at least one CQ if the average occupancy rate of the N CQs is less than or equal to the sixth preset threshold.
  • In a possible implementation, before deleting the at least one CQ, the processing unit is further configured to: wait for the occupancy rate of the at least one CQ to be deleted to drop to 0.
  • In a possible implementation, the processing unit is further configured to: delete all the SQs bound to the deleted at least one CQ, and wait for the occupancy rates of all those SQs to drop to 0 before deleting them.
  • In a possible implementation, the apparatus further includes an input unit. The host currently includes M SQs and N CQs, each of the M SQs has established a corresponding binding relationship with one of the N CQs, M and N are both positive integers, and M is greater than or equal to N. The processing unit is further configured to: receive an I/O request of the host through the input unit; select a target SQ from the M SQs according to a preset rule to deliver the I/O request; and feed back the response of the I/O request through the CQ bound to the target SQ, where the preset rule includes a round-robin rule or a rule that gives priority to the queue with the lowest occupancy rate.
  • In the embodiments of the present invention, the occupancy rate of the NVMe queues of the host is monitored; if the occupancy rate of the NVMe queues is greater than or equal to the preset upper threshold, the number of NVMe queues is increased, and the I/O data of the host is processed through the added NVMe queues. This can solve the problem in the prior art that the data processing performance of a server system using the NVMe protocol is degraded because the number of NVMe queues in the system is fixed.
  • FIG. 1 is a structural diagram of an NVMe SSD hardware module according to an embodiment of the present invention;
  • FIG. 2 is a model diagram of the NVMe queues in an NVMe driver according to an embodiment of the present invention;
  • FIG. 3 is a software architecture diagram of the host according to an embodiment of the present invention;
  • FIG. 4 is a schematic flowchart of a queue management method according to an embodiment of the present invention;
  • FIG. 5 is a schematic flowchart of another queue management method according to an embodiment of the present invention;
  • FIG. 6 is a schematic diagram of the software structure of an SQ list according to an embodiment of the present invention;
  • FIG. 7 is a schematic diagram of the software structure of a CQ list according to an embodiment of the present invention;
  • FIG. 8 is a schematic structural diagram of a queue management apparatus according to an embodiment of the present invention;
  • FIG. 9 is a schematic structural diagram of another embodiment of a queue management apparatus according to an embodiment of the present invention;
  • FIG. 10 is a schematic structural diagram of another queue management apparatus according to an embodiment of the present invention.
  • The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Apparently, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
  • The terms "first", "second", "third", "fourth", and the like in the specification, the claims, and the accompanying drawings are used to distinguish different objects rather than to describe a specific order. In addition, the terms "include" and "have" and any variants thereof are intended to cover non-exclusive inclusion: a process, method, system, product, or device that includes a series of steps or units is not limited to the listed steps or units, but optionally further includes steps or units that are not listed, or other steps or units inherent to the process, method, product, or device.
  • References to "an embodiment" herein mean that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the present invention. The appearances of this phrase in various places in the specification do not necessarily all refer to the same embodiment, nor are they independent or alternative embodiments that are mutually exclusive with other embodiments. Those skilled in the art understand, explicitly and implicitly, that the embodiments described herein can be combined with other embodiments.
  • Some of the terms in this application are explained below to facilitate understanding by those skilled in the art.
  • The Submission Queue (SQ) and the Completion Queue (CQ) are both first-in-first-out (FIFO) pipes used to connect the host and the NVMe SSD controller. Each is a segment of memory, usually located in the Double Data Rate (DDR) synchronous dynamic random access memory space of the host. This memory is divided into memory blocks of equal length, each of which stores one fixed-size message (both the submission messages and the completion messages of NVMe are of fixed size). In use, the queue has a head pointer and a tail pointer; when the two are equal, the queue is empty. As new messages are added to the queue, the tail pointer keeps moving forward. Because the memory length is fixed, once a pointer moves past the last storage space of the queue's memory it must wrap back to the beginning, so the memory is actually used cyclically, as a ring. When the position immediately after the tail pointer is the head pointer, the queue can no longer receive new messages, that is, the queue is full.
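  • The head/tail arithmetic described above can be made concrete with a short C sketch. This is a minimal illustration of the stated FIFO semantics, not code from the patent; the struct nvme_ring type and the helper names are invented for illustration:

      #include <stdbool.h>
      #include <stdint.h>

      /* Hypothetical fixed-depth ring mirroring the SQ/CQ description:
       * head == tail means empty; because the slot just before the head
       * can never be written, (tail + 1) == head (modulo depth) means full. */
      struct nvme_ring {
          uint32_t head;   /* next slot the consumer reads  */
          uint32_t tail;   /* next slot the producer writes */
          uint32_t depth;  /* number of fixed-size slots    */
      };

      static bool ring_empty(const struct nvme_ring *q)
      {
          return q->head == q->tail;
      }

      static bool ring_full(const struct nvme_ring *q)
      {
          /* the pointer after the tail wraps back to the start of memory */
          return (q->tail + 1U) % q->depth == q->head;
      }

      /* Occupancy rate as used by the monitoring logic later in the text:
       * the fraction of slots currently holding unconsumed messages. */
      static double ring_occupancy(const struct nvme_ring *q)
      {
          uint32_t used = (q->tail + q->depth - q->head) % q->depth;
          return (double)used / (double)q->depth;
      }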
  • A Solid State Disk (SSD) may include, but is not limited to, various types of non-volatile memory, such as 3-dimensional cross-point memory, flash memory, ferroelectric memory, silicon-oxide-nitride-oxide-silicon (SONOS) memory, polymer memory, nanowire memory, ferroelectric transistor random access memory (FeTRAM or FeRAM), or electrically erasable programmable read-only memory (EEPROM).
  • The terms "server", "memory server", "remote server", and "cloud server" may be used interchangeably and may mean, for example but not limited to, a server that is "remotely accessed" (for example, via a network connection) by a "host computer", "host device", "host", "client device", "client", "network node", or "node".
  • "Multiple" means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may represent three cases: A exists alone, both A and B exist, and B exists alone. The character "/" generally indicates an "or" relationship between the objects before and after it.
  • To facilitate understanding of the embodiments of the present invention, the software architecture of the host on which the embodiments are based is described first. FIG. 3 is a software architecture diagram of the host according to an embodiment of the present invention. The software architecture includes multiple CPU cores 001 of the CPU, an NVMe driver 002, M SQs 003, and N CQs 004. In this architecture, the CPU cores of a CPU are regarded as a whole 001 (multiple CPU cores): an SQ or CQ is no longer bound to a single CPU core, there is no need for a one-to-one correspondence between SQs and CQs, and all the CPU cores of a CPU can share all the SQs and CQs of that CPU, so the I/O requests of the CPU cores can be spread more evenly. One SQ may correspond to one CQ, or multiple SQs may correspond to one CQ; this can be configured flexibly. The NVMe driver maintains an SQ list; when a service on a CPU core needs to deliver an I/O, the driver first obtains an SQ serial number from the SQ list and then delivers the I/O to the SQ with that serial number. In this process, the driver monitors the occupancy rates of individual SQs and CQs as well as the overall SQ and CQ occupancy, adds or deletes SQ or CQ queues when preset thresholds are reached, and maintains the correspondence between SQs and CQs. It can be understood that the queue management method in the present invention can be applied to the NVMe driver of the host in FIG. 3; the software architecture in FIG. 3 is only a preferred implementation in the embodiments of the present invention, and the software architecture in the embodiments includes, but is not limited to, the above.
  • It can be understood that the host provided by the present invention can be applied to a server system adopting the NVMe protocol. Such a server system may include the host to which the queue management method provided by the present invention is applied, multiple NVMe SSDs extended through the NVMe protocol, and the NVMe SSD controllers; the specific structure of the server system to which the queue management method is applied is not limited by the present invention. The host exchanges information with the NVMe SSD controller through the NVMe queues.
  • Referring to FIG. 4, FIG. 4 is a schematic flowchart of a queue management method according to an embodiment of the present invention. The queue management method in this embodiment of the present invention is described in detail below from the NVMe driver side of the host with reference to FIG. 4. As shown in FIG. 4, the method may include the following steps S401-S402.
  • Step S401: Monitor the occupancy rate of the NVMe queues of the host.
  • Specifically, the NVMe queues include a submission queue SQ or a completion queue CQ, where the SQ is used to deliver the I/O requests of the host to the NVMe SSD controller, and the CQ is used to feed back the responses of the NVMe SSD controller to those I/O requests to the host. In this embodiment of the present invention, the purpose of monitoring the NVMe queues is to facilitate subsequently adjusting the number of NVMe queues dynamically according to their occupancy rate. The reason the NVMe queues include an SQ or a CQ is that, given the prior-art defect that the numbers of SQs and CQs are fixed, adjusting either the SQs or the CQs can relieve excessive I/O pressure to a certain extent: the former solves the problem of excessive pressure from I/O requests, while the latter solves the problem of excessive pressure from the responses to I/O requests. If the two are combined, both problems can be solved at the same time. Therefore, the NVMe queues in this embodiment bring a considerable beneficial effect as long as they include at least one of SQs and CQs. Moreover, adjusting the SQs is not necessarily tied to adjusting the CQs; the two can be performed separately.
  • Step S402: If the occupancy rate of the NVMe queues is greater than or equal to a preset upper threshold, increase the number of NVMe queues, and process the I/O data of the host through the added NVMe queues.
  • Specifically, when the occupancy rate of the SQs in the NVMe queues of the host reaches the preset upper threshold, it indicates that the processing of I/O requests has reached its limit and the SQs need to be adjusted; this embodiment of the present invention dynamically adds SQ queues to increase the capacity to accommodate and process I/O requests. When the occupancy rate of the CQs in the NVMe queues of the host reaches the preset upper threshold, it indicates that the storage of the responses to I/O requests has reached its limit and the CQs need to be adjusted, that is, CQ queues are dynamically added to increase the capacity to accommodate and process the responses to I/O requests. It should be noted that the monitoring of the SQs and the monitoring of the CQs do not interfere with each other: only the SQs may be monitored, only the CQs may be monitored, or both may be monitored at the same time. The preset upper threshold here refers to a class of values rather than one specific value; that is, the value used for the SQs and the value used for the CQs may be the same or different and can be set flexibly, which is not specifically limited in the present invention.
  • In this embodiment of the present invention, there is no need to design a queue management scheme or manually adjust the queues for different usage scenarios; the NVMe queues are adjusted dynamically according to the I/O pressure of the system, automatically achieving optimal performance with minimal resource overhead. At the same time, the maximum concurrency capability of the NVMe SSD controller can be fully utilized, and a number of queues that the prior art cannot provide can be provided, improving performance.
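  • As a rough sketch of the threshold logic of steps S401-S402, reusing the hypothetical struct nvme_ring and ring_occupancy() helper from the earlier sketch (the create/delete hooks and the threshold values are assumptions for illustration, not the driver's actual interface):

      #define SQ_UPPER_THRESHOLD 0.80  /* "first preset threshold", e.g. 80% */
      #define SQ_LOWER_THRESHOLD 0.20  /* "second preset threshold", e.g. 20% */

      void nvme_create_sq(struct nvme_ring **sqs, int *m);  /* hypothetical hook */
      void nvme_delete_sq(struct nvme_ring **sqs, int *m);  /* hypothetical hook */

      /* One monitoring pass over the M current SQs; *m is assumed >= 1,
       * since the host always keeps at least one SQ. */
      void monitor_sq_occupancy(struct nvme_ring **sqs, int *m)
      {
          double sum = 0.0;
          for (int i = 0; i < *m; i++)
              sum += ring_occupancy(sqs[i]);
          double avg = sum / *m;

          if (avg >= SQ_UPPER_THRESHOLD) {
              /* average occupancy at the limit: add at least one SQ and
               * start delivering host I/O requests through it */
              nvme_create_sq(sqs, m);
          } else if (avg <= SQ_LOWER_THRESHOLD && *m > 1) {
              /* queues mostly idle: reclaim one, but keep at least one SQ
               * so it need not be re-created when I/O arrives again */
              nvme_delete_sq(sqs, m);
          }
      }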
  • Referring to FIG. 5, FIG. 5 is a schematic flowchart of another queue management method according to an embodiment of the present invention. The queue management method in this embodiment of the present invention is described in detail below from the NVMe driver side of the host with reference to FIG. 5. The method may include the following steps S501-S503.
  • Step S501: Monitor the occupancy rate of the NVMe queues of the host, where the NVMe queues include a submission queue SQ or a completion queue CQ. The SQ is used to deliver the I/O requests of the host to the NVMe SSD controller, and the CQ is used to feed back the responses of the NVMe SSD controller to those I/O requests to the host.
  • Step S502: If the occupancy rate of the NVMe queues is greater than or equal to a preset upper threshold, increase the number of NVMe queues, and process the I/O data of the host through the added NVMe queues.
  • Specifically, for steps S501 and S502, reference may be made to steps S401 and S402 in the embodiment provided in FIG. 4; the specific implementations are not described herein again.
  • For step S502, in a possible implementation, the NVMe queues currently include M SQs, where M is an integer greater than 0, and the preset upper threshold is a first preset threshold. Increasing the number of NVMe queues and processing the I/O data of the host through the added NVMe queues if the occupancy rate of the NVMe queues is greater than or equal to the preset upper threshold includes: adding at least one SQ when the average occupancy rate of the M SQs is greater than or equal to the first preset threshold, and delivering the I/O requests of the host to the NVMe SSD controller through the added at least one SQ. When the average occupancy rate of all the SQs of the host is greater than or equal to the first preset threshold (for example, 80%, that is, 80 of every 100 queue slots are occupied by I/O requests), the number of existing SQs is close to the limit of what the current number of I/O requests can bear. Therefore, at least one SQ needs to be created to relieve the current I/O pressure; how many to add can be adjusted flexibly according to the current number of I/O requests.
  • Further, the added at least one SQ is bound to an existing CQ. After an I/O request is delivered to an SQ, the ultimate purpose is for a CQ to feed the response of that I/O request back to the host; therefore, an SQ must be bound to some CQ before a complete cycle of an I/O request and its corresponding response can be performed. As for the specific binding principle, the binding may follow a round-robin principle or the principle that the CQ with the lowest current occupancy rate is bound first, which is not specifically limited in the present invention.
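  • A minimal sketch of the lowest-occupancy binding principle mentioned above, assuming a hypothetical cq_node bookkeeping type (the patent leaves the concrete structures to the FIG. 6 and FIG. 7 lists discussed later):

      /* Pick the CQ with the fewest bound SQs, matching the principle that
       * the least-loaded CQ is bound first. */
      struct cq_node {
          int cq_id;       /* CQ serial number               */
          int bound_sqs;   /* SQs currently bound to this CQ */
      };

      int bind_new_sq_to_cq(struct cq_node *cqs, int n)
      {
          int best = 0;
          for (int i = 1; i < n; i++)
              if (cqs[i].bound_sqs < cqs[best].bound_sqs)
                  best = i;
          cqs[best].bound_sqs++;   /* the new SQ now feeds back via this CQ */
          return cqs[best].cq_id;
      }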
  • In a possible implementation, the NVMe queues include M SQs, where M is an integer greater than 0, and the preset lower threshold is a second preset threshold. Reducing the number of NVMe queues when the occupancy rate of the NVMe queues is less than or equal to the preset lower threshold includes: deleting at least one SQ when the average occupancy rate of the M SQs is less than or equal to the second preset threshold. When the average occupancy rate of all the SQs of the host is less than or equal to the second preset threshold (for example, 20%, that is, only 20 of every 100 queue slots are occupied by I/O requests), the number of existing SQs may be severely mismatched with the number of I/O requests. Therefore, at least one SQ needs to be deleted to release memory and reduce the waste of system resources. It can be understood that if the host currently has only one SQ, it may be retained to avoid having to re-create it when an I/O request arrives.
  • Further, before deleting the at least one SQ, the method further includes: waiting for the occupancy rate of the at least one SQ to be deleted to drop to 0. It can be understood that, before deleting an SQ, it is necessary to ensure that the I/O requests in that SQ have been processed, that is, the current occupancy rate of the SQ is 0; otherwise, I/O requests that have not yet been processed in the SQ would be deleted by mistake, those I/O requests would be lost, and a system error would result.
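  • The drain-before-delete rule can be sketched as follows; the sq_node handle and its helpers are hypothetical, and a real driver would sleep or reschedule instead of spinning:

      struct sq_node;                                        /* opaque SQ handle */
      void sq_disable(struct sq_node *sq);                   /* stop new deliveries  */
      unsigned int sq_outstanding(const struct sq_node *sq); /* unprocessed requests */
      void sq_free(struct sq_node *sq);                      /* release the memory   */

      /* Delete an SQ only after its occupancy has dropped to 0, so no
       * pending I/O request is lost. */
      void delete_sq_safely(struct sq_node *sq)
      {
          sq_disable(sq);                 /* no new I/O enters this SQ */
          while (sq_outstanding(sq) > 0)
              ;                           /* wait out the in-flight requests */
          sq_free(sq);
      }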
  • In a possible implementation, if the occupancy rate of any one of the M SQs is greater than or equal to a third preset threshold, the delivery of I/O requests through that SQ is prohibited. During monitoring, the occupancy rate of each individual SQ may also be monitored to ensure that the I/O requests on every SQ are evenly distributed, avoiding scenarios in which the average occupancy rate of all SQs is low but the occupancy rate of one or more SQs is extremely high. That is, by monitoring the occupancy rate of each SQ and keeping it within the range of the third preset threshold, when an SQ is equal to or exceeds the third preset threshold, the delivery of I/O requests through that SQ is stopped or prohibited, letting the SQ "digest" for a period of time. Further, when the occupancy rate of an SQ that has been prohibited from delivering I/O requests drops to less than or equal to a fourth preset threshold, the delivery of I/O requests through that SQ is resumed. That is, when the overloaded SQ has digested its I/O requests and returned to a normal occupancy rate (less than or equal to the fourth preset threshold), delivery through it is resumed. In this way, the enabling and disabling of SQ queues are flexibly controlled.
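  • The third and fourth preset thresholds form a simple hysteresis band around each SQ, so a queue does not flap between enabled and disabled. A sketch, with example threshold values that the text does not specify and an invented sq_state type:

      #include <stdbool.h>

      #define THIRD_THRESHOLD  0.90  /* example value only, not given in the text */
      #define FOURTH_THRESHOLD 0.50  /* example value only, not given in the text */

      struct sq_state {
          double occupancy;  /* current fill fraction of this SQ   */
          bool   enabled;    /* may new I/O requests be delivered? */
      };

      void update_sq_admission(struct sq_state *sq)
      {
          if (sq->enabled && sq->occupancy >= THIRD_THRESHOLD)
              sq->enabled = false;   /* prohibit delivery; let the SQ "digest" */
          else if (!sq->enabled && sq->occupancy <= FOURTH_THRESHOLD)
              sq->enabled = true;    /* back to normal occupancy; resume delivery */
      }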
  • Step S503: If the occupancy rate of the NVMe queues is less than or equal to a preset lower threshold, reduce the number of NVMe queues.
  • In a possible implementation, the NVMe queues include N CQs, where N is an integer greater than 0, and the preset upper threshold is a fifth preset threshold. Increasing the number of NVMe queues and processing the I/O data of the host through the added NVMe queues if the occupancy rate of the NVMe queues is greater than or equal to the preset upper threshold includes: adding at least one CQ when the average occupancy rate of the N CQs is greater than or equal to the fifth preset threshold, and feeding back the responses of the I/O requests to the host through the added at least one CQ. When the average occupancy rate of all the CQs of the host is greater than or equal to the fifth preset threshold (for example, 80%, that is, 80 of every 100 CQ slots are occupied by responses to I/O requests), the number of existing CQs is close to the limit of what the current number of I/O request responses can bear. Therefore, at least one CQ needs to be created to relieve the pressure of storing the responses of the current I/O requests; how many to add can be adjusted flexibly according to the current number of responses.
  • It can be understood that although the preset threshold terms used in this embodiment look literally the same as those in the earlier steps, a preset threshold in the present invention is only a concept and does not refer to one specific value; a specific value is assigned only when the threshold is attached to an actual queue, and that value can be set flexibly according to the actual situation, which is not specifically limited in the present invention.
  • In a possible implementation, the NVMe queues include N CQs, where N is an integer greater than 0, and the preset lower threshold is a sixth preset threshold. Reducing the number of NVMe queues when the occupancy rate of the NVMe queues is less than or equal to the preset lower threshold includes: deleting at least one CQ when the average occupancy rate of the N CQs is less than or equal to the sixth preset threshold. When the average occupancy rate of all the CQs of the host is less than or equal to the sixth preset threshold (for example, 20%, that is, only 20 of every 100 CQ slots are occupied by responses to I/O requests), the number of existing CQs may be severely mismatched with the number of I/O request responses. Therefore, at least one CQ needs to be deleted to release resources, including memory space and interrupts. It can be understood that if the host currently has only one CQ, it may be retained to avoid having to re-create it when a response arrives. Further, before deleting the at least one CQ, the method further includes: waiting for the occupancy rate of the at least one CQ to be deleted to drop to 0; that is, before a CQ is deleted, it is necessary to wait until the responses of all the I/O requests in that CQ have been fetched (read) by the corresponding CPU, otherwise the responses of I/O requests would be lost, resulting in a system error. Still further, all the SQs bound to the deleted at least one CQ are deleted, and the occupancy rates of all those SQs must drop to 0 before they are deleted. Because every SQ must be bound to some CQ, deleting a CQ inevitably affects the SQs bound to it; the occupancy rates of those SQs must therefore also drop to 0 before the CQ can be deleted, otherwise some SQs would be left with no CQ to feed back the responses of their I/O requests, ultimately causing a system error.
  • In a possible implementation, the host currently includes M SQs and N CQs, and each of the M SQs has established a corresponding binding relationship with one of the N CQs, where M and N are both positive integers and M is greater than or equal to N. The method further includes: receiving an I/O request of the host; selecting a target SQ from the M SQs according to a preset rule to deliver the I/O request; and feeding back the response of the I/O request through the CQ bound to the target SQ, where the preset rule includes a round-robin (polling) rule or a rule that gives priority to the queue with the lowest occupancy rate.
  • In a specific implementation, the host may store an SQ list and a CQ list, and the host driver then allocates and regulates the SQs and CQs according to these lists together with the related rules. The specific form of the SQ list may be as shown in FIG. 6, which is a schematic diagram of the software structure of an SQ list according to an embodiment of the present invention. The SQ list is essentially a singly linked circular list; each node stores an SQ serial number, whether that SQ is enabled, and the serial number of the CQ associated with that SQ. Two global pointers are also involved: one indicates the node after which a new node is added and points to the newly added node once it has been inserted; the other indicates the node to which the next I/O should be sent and automatically moves to the next node after being read. The main function of the SQ list is to select an SQ for the service on each CPU core to deliver its I/O, ensuring the uniformity of SQ queue usage.
  • FIG. 7 is a schematic diagram of the software structure of a CQ list according to an embodiment of the present invention. The CQ list is essentially a two-dimensional singly linked list; each node stores a CQ serial number, the number of SQs currently associated with that CQ, a pointer to the next CQ, and a pointer into the SQ list. When an SQ is added, it is attached to the CQ with the fewest associated SQs; as shown in FIG. 6, the new SQ is associated with CQ1. The main function of this relationship list is to maintain the correspondence between SQs and CQs, ensuring the uniformity of CQ queue usage.
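  • Rendered as C declarations, the FIG. 6 and FIG. 7 structures might look as follows; this is a sketch derived only from the description above, and all field names are invented:

      #include <stdbool.h>

      /* FIG. 6: singly linked circular SQ list. Each node records an SQ's
       * serial number, whether it is enabled, and its associated CQ. */
      struct sq_list_node {
          int  sq_id;                   /* SQ serial number                */
          bool enabled;                 /* is delivery through this SQ on? */
          int  bound_cq_id;             /* CQ this SQ is bound to          */
          struct sq_list_node *next;    /* circular: last node -> first    */
      };

      /* FIG. 7: two-dimensional singly linked CQ list. Each node records a
       * CQ's serial number, how many SQs it currently serves, the next CQ,
       * and the head of its own SQ sublist. */
      struct cq_list_node {
          int  cq_id;                   /* CQ serial number         */
          int  assoc_sq_count;          /* SQs currently associated */
          struct cq_list_node *next;    /* next CQ in the list      */
          struct sq_list_node *sq_head; /* pointer into the SQ list */
      };

      /* The two global pointers maintained over the SQ list, as described:
       * one marks where a new node is inserted, one marks which SQ the
       * next I/O should be sent to (advanced after each read). */
      struct sq_list_node *insert_pos;  /* points at the most recently added node */
      struct sq_list_node *next_io;     /* round-robin cursor for I/O delivery    */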
  • In addition to the beneficial effects of the embodiment corresponding to steps S401 and S402 above, this embodiment of the present invention also solves the performance degradation caused by uneven I/O across different CPU cores: the I/O on each CPU core is spread across all the queues, achieving an even I/O distribution. At the same time, this embodiment is suitable for any scenario; it applies both to low-pressure usage scenarios and to high-pressure usage scenarios with multiple CPU cores and multiple NVMe disks, while maintaining excellent performance.
  • An embodiment of the present invention further provides a queue management apparatus 10. FIG. 8 is a schematic structural diagram of a queue management apparatus according to an embodiment of the present invention, and the structure of the apparatus 10 is described in detail below with reference to FIG. 8. The apparatus 10 may include a monitoring module 101 and a first management module 102, where:
  • the monitoring module 101 is configured to monitor the occupancy rate of the NVMe queues of the host, where the NVMe queues include a submission queue SQ or a completion queue CQ, the SQ is used to deliver the I/O requests of the host to the NVMe SSD controller, and the CQ is used to feed back the responses of the NVMe SSD controller to those I/O requests to the host; and
  • the first management module 102 is configured to: when the occupancy rate of the NVMe queues is greater than or equal to a preset upper threshold, increase the number of NVMe queues, and process the I/O data of the host through the added NVMe queues.
  • Specifically, as shown in FIG. 9, which is a schematic structural diagram of another embodiment of the queue management apparatus provided by the present invention, the apparatus 10 may further include a second management module 103, where the second management module 103 is configured to reduce the number of NVMe queues when the occupancy rate of the NVMe queues is less than or equal to a preset lower threshold.
  • Further, the NVMe queues currently include M SQs, where M is an integer greater than 0, and the preset upper threshold is a first preset threshold. The first management module 102 is specifically configured to: if the average occupancy rate of the M SQs is greater than or equal to the first preset threshold, add at least one SQ, and deliver the I/O requests of the host to the NVMe SSD controller through the added at least one SQ.
  • Further, the apparatus 10 may further include a third management module 104, where the third management module 104 is configured to bind the added at least one SQ to an existing CQ.
  • Further, the NVMe queues include M SQs, where M is an integer greater than 0, and the preset lower threshold is a second preset threshold. The second management module 103 is specifically configured to: if the average occupancy rate of the M SQs is less than or equal to the second preset threshold, delete at least one SQ.
  • Further, the second management module 103 is specifically configured to: when the average occupancy rate of the M SQs is less than or equal to the second preset threshold, wait for the occupancy rate of the at least one SQ to be deleted to drop to 0, and then delete the at least one SQ.
  • Further, the apparatus 10 may further include a fourth management module 105, where the fourth management module 105 is configured to prohibit the delivery of I/O requests through an SQ if the occupancy rate of that SQ among the M SQs is greater than or equal to a third preset threshold.
  • Further, the apparatus 10 may further include a fifth management module 106, where the fifth management module 106 is configured to resume the delivery of I/O requests through the SQ when the occupancy rate of an SQ that has been prohibited from delivering I/O requests is less than or equal to a fourth preset threshold.
  • Further, the NVMe queues include N CQs, where N is an integer greater than 0, and the preset upper threshold is a fifth preset threshold. The first management module 102 is specifically configured to: if the average occupancy rate of the N CQs is greater than or equal to the fifth preset threshold, add at least one CQ, and feed back the responses of the I/O requests to the host through the added at least one CQ.
  • Further, the NVMe queues include N CQs, where N is an integer greater than 0, and the preset lower threshold is a sixth preset threshold. The second management module 103 is specifically configured to: if the average occupancy rate of the N CQs is less than or equal to the sixth preset threshold, delete at least one CQ.
  • Further, the second management module 103 is specifically configured to: wait for the occupancy rate of the at least one CQ to be deleted to drop to 0, and then delete the at least one CQ.
  • Further, the apparatus 10 may further include a sixth management module 107, where the sixth management module 107 is configured to delete all the SQs bound to the deleted at least one CQ, and wait for the occupancy rates of all those SQs to drop to 0 before deleting them.
  • Further, the host currently includes M SQs and N CQs, each of the M SQs has established a corresponding binding relationship with one of the N CQs, M and N are both positive integers, and M is greater than or equal to N. As shown in FIG. 9, the apparatus 10 may further include a seventh management module 108, where the seventh management module 108 is configured to receive an I/O request of the host, select a target SQ from the M SQs according to a preset rule to deliver the I/O request, and feed back the response of the I/O request through the CQ bound to the target SQ, where the preset rule includes a round-robin rule or a rule that gives priority to the queue with the lowest occupancy rate.
  • An embodiment of the present invention further provides another queue management apparatus 20, as shown in FIG. 10, which is applied to a server system adopting the NVMe protocol, where the server system includes a host and an NVMe SSD controller. In some embodiments of the invention, the queue management apparatus 20 may include an input unit 201, an output unit 202, a storage unit 203, and a processing unit 204, where a bus is used to implement the communication connections between these components. The input unit 201 may specifically be a touch panel of a terminal, including a touch screen, for detecting operation instructions on the touch panel of the terminal; the output unit 202 may include a display of the terminal for outputting and displaying images or data; the storage unit 203 may be a high-speed RAM memory or a non-volatile memory, such as at least one disk memory, and optionally may be at least one storage device located away from the processing unit 204. As shown in FIG. 10, as a computer storage medium, the storage unit 203 may contain an operating system, a network communication module, a user interface module, and a data processing program.
  • The storage unit 203 is configured to store program code, and the processing unit 204 is configured to invoke the program code stored in the storage unit 203 to perform the following steps: monitoring the occupancy rate of the NVMe queues of the host, where the NVMe queues include a submission queue SQ or a completion queue CQ, the SQ is used to deliver the I/O requests of the host to the NVMe SSD controller, and the CQ is used to feed back the responses of the NVMe SSD controller to those I/O requests to the host; and, if the occupancy rate of the NVMe queues is greater than or equal to a preset upper threshold, increasing the number of NVMe queues and processing the I/O data of the host through the added NVMe queues.
  • The processing unit 204 is further configured to: reduce the number of NVMe queues if the occupancy rate of the NVMe queues is less than or equal to a preset lower threshold.
  • Further, the NVMe queues currently include M SQs, where M is an integer greater than 0, and the preset upper threshold is a first preset threshold. The processing unit 204 is configured to increase the number of NVMe queues and process the I/O data of the host through the added NVMe queues when the occupancy rate of the NVMe queues is greater than or equal to the preset upper threshold, specifically by: adding at least one SQ if the average occupancy rate of the M SQs is greater than or equal to the first preset threshold, and delivering the I/O requests of the host to the NVMe SSD controller through the added at least one SQ.
  • The processing unit 204 is further configured to: bind the added at least one SQ to an existing CQ.
  • Further, the NVMe queues include M SQs, where M is an integer greater than 0, and the preset lower threshold is a second preset threshold. The processing unit 204 is configured to reduce the number of NVMe queues when the occupancy rate of the NVMe queues is less than or equal to the preset lower threshold, specifically by: deleting at least one SQ if the average occupancy rate of the M SQs is less than or equal to the second preset threshold.
  • Before deleting the at least one SQ, the processing unit 204 is further configured to: wait for the occupancy rate of the at least one SQ to be deleted to drop to 0.
  • The processing unit 204 is further configured to: prohibit the delivery of I/O requests through an SQ if the occupancy rate of that SQ among the M SQs is greater than or equal to a third preset threshold.
  • The processing unit 204 is further configured to: resume the delivery of I/O requests through the SQ when the occupancy rate of an SQ that has been prohibited from delivering I/O requests is less than or equal to a fourth preset threshold.
  • Further, the NVMe queues include N CQs, where N is an integer greater than 0, and the preset upper threshold is a fifth preset threshold. The processing unit 204 is configured to increase the number of NVMe queues and process the I/O data of the host through the added NVMe queues when the occupancy rate of the NVMe queues is greater than or equal to the preset upper threshold, specifically by: adding at least one CQ if the average occupancy rate of the N CQs is greater than or equal to the fifth preset threshold, and feeding back the responses of the I/O requests to the host through the added at least one CQ.
  • Further, the NVMe queues include N CQs, where N is an integer greater than 0, and the preset lower threshold is a sixth preset threshold. The processing unit 204 is configured to reduce the number of NVMe queues when the occupancy rate of the NVMe queues is less than or equal to the preset lower threshold, specifically by: deleting at least one CQ if the average occupancy rate of the N CQs is less than or equal to the sixth preset threshold.
  • Before deleting the at least one CQ, the processing unit 204 is further configured to: wait for the occupancy rate of the at least one CQ to be deleted to drop to 0.
  • The processing unit 204 is further configured to: delete all the SQs bound to the deleted at least one CQ, and wait for the occupancy rates of all those SQs to drop to 0 before deleting them.
  • Further, the apparatus further includes the input unit 201. The host currently includes M SQs and N CQs, each of the M SQs has established a corresponding binding relationship with one of the N CQs, M and N are both positive integers, and M is greater than or equal to N. The processing unit 204 is further configured to: receive an I/O request of the host through the input unit 201; select a target SQ from the M SQs according to a preset rule to deliver the I/O request; and feed back the response of the I/O request through the CQ bound to the target SQ, where the preset rule includes a round-robin rule or a rule that gives priority to the queue with the lowest occupancy rate.
  • An embodiment of the present invention further provides a computer storage medium, where the computer storage medium may store a program, and the program, when executed, performs some or all of the steps of any one of the queue management methods described in the foregoing method embodiments.
  • In the several embodiments provided in this application, it should be understood that the disclosed apparatus may be implemented in other ways. The device embodiments described above are merely illustrative. For example, the division of the above units is only a division by logical function; in actual implementation, there may be other division manners. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be electrical or in other forms.
  • The units described above as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, each unit may exist physically separately, or two or more units may be integrated into one unit. The above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
  • The above integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and includes a number of instructions for causing a computer device (which may be a personal computer, a server, or a network device, and in particular a processor in a computer device) to perform all or part of the steps of the above methods of the various embodiments of the present invention. The foregoing storage medium may include any medium that can store program code, such as a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a read-only memory (ROM), or a random access memory (RAM).

Abstract

An embodiment of the present invention discloses a queue management method and apparatus. The method may include: monitoring the occupancy rate of the NVMe queues of the host, where the NVMe queues include a submission queue SQ or a completion queue CQ, the SQ is used to deliver the I/O requests of the host to the NVMe SSD controller, and the CQ is used to feed back the responses of the NVMe SSD controller to those I/O requests to the host; and, if the occupancy rate of the NVMe queues is greater than or equal to a preset upper threshold, increasing the number of NVMe queues and processing the I/O data of the host through the added NVMe queues. The present invention can solve the problem in the prior art that the data processing performance of a server system using the NVMe protocol is degraded because the number of NVMe queues in the system is fixed.

Description

一种队列管理方法及装置 技术领域
本发明涉及计算机技术领域,尤其涉及一种队列管理方法及装置。
背景技术
目前,随着云计算及大数据的迅猛发展,采用快速非易失性存储(Non-Volatile Memory express,NVMe)协议的高速外围组件互连(Peripheral Component Interconnect express,PCIe)固态硬盘(Solid State Disk,SSD),简称NVMe SSD,以其低延迟、低功耗、高读写速度等优势受到诸多应用领域的青睐。
NVMe是一个可扩展的Host端控制接口,如图1所示,图1为NVMe SSD硬件模块结构图,图1中,Host端中的中央处理器(Central Processing Unit,CPU)利用根端口root port通过一个PCIe交换器(PCIe Switch)扩展出多个PCIe下行口,每个下行口可以连接一块采用NVMe协议的PCIe SSD盘(NVMe SSD),从而完成对该CPU的存储空间扩展。其中,每块NVMe SSD内部存在一个SSD控制器,用于解析NVMe协议以及处理输入/输出(Input/Output,I/O)等。
在现有的NVMe协议中,NVMe队列分为管理队列和I/O队列两种,其中管理队列主要用于设备的控制和管理,如I/O队列的创建、删除等;I/O队列主要用于磁盘的访问。I/O队列的递交队列(SubmissIon Queue,SQ)和完成队列(CompletIon Queue,CQ)不是固定一一对应的,可以在创建SQ队列时灵活配置,可以一个SQ对应一个CQ,也可以多个SQ对应一个CQ。
目前,NVMe驱动已经集成到标准Linux内核中,其为了避免对称多处理(Symmetrical Multi-Processing,SMP)环境对队列加锁进行CPU之间的互斥、提高CPU核的高速缓冲存储器Cache的利用率,在每个CPU核上面分别创建一个I/O SQ和一个I/O CQ(即一个SQ对应一个CQ),如图2所示,图2为NVMe驱动中的NVMe队列模型图,Host端可包括多个CPU,CPU可包括多个CPU核,(图2中以一个Host端包含一个CPU,一个CPU包含3个CPU核为例),图2中,主机Host端与NVMe SSD控制器之间的信息交互是基于NVMe队列中的SQ和CQ来实现的,Host端通过SQ下发命令(例如I/O请求),NVMe SSD控制器通过CQ返回命令响应(例如I/O请求响应)。即每个CPU核上的I/O数据都递交到该核对应的SQ中,NVMe SSD控制器从所有SQ中取出I/O数据,处理完成之后,再把处理结果写到对应的CQ中,同时,每个CQ会绑定一个中断,当把处理结果写到CQ之后,会对Host端产生一个中断,以提醒Host端进行处理结果的读取。如此一来,当Host端中的CPU核数量固定之后,则对应的SQ和CQ也就固定了,如果发生某些CPU核上的I/O请求过大时,则很有可能会因为SQ和CQ的个数的固定而导致SQ队列的溢出,只能等待,同时导致NVMe SSD控制器无法发挥最大的并发能力,最终导致Host端整体的数据处理性能下降。
发明内容
本发明实施例提供一种队列管理方法及装置,可以解决现有技术中NVMe协议的服务器系统中由于NVMe队列数目固定导致的服务器系统数据处理性能下降的问题。
第一方面,本发明实施例提供了一种队列管理方法,应用于采用快速非易失性存储NVMe协议的服务器系统中,其特征在于,所述服务器系统包括主机Host端和NVMe固态硬盘SSD控制器,所述队列管理方法可包括:
监控所述Host端的NVMe队列的占用率,所述NVMe队列包括递交队列SQ或完成队列CQ,其中,所述SQ用于将所述Host端的I/O请求下发至所述NVMe SSD控制器,所述CQ用于将所述NVMe SSD控制器针对所述I/O请求的响应反馈至所述Host端;在所述NVMe队列的占用率大于或等于预设上限阈值的情况下,增加NVMe队列的个数,并通过增加的NVMe队列处理所述Host端的I/O数据。
结合第一方面,在第一种可能的实现方式中,所述方法还包括:在所述NVMe队列的占用率小于或等于预设下限阈值的情况下,减少NVMe队列的个数。
结合第一方面,或者,结合第一方面的第一种可能的实现方式,在第二种可能的实现方式中,所述NVMe队列当前包括M个SQ,所述M为大于0的整数;所述预设上限阈值为第一预设阈值;所述在所述NVMe队列的占用率大于或等于预设上限阈值的情况下,增加NVMe队列的个数,并通过增加的NVMe队列处理所述Host端的I/O数据,包括:在所述M个SQ的平均占用率大于或等于第一预设阈值的情况下,增加至少一个SQ,并通过增加的所述至少一个SQ向所述NVMe SSD控制器下发所述Host端的I/O请求。
结合第一方面的第二种可能的实现方式,在第三种可能的实现方式中,所述方法还包括:将增加的所述至少一个SQ绑定至已有的CQ。
结合第一方面的第一种可能的实现方式,在第四种可能的实现方式中,所述NVMe队列包括M个SQ,所述M为大于0的整数;所述预设下限阈值为第二预设阈值;所述在所述NVMe队列的占用率小于或等于预设下限阈值的情况下,减少NVMe队列的个数,包括:在所述M个SQ的平均占用率小于或等于第二预设阈值的情况下,删除至少一个SQ。
结合第一方面的第四种可能的实现方式,在第五种可能的实现方式中,所述删除至少一个SQ之前,还包括:等待删除的所述至少一个SQ的占用率降为0。
结合第一方面的第二种可能的实现方式,或者,结合第一方面的第三种可能的实现方式,或者,结合第一方面的第四种可能的实现方式,或者,结合第一方面的第五种可能的实现方式,在第六种可能的实现方式中,所述方法还包括:在所述M个SQ中的任意一个SQ的占用率大于或等于第三预设阈值的情况下,禁止通过所述SQ进行I/O请求的下发。
结合第一方面的第六种可能的实现方式,在第七种可能的实现方式中,所述方法还包括:在被禁止进行I/O请求的下发的所述SQ的占用率小于或等于第四预设阈值的情况下,恢复通过所述SQ进行I/O请求的下发。
结合第一方面,或者结合第一方面的第一种可能的实现方式,或者,结合第一方 面的第二种可能的实现方式,或者,结合第一方面的第三种可能的实现方式,或者,结合第一方面的第四种可能的实现方式,或者,结合第一方面的第五种可能的实现方式,或者,结合第一方面的第六种可能的实现方式,或者,结合第一方面的第七种可能的实现方式,在第八种可能的实现方式中,所述NVMe队列包括N个CQ,所述N为大于0的整数;所述预设上限阈值为第五预设阈值;所述在所述NVMe队列的占用率大于或等于预设上限阈值的情况下,增加NVMe队列的个数,并通过增加的NVMe队列处理所述Host端的I/O数据,包括:在所述N个CQ的平均占用率大于或等于第五预设阈值的情况下,增加至少一个CQ,并通过增加的所述至少一个CQ向所述Host端反馈所述I/O请求的响应。
结合第一方面的第一种可能的实现方式,在第九种可能的实现方式中,所述NVMe队列包括N个CQ,所述N为大于0的整数;所述预设下限阈值为第六预设阈值;所述在所述NVMe队列的占用率小于或等于预设下限阈值的情况下,减少NVMe队列的个数的情况下,包括:在所述N个CQ的平均占用率小于或等于第六预设阈值的情况下,删除至少一个CQ。
结合第一方面的第九种可能的实现方式,在第十种可能的实现方式中,所述删除至少一个CQ之前,还包括:等待删除的所述至少一个CQ的占用率降为0。
结合第一方面的第九种可能的实现方式,或者,结合第一方面的第十种可能的实现方式,在第十一种可能的实现方式中,所述方法还包括:删除与删除的所述至少一个CQ进行了绑定的所有SQ,并在删除所述所有SQ之前等待所述所有SQ的占用率降为0。
结合第一方面,或者结合第一方面的第一种可能的实现方式,或者,结合第一方面的第二种可能的实现方式,或者,结合第一方面的第三种可能的实现方式,或者,结合第一方面的第四种可能的实现方式,或者,结合第一方面的第五种可能的实现方式,或者,结合第一方面的第六种可能的实现方式,或者,结合第一方面的第七种可能的实现方式,或者,结合第一方面的第八种可能的实现方式,或者,结合第一方面的第九种可能的实现方式,或者,结合第一方面的第十种可能的实现方式,或者,结合第一方面的第十一种可能的实现方式,在第十二种可能的实现方式中,所述Host端当前包括M个SQ和N个CQ,且所述M个SQ分别与所述N个CQ中的任意一个建立了对应的绑定关系,所述M与所述N均为正整数,且M大于或等于N;所述方法还包括:接收所述Host端的I/O请求;根据预设规则从所述M个SQ中任意选择一个目标SQ进行所述I/O请求的下发,并通过与所述目标SQ绑定的CQ进行所述I/O请求的响应的反馈,所述预设规则包括轮询规则或者占用率低优先的规则。
第二方面,本发明实施例提供了一种队列管理装置,应用于采用快速非易失性存储NVMe协议的服务器系统中,所述服务器系统包括主机Host端和NVMe固态硬盘SSD控制器,所述装置可包括:存储单元和处理单元;
其中,所述存储单元用于存储程序代码,所述处理单元用于调用所述存储单元存储的程序代码执行如下步骤:
监控所述Host端的NVMe队列的占用率,所述NVMe队列包括递交队列SQ或 完成队列CQ,其中,所述SQ用于将所述Host端的I/O请求下发至所述NVMe SSD控制器,所述CQ用于将所述NVMe SSD控制器针对所述I/O请求的响应反馈至所述Host端;在所述NVMe队列的占用率大于或等于预设上限阈值的情况下,增加NVMe队列的个数,并通过增加的NVMe队列处理所述Host端的I/O数据。
结合第二方面,在第一种可能的实现方式中,所述处理单元还用于:在所述NVMe队列的占用率小于或等于预设下限阈值的情况下,减少NVMe队列的个数。
结合第二方面,或者,结合第二方面的第一种可能的实现方式,在第二种可能的实现方式中,所述NVMe队列当前包括M个SQ,所述M为大于0的整数;所述预设上限阈值为第一预设阈值;所述处理单元用于在所述NVMe队列的占用率大于或等于预设上限阈值的情况下,增加NVMe队列的个数,并通过增加的NVMe队列处理所述Host端的I/O数据,具体为:在所述M个SQ的平均占用率大于或等于第一预设阈值的情况下,增加至少一个SQ,并通过增加的所述至少一个SQ向所述NVMe SSD控制器下发所述Host端的I/O请求。
结合第二方面的第二种可能的实现方式,在第三种可能的实现方式中,所述处理单元还用于:将增加的所述至少一个SQ绑定至已有的CQ。
结合第二方面的第一种可能的实现方式,在第四种可能的实现方式中,所述NVMe队列包括M个SQ,所述M为大于0的整数;所述预设下限阈值为第二预设阈值;所述处理单元用于在所述NVMe队列的占用率小于或等于预设下限阈值的情况下,减少NVMe队列的个数,具体为:在所述M个SQ的平均占用率小于或等于第二预设阈值的情况下,删除至少一个SQ。
结合第二方面的第四种可能的实现方式,在第五种可能的实现方式中,所述处理单元用于删除至少一个SQ之前,还用于:等待删除的所述至少一个SQ的占用率降为0。
结合第二方面的第二种可能的实现方式,或者,结合第二方面的第三种可能的实现方式,或者,结合第二方面的第四种可能的实现方式,或者,结合第二方面的第五种可能的实现方式,在第六种可能的实现方式中,所述处理单元还用于:在所述M个SQ中的任意一个SQ的占用率大于或等于第三预设阈值的情况下,禁止通过所述SQ进行I/O请求的下发。
结合第二方面的第六种可能的实现方式,在第七种可能的实现方式中,所述处理单元还用于:在被禁止进行I/O请求的下发的所述SQ的占用率小于或等于第四预设阈值的情况下,恢复通过所述SQ进行I/O请求的下发。
结合第二方面,或者结合第二方面的第一种可能的实现方式,或者,结合第二方面的第二种可能的实现方式,或者,结合第二方面的第三种可能的实现方式,或者,结合第二方面的第四种可能的实现方式,或者,结合第二方面的第五种可能的实现方式,或者,结合第二方面的第六种可能的实现方式,或者,结合第二方面的第七种可能的实现方式,在第八种可能的实现方式中,所述NVMe队列包括N个CQ,所述N为大于0的整数;所述预设上限阈值为第五预设阈值;所述处理单元用于在所述NVMe队列的占用率大于或等于预设上限阈值的情况下,增加NVMe队列的个数,并通过增加的NVMe队列处理所述Host端的I/O数据,具体为:在所述N个CQ的平均占用率 大于或等于第五预设阈值的情况下,增加至少一个CQ,并通过增加的所述至少一个CQ向所述Host端反馈所述I/O请求的响应。
结合第二方面的第一种可能的实现方式,在第九种可能的实现方式中,所述NVMe队列包括N个CQ,所述N为大于0的整数;所述预设下限阈值为第六预设阈值;所述处理单元用于在所述NVMe队列的占用率小于或等于预设下限阈值的情况下,减少NVMe队列的个数的情况下,具体为:在所述N个CQ的平均占用率小于或等于第六预设阈值的情况下,删除至少一个CQ。
结合第二方面的第九种可能的实现方式,在第十种可能的实现方式中,所述处理单元用于删除至少一个CQ之前,还具体用于:等待删除的所述至少一个CQ的占用率降为0。
结合第二方面的第九种可能的实现方式,或者,结合第二方面的第十种可能的实现方式,在第十一种可能的实现方式中,所述处理单元还用于:删除与删除的所述至少一个CQ进行了绑定的所有SQ,并在删除所述所有SQ之前等待所述所有SQ的占用率降为0。
结合第二方面,或者结合第二方面的第一种可能的实现方式,或者,结合第二方面的第二种可能的实现方式,或者,结合第二方面的第三种可能的实现方式,或者,结合第二方面的第四种可能的实现方式,或者,结合第二方面的第五种可能的实现方式,或者,结合第二方面的第六种可能的实现方式,或者,结合第二方面的第七种可能的实现方式,或者,结合第二方面的第八种可能的实现方式,或者,结合第二方面的第九种可能的实现方式,或者,结合第二方面的第十种可能的实现方式,或者,结合第二方面的第十一种可能的实现方式,在第十二种可能的实现方式中,所述装置还包括输入单元;所述Host端当前包括M个SQ和N个CQ,且所述M个SQ分别与所述N个CQ中的任意一个建立了对应的绑定关系,所述M与所述N均为正整数,且M大于或等于N;所述处理单元还用于:通过所述和输入单元接收所述Host端的I/O请求;根据预设规则从所述M个SQ中任意选择一个目标SQ进行所述I/O请求的下发,并通过与所述目标SQ绑定的CQ进行所述I/O请求的响应的反馈,所述预设规则包括轮询规则或者占用率低优先的规则。
实施本发明实施例,具有如下有益效果:
本发明实施例,通过监控Host端的NVMe队列的占用率,在NVMe队列的占用率大于或等于预设上限阈值的情况下,增加NVMe队列的个数,并通过增加的NVMe队列处理Host端的I/O数据。可以解决现有技术中NVMe协议的服务器系统中由于NVMe队列数目固定导致的服务器系统数据处理性能下降的问题。
附图说明
为了更清楚地说明本发明实施例或现有技术中的技术方案,下面将对实施例或现有技术描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本发明的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1是本发明实施例提供的NVMe SSD硬件模块结构图;
图2是本发明实施例提供的NVMe驱动中的NVMe队列模型图;
图3是本发明实施例提供的Host端的软件架构图;;
图4是本发明实施例提供的一种队列管理方法的流程示意图;
图5是本发明实施例提供的另一种队列管理方法的流程示意图;
图6为本发明实施例提供的SQ列表的软件结构示意图;
图7是本发明实施例提供的CQ列表的软件结构示意图;
图8是本发明实施例提供的一种队列管理装置的结构示意图;
图9是本发明实施例提供的一种队列管理装置的另一实施例的结构示意图;
图10是本发明实施例提供的另一种队列管理装置的结构示意图。
具体实施方式
下面将结合本发明实施例中的附图,对本发明实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本发明一部分实施例,而不是全部的实施例。基于本发明中的实施例,本领域普通技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本发明保护的范围。
本发明的说明书和权利要求书及所述附图中的术语“第一”、“第二”、“第三”和“第四”等是用于区别不同对象,而不是用于描述特定顺序。此外,术语“包括”和“具有”以及它们任何变形,意图在于覆盖不排他的包含。例如包含了一系列步骤或单元的过程、方法、系统、产品或设备没有限定于已列出的步骤或单元,而是可选地还包括没有列出的步骤或单元,或可选地还包括对于这些过程、方法、产品或设备固有的其它步骤或单元。
在本文中提及“实施例”意味着,结合实施例描述的特定特征、结构或特性可以包含在本发明的至少一个实施例中。在说明书中的各个位置出现该短语并不一定均是指相同的实施例,也不是与其它实施例互斥的独立的或备选的实施例。本领域技术人员显式地和隐式地理解的是,本文所描述的实施例可以与其它实施例相结合。
以下,对本申请中的部分用语进行解释说明,以便于本领域技术人员理解。
1)递交队列(SubmissIon Queue,SQ)和完成队列(CompletIon Queue,CQ),都是一个先进先出队列(First Input First Output,FIFO)的管道,用于连通主机(Host)端和NVMe SSD控制器。都是一段内存,通常位于主机Host端的双倍速率同步动态随机存储器(Double Data Rate,DDR)空间里。这段内存划分成若干等长的内存块,每一块用于存储一个定常的消息(NVMe的发送消息和完成消息都是定常的)。在使用的时候,对于这个队列,有一个头指针和一个尾指针。当两者相等时,队列是空的。随着新的消息加入到队列中来,尾指针不停向前移动。因为内存是定常的,因此指针一旦移动到队内存的最后一个存储空间,之后再移动的话需要环回到内存的起始位置。因此内存在使用上实际上当作一个环来循环使用。当尾指针的下一个指针就是头指针的时候,这个队列不能再接收新的消息,即队列已经满了。
2)固态硬盘(Solid State Disk,SSD)可以包括,但不仅限于各种类型的非易失性存储器,诸如3维交叉点存储器、闪存、铁电存储器、硅氧化物氮化物氧化物硅(SONOS)存储器、聚合物存储器、纳米线、铁电晶体管随机存取存储器(FeTRAM或 FeRAM)、纳米线或电可擦可编程只读存储器(EEPROM)。
3)术语“服务器”、“存储器服务器”或“远程服务器”或“云服务器”可以可互换地使用,并可以表示例如但不限于:可被“主机计算机”、“主机设备”、“主机”、“客户端设备”、“客户端”、“网络节点”,以及“节点”远程访问的(例如,通过网络连接)的服务器。
4)“多个”是指两个或两个以上。“和/或”,描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。字符“/”一般表示前后关联对象是一种“或”的关系。
下面结合附图对本申请的实施例进行描述。
为了便于理解本发明实施例,下面先对本发明实施例所基于的Host端的软件架构进行描述。请参阅图3,图3为本发明实施例提供的Host端的软件架构图,该软件架构中包含了CPU中的多个CPU核001,NVMe驱动002以及M个SQ 003和N个CQ 004,在该软件架构中将CPU中的各个CPU核看做一个整体001(多个CPU核),SQ或CQ不再是与单个的CPU核进行绑定,SQ和CQ之间也无需是一一对应的关系,而是一个CPU中的所有CPU核都可以共用该CPU下的所有SQ或CQ,因此可以进一步均匀该CPU上的各个CPU核的I/O请求。可以理解的是,可以是一个SQ对应一个CQ,也可以是多个SQ对应一个CQ,可以进行灵活设置。其中,NVMe驱动用来维护该驱动中的SQ列表,当某个CPU核上的业务需要下发I/O时,先从该NVMe驱动维护的SQ列表中获取一个SQ序号,然后把I/O下发到该序号的SQ队列中。并在此过程中,监控单个SQ或CQ、整体SQ、整体CQ的占用率,当达到预设的阈值时,增加或删除SQ或CQ队列,以及维护SQ和CQ的对应关系。可以理解的是,本发明中的队列管理方法可以应用在图3中在Host端的NVMe驱动中,以上图3中的软件架构只是本发明实施例中较优的一种实施方式,本发明实施例中的软件架构包括但不仅限于以上软件架构。
可以理解的是,本发明所提供的Host端可以应用于采用快速非易失性存储NVMe协议的服务器系统中,该服务器系统可以包括应用了本发明提供的队列管理方法的主机Host端、通过NVMe协议扩展出的多个NVMe SSD,以及NVMe SSD控制器等,本发明对本发明提供的队列管理方法所应用的服务器系统的具体结构不作限定。所述Host端通过NVMe队列与所述NVMe SSD控制器进行信息交互,参见图4,图4是本发明实施例提供的一种队列管理方法的流程示意图,下面将结合附图4从Host端的NVMe驱动侧对本发明实施例中的队列管理方法进行详细描述。如图4所示,该方法可以包括以下步骤S401-步骤S402。
步骤S401:监控所述Host端的NVMe队列的占用率。
具体地,NVMe队列包括递交队列SQ或完成队列CQ,其中,SQ用于将所述Host端的I/O请求下发至所述NVMe SSD控制器,CQ用于将所述NVMe SSD控制器针对所述I/O请求的响应反馈至所述Host端。本发明实施例中,监控NVMe队列的作用在于,便于后续根据所述NVMe队列的占用率动态地调整所述NVMe队列的个数。其中,NVMe队列包括SQ或CQ的原因在于,针对现有技术中SQ和CQ的数目是固定的缺陷,调整SQ或者是CQ都可以在一定程度上解决I/O压力过大的问题,前者是 解决I/O请求压力过大的问题,后者是解决I/O请求的响应压力过大的问题,两者若结合则可以既解决I/O请求压力过大的问题,又可以解决I/O请求的响应压力过大的问题。因此,本发明实施例中的NVMe队列只要至少包括SQ和CQ中的至少一种,都可以带来极大的有益效果。并且调整SQ的与调整CQ不必然相关联,可以是分开进行的。
步骤S402:在所述NVMe队列的占用率大于或等于预设上限阈值的情况下,增加NVMe队列的个数,并通过增加的NVMe队列处理所述Host端的I/O数据。
具体地,当Host端的NVMe队列中的SQ队列的占用率达到预设上限阈值,证明此时I/O请求的处理已经达到极限,需要对SQ队列进行调整,本发明实施例通过动态调整并增加SQ队列来增加容纳和处理I/O请求的能力。当Host端的NVMe队列中的CQ的占用率达到预设上限阈值,证明此时I/O请求的响应的存放已经达到极限,需要对CQ进行调整,即通过动态调整并增加CQ队列来增加容纳和处理I/O请求的响应的能力。需要说明的是,SQ的监控与CQ之间的监控互不干扰,即可以只监控SQ,也可以只监控CQ,还可以同时监控SQ和CQ。此处的预设上限阈值是指一类取值,并不是一个具体的取值,也就是说,针对SQ或者针对CQ时,其取值可以相同,也可以不同,即可以灵活设定,本发明对此不作具体限定。
本发明实施例,不需要针对不同的使用场景去设计队列管理方案或者手动调节队列,而会根据系统的I/O压力会动态调整NVMe队列,自动达到最佳性能,且资源开销最少。同时又可以充分利用NVMe SSD控制器的最大并发能力,提供现有技术无法提供的队列个数,提高性能。
参见图5,图5是本发明实施例提供的另一种队列管理方法的流程示意图。下面将结合附图5从Host端的NVMe驱动侧对本发明实施例中的队列管理方法进行详细描述。该方法可以包括以下步骤S501-步骤S503。
步骤S501:监控所述Host端的NVMe队列的占用率,所述NVMe队列包括递交队列SQ或完成队列CQ,其中,所述SQ用于将所述Host端的I/O请求下发至所述NVMe SSD控制器,所述CQ用于将所述NVMe SSD控制器针对所述I/O请求的响应反馈至所述Host端。
步骤S502:在所述NVMe队列的占用率大于或等于预设上限阈值的情况下,增加NVMe队列的个数,并通过增加的NVMe队列处理所述Host端的I/O数据。
具体地,步骤S501至步骤S502可以对应地参考图4提供的实施例中的步骤S401至步骤S402,具体的实现方式,这里不再赘述。
针对步骤S502,在一种可能的实现方式中,所述NVMe队列当前包括M个SQ,所述M为大于0的整数;所述预设上限阈值为第一预设阈值;所述在所述NVMe队列的占用率大于或等于预设上限阈值的情况下,增加NVMe队列的个数,并通过增加的NVMe队列处理所述Host端的I/O数据,包括:在所述M个SQ的平均占用率大于或等于第一预设阈值的情况下,增加至少一个SQ,并通过增加的所述至少一个SQ向所述NVMe SSD控制器下发所述Host端的I/O请求。当Host端的所有SQ的平均占用率,大于或等于第一预设阈值(例如80%,即100个队列当前被80个I/O请求所占用),则说明此时当前存在的SQ的数量与I/O的请求数已经接近承受的边缘。因此需 要通过增加即创建至少一个SQ,来缓解当前的I/O压力,至于具体增加多少个,可以根据当前的具体的I/O请求数,进行灵活调控。
Further, the added at least one SQ is bound to an existing CQ. After an I/O request has been delivered to an SQ, the ultimate purpose is for a CQ to cooperate in feeding back the response to that I/O request to the host; an SQ must therefore be bound to some CQ before a complete I/O request and its corresponding response can take place. As for the specific binding principle, a round-robin rule or a rule that binds to the CQ with the currently lowest occupancy first may be used; the present invention does not specifically limit this.
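As an illustration of the lowest-occupancy-first binding rule mentioned above, the following minimal sketch picks the CQ to which a newly created SQ should be bound; the names and the percentage representation are assumptions for the sketch.

    #include <stdio.h>

    /* Pick the CQ with the lowest current occupancy for a new SQ;
     * occ[] holds each CQ's occupancy as a 0-100 percentage. */
    static unsigned pick_cq_lowest_occupancy(const unsigned *occ, unsigned n) {
        unsigned best = 0;
        for (unsigned i = 1; i < n; i++)
            if (occ[i] < occ[best])
                best = i;
        return best;
    }

    int main(void) {
        unsigned cq_occ[] = {40, 15, 70};
        printf("bind new SQ to CQ %u\n", pick_cq_lowest_occupancy(cq_occ, 3));
        return 0;
    }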
In a possible implementation, the NVMe queues include M SQs, where M is an integer greater than 0, and the preset lower threshold is a second preset threshold. Reducing the number of NVMe queues when the occupancy of the NVMe queues is less than or equal to the preset lower threshold includes: deleting at least one SQ when the average occupancy of the M SQs is less than or equal to the second preset threshold. When the average occupancy of all SQs on the host is less than or equal to the second preset threshold (for example 20%, that is, only 20 of every 100 queue slots are currently occupied by I/O requests), it indicates that the current number of SQs may be badly mismatched with the number of I/O requests. Therefore at least one SQ needs to be removed, that is, deleted, to free memory and reduce the waste of system resources, including memory space. It can be understood that, if the host currently has only one SQ, it need not be deleted, lest it have to be re-created when I/O requests arrive again.
Further, before the at least one SQ is deleted, the method also includes: waiting for the occupancy of the at least one SQ to be deleted to drop to 0. It can be understood that, before an SQ is deleted, it must be ensured that the I/O requests in the SQ have finished processing, that is, the SQ's current occupancy is 0; otherwise I/O requests in the SQ that have not yet finished processing would be deleted by mistake, I/O requests would be lost, and a system error would result.
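A minimal sketch of this drain-before-delete rule; the struct, the flag, and the loop that stands in for arriving completions are assumptions for the sketch (a real driver would stop submission and sleep on completions rather than spin).

    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical per-SQ state for this sketch. */
    struct sq {
        unsigned outstanding;   /* I/O requests not yet completed */
        bool submit_enabled;    /* whether new I/O may be placed on this SQ */
    };

    /* Stop feeding the SQ, wait for it to drain, then it may be deleted. */
    static void delete_sq_safely(struct sq *q) {
        q->submit_enabled = false;   /* no new I/O is delivered to this SQ */
        while (q->outstanding != 0)
            q->outstanding--;        /* stand-in for completions arriving */
        /* occupancy is now 0: the SQ can be torn down without losing I/O */
    }

    int main(void) {
        struct sq q = {3, true};
        delete_sq_safely(&q);
        printf("outstanding=%u enabled=%d\n", q.outstanding, q.submit_enabled);
        return 0;
    }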
In a possible implementation, when the occupancy of any one of the M SQs is greater than or equal to a third preset threshold, delivery of I/O requests through that SQ is prohibited. If, during monitoring, the occupancy of each individual SQ is monitored as well, it can be guaranteed that the I/O requests on every SQ are allocated evenly, avoiding scenarios in which the average occupancy of all SQs is low while one or more individual SQs are extremely highly occupied. That is, by monitoring the occupancy of each SQ and keeping it within the bound of a third threshold, when an SQ's occupancy equals or exceeds the third preset threshold, delivery of I/O requests through that SQ is stopped or prohibited, letting the SQ "digest" its backlog for a while. Further, when the occupancy of the SQ that was prohibited from delivering I/O requests becomes less than or equal to a fourth preset threshold, delivery of I/O requests through that SQ is resumed; that is, once the overloaded SQ has digested its I/O requests and returned to a normal occupancy (less than or equal to the fourth preset threshold), delivery of I/O requests through it resumes. In this way, the enabling and disabling of SQ queues is regulated flexibly.
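A minimal sketch of this per-SQ hysteresis, assuming the illustrative values 90% for the third preset threshold and 50% for the fourth (both values, and all names, are assumptions for the sketch).

    #include <stdbool.h>
    #include <stdio.h>

    #define THIRD_THRESHOLD_PCT  90   /* stop delivery at or above this */
    #define FOURTH_THRESHOLD_PCT 50   /* resume delivery at or below this */

    /* Update one SQ's enable flag from its occupancy percentage. The gap
     * between the two thresholds gives hysteresis, so the SQ is not
     * rapidly toggled on and off around a single threshold. */
    static bool update_sq_enable(bool enabled, unsigned occ_pct) {
        if (occ_pct >= THIRD_THRESHOLD_PCT)
            return false;             /* overloaded: prohibit delivery */
        if (occ_pct <= FOURTH_THRESHOLD_PCT)
            return true;              /* drained back to normal: resume */
        return enabled;               /* in between: keep previous state */
    }

    int main(void) {
        bool en = true;
        unsigned trace[6] = {70, 92, 80, 55, 50, 75};
        for (unsigned i = 0; i < 6; i++) {
            en = update_sq_enable(en, trace[i]);
            printf("occ=%u%% enabled=%d\n", trace[i], en);
        }
        return 0;
    }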
Step S503: When the occupancy of the NVMe queues is less than or equal to a preset lower threshold, reduce the number of NVMe queues.
In a possible implementation, the NVMe queues include N CQs, where N is an integer greater than 0, and the preset upper threshold is a fifth preset threshold. Increasing the number of NVMe queues when the occupancy of the NVMe queues is greater than or equal to the preset upper threshold, and processing the host's I/O data through the added NVMe queues, includes: when the average occupancy of the N CQs is greater than or equal to the fifth preset threshold, adding at least one CQ, and feeding back the responses to the I/O requests to the host through the added at least one CQ. When the average occupancy of all CQs on the host is greater than or equal to the fifth preset threshold (for example 80%, that is, 80 of every 100 CQ slots are currently occupied by responses to I/O requests), it indicates that the current number of CQs is close to the limit of what the current number of I/O responses can bear. Therefore at least one CQ needs to be added, that is, created, to relieve the current pressure of storing the responses to I/O requests; how many to add can be regulated flexibly according to the actual number of current I/O responses. It can be understood that, although the preset lower threshold in this embodiment of the present invention reads the same as the preset lower threshold in step S503, the preset lower threshold in the present invention is only a concept rather than a specific value; a specific value is assigned to it only when it is applied to an actual queue, and that value can also be set flexibly according to the actual situation, which the present invention does not specifically limit.
In a possible implementation, the NVMe queues include N CQs, where N is an integer greater than 0, and the preset lower threshold is a sixth preset threshold. Reducing the number of NVMe queues when the occupancy of the NVMe queues is less than or equal to the preset lower threshold includes: deleting at least one CQ when the average occupancy of the N CQs is less than or equal to the sixth preset threshold. When the average occupancy of all CQs on the host is less than or equal to the sixth preset threshold (for example 20%, that is, only 20 of every 100 queue slots are currently occupied by responses to I/O requests), it indicates that the current number of CQs may be badly mismatched with the number of I/O responses. Therefore at least one CQ needs to be removed, that is, deleted, to free memory and reduce the waste of system resources, including memory space and interrupts. It can be understood that, if the host currently has only one CQ, it need not be deleted, lest it have to be re-created when responses to I/O requests arrive again. Further, before the at least one CQ is deleted, the method also includes: waiting for the occupancy of the at least one CQ to be deleted to drop to 0; that is, before a CQ is deleted, it is also necessary to wait until all responses to I/O requests in that CQ have been taken out (read) by the corresponding CPU, otherwise responses to I/O requests would be lost and a system error would result. Still further, all SQs bound to the at least one CQ being deleted are deleted, and the method waits for the occupancy of all those SQs to drop to 0 before deleting them. Because every SQ must be bound to some CQ, deleting a CQ inevitably affects the SQs bound to it; it is therefore also necessary to wait for the occupancy of those SQs to drop to 0 before the CQ can be deleted, otherwise some SQs would be left with no CQ to feed back the responses to their I/O requests, ultimately causing a system error.
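A minimal sketch of the teardown order described above; all structures, the fixed-size binding array, and the loops that stand in for completions and CPU reads are assumptions for the sketch.

    #include <stdio.h>

    struct sq { unsigned outstanding; };

    /* Hypothetical bookkeeping: a CQ, its unread responses, and the SQs
     * currently bound to it. */
    struct cq {
        unsigned outstanding;    /* responses not yet read by the CPU */
        struct sq *bound[16];    /* SQs currently bound to this CQ */
        unsigned nbound;
    };

    /* Drain and delete every bound SQ first, then wait for the CPU to
     * read out the remaining responses, then the CQ may be deleted. */
    static void delete_cq_safely(struct cq *c) {
        for (unsigned i = 0; i < c->nbound; i++) {
            while (c->bound[i]->outstanding != 0)
                c->bound[i]->outstanding--;   /* stand-in for completions */
            c->bound[i] = NULL;               /* SQ torn down */
        }
        c->nbound = 0;
        while (c->outstanding != 0)
            c->outstanding--;                 /* stand-in for CPU reads */
        /* the CQ and all its SQs are now empty; safe to delete */
    }

    int main(void) {
        struct sq a = {2}, b = {1};
        struct cq c = {5, {&a, &b}, 2};
        delete_cq_safely(&c);
        printf("cq outstanding=%u nbound=%u\n", c.outstanding, c.nbound);
        return 0;
    }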
In a possible implementation, the host currently includes M SQs and N CQs, each of the M SQs has established a binding relationship with one of the N CQs, M and N are both positive integers, and M is greater than or equal to N. The method further includes: receiving an I/O request of the host; selecting, according to a preset rule, one target SQ from the M SQs for delivery of the I/O request, and feeding back the response to the I/O request through the CQ bound to that target SQ, where the preset rule includes a round-robin rule or a lowest-occupancy-first rule. In a specific implementation, the host may store lists of the SQs and CQs, and the host driver allocates and regulates the SQs and CQs according to those lists in combination with the relevant rules. The specific form of the SQ list may be as shown in FIG. 6, which is a schematic diagram of the software structure of the SQ list provided by an embodiment of the present invention; it is essentially a singly linked circular list, and each node stores an SQ number, whether that SQ is enabled, and the number of the CQ associated with that SQ. In addition, two global pointers are involved here: one indicates after which node a new node is to be added, and points to the newly added node once it has been inserted; the other indicates to which node the next I/O should be sent, and moves automatically to the next node after being read. The main function of the SQ list is to select an SQ for I/O delivery for the services on each CPU core and to guarantee even use of the SQ queues. The specific form of the CQ list may be as shown in FIG. 7, which is a schematic diagram of the software structure of the CQ list provided by an embodiment of the present invention; it is essentially a two-dimensional singly linked list, and each node stores a CQ number, the number of SQs currently associated with it, a pointer to the next CQ, and a pointer to its SQ list. When a new SQ is added, it is attached to the CQ with the fewest currently associated SQs; as shown in FIG. 7, the new SQ is then associated with CQ1. The main function of this relation list is to maintain the correspondence between SQs and CQs and to guarantee even use of the CQ queues.
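As an illustration, the following is a minimal sketch of an SQ list of the shape described for FIG. 6: a singly linked circular list with one global pointer for insertion and one for round-robin dispatch. The node layout, the names, and the malloc-based user-space setting are assumptions for the sketch; the dispatch function also assumes at least one SQ is enabled.

    #include <stdbool.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* One node of the singly linked circular SQ list of FIG. 6. */
    struct sq_node {
        unsigned sq_id;          /* SQ number */
        bool enabled;            /* whether this SQ may receive new I/O */
        unsigned cq_id;          /* number of the CQ this SQ is bound to */
        struct sq_node *next;
    };

    static struct sq_node *insert_pos;   /* new nodes go after this node */
    static struct sq_node *dispatch_pos; /* the next I/O goes to this node */

    /* Add an SQ after insert_pos; the global then points at the new node. */
    static void sq_list_add(unsigned sq_id, unsigned cq_id) {
        struct sq_node *n = malloc(sizeof *n);
        n->sq_id = sq_id;
        n->enabled = true;
        n->cq_id = cq_id;
        if (!insert_pos) {               /* first node: a one-element ring */
            n->next = n;
            dispatch_pos = n;
        } else {
            n->next = insert_pos->next;
            insert_pos->next = n;
        }
        insert_pos = n;
    }

    /* Pick the SQ for the next I/O, skipping disabled SQs, then advance. */
    static unsigned sq_list_pick(void) {
        while (!dispatch_pos->enabled)
            dispatch_pos = dispatch_pos->next;
        unsigned id = dispatch_pos->sq_id;
        dispatch_pos = dispatch_pos->next;
        return id;
    }

    int main(void) {
        sq_list_add(0, 0);
        sq_list_add(1, 0);
        sq_list_add(2, 1);
        for (int i = 0; i < 5; i++)
            printf("I/O %d -> SQ %u\n", i, sq_list_pick());
        return 0;
    }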
In addition to the benefits of the embodiment corresponding to steps S401 and S402 above, this embodiment of the present invention also solves the performance degradation caused by uneven I/O across CPU cores: by spreading the I/O of each CPU core over all the queues, an even distribution of I/O is achieved. Meanwhile, this embodiment of the present invention is suitable for any scenario: it fits low-pressure usage scenarios as well as high-pressure scenarios with many CPU cores and many NVMe disks, while maintaining excellent performance.
An embodiment of the present invention further provides a queue management apparatus 10. As shown in FIG. 8, FIG. 8 is a schematic structural diagram of a queue management apparatus in an embodiment of the present invention. The structure of the queue management apparatus 10 is described in detail below with reference to FIG. 8. The apparatus 10 may include a monitoring module 101 and a first management module 102, where
the monitoring module 101 is configured to monitor the occupancy of the NVMe queues of the host, where the NVMe queues include submission queues (SQ) or completion queues (CQ), the SQ being used to deliver the host's I/O requests to the NVMe SSD controller, and the CQ being used to feed back the NVMe SSD controller's responses to the I/O requests to the host; and
the first management module 102 is configured to, when the occupancy of the NVMe queues is greater than or equal to a preset upper threshold, increase the number of NVMe queues and process the host's I/O data through the added NVMe queues.
Specifically, as shown in FIG. 9, which is a schematic structural diagram of another embodiment of the queue management apparatus provided by the present invention, the apparatus 10 may further include a second management module 103, where
the second management module 103 is configured to reduce the number of NVMe queues when the occupancy of the NVMe queues is less than or equal to a preset lower threshold.
Further, the NVMe queues currently include M SQs, where M is an integer greater than 0, and the preset upper threshold is a first preset threshold; the first management module 102 is specifically configured to: when the average occupancy of the M SQs is greater than or equal to the first preset threshold, add at least one SQ, and deliver the host's I/O requests to the NVMe SSD controller through the added at least one SQ.
Still further, as shown in FIG. 9, the apparatus 10 may further include:
a third management module 104, configured to bind the added at least one SQ to an existing CQ.
Still further, the NVMe queues include M SQs, where M is an integer greater than 0, and the preset lower threshold is a second preset threshold; the second management module 103 is specifically configured to: delete at least one SQ when the average occupancy of the M SQs is less than or equal to the second preset threshold.
Still further, the second management module 103 is specifically configured to: when the average occupancy of the M SQs is less than or equal to the second preset threshold, wait for the occupancy of the at least one SQ to be deleted to drop to 0, and then delete the at least one SQ.
Still further, as shown in FIG. 9, the apparatus 10 may further include:
a fourth management module 105, configured to prohibit delivery of I/O requests through an SQ when the occupancy of any one of the M SQs is greater than or equal to a third preset threshold.
Still further, as shown in FIG. 9, the apparatus 10 may further include:
a fifth management module 106, configured to resume delivery of I/O requests through the SQ when the occupancy of the SQ that was prohibited from delivering I/O requests is less than or equal to a fourth preset threshold.
Still further, the NVMe queues include N CQs, where N is an integer greater than 0, and the preset upper threshold is a fifth preset threshold; the first management module 102 is specifically configured to: when the average occupancy of the N CQs is greater than or equal to the fifth preset threshold, add at least one CQ, and feed back the responses to the I/O requests to the host through the added at least one CQ.
Still further, the NVMe queues include N CQs, where N is an integer greater than 0, and the preset lower threshold is a sixth preset threshold; the second management module 103 is specifically configured to: delete at least one CQ when the average occupancy of the N CQs is less than or equal to the sixth preset threshold.
Still further, the second management module 103 is specifically configured to:
when the average occupancy of the N CQs is less than or equal to the sixth preset threshold, wait for the occupancy of the at least one CQ to be deleted to drop to 0, and then delete the at least one CQ.
Still further, as shown in FIG. 9, the apparatus 10 may further include:
a sixth management module 107, configured to delete all SQs bound to the at least one CQ being deleted, and to wait for the occupancy of all those SQs to drop to 0 before deleting them.
Still further, the host currently includes M SQs and N CQs, each of the M SQs has established a binding relationship with one of the N CQs, M and N are both positive integers, and M is greater than or equal to N; as shown in FIG. 9, the apparatus 10 may further include:
a seventh management module 108, configured to receive an I/O request of the host, select, according to a preset rule, one target SQ from the M SQs for delivery of the I/O request, and feed back the response to the I/O request through the CQ bound to the target SQ, where the preset rule includes a round-robin rule or a lowest-occupancy-first rule.
It can be understood that, for the functions of the modules in the queue management apparatus 10, reference may be made to the specific implementations in the method embodiments corresponding to FIG. 2 to FIG. 7 above, which are not repeated here.
Referring to FIG. 10, FIG. 10 shows another queue management apparatus 20 provided by an embodiment of the present invention, applied in a server system using the Non-Volatile Memory Express (NVMe) protocol, the server system including a host and an NVMe solid state disk (SSD) controller. The queue management apparatus 20 may include an input unit 201, an output unit 202, a storage unit 203, and a processing unit 204. In some embodiments of the present invention, a bus is used to implement the communication connections between these components. The input unit 201 may specifically be a touch panel of a terminal, including a touch screen, for detecting operation instructions on the terminal's touch panel; the output unit 202 may include a display of the terminal, for outputting and displaying images or data; the storage unit 203 may be a high-speed RAM memory or a non-volatile memory, for example at least one disk memory, and optionally the storage unit 203 may also be at least one storage device located far away from the aforementioned processing unit 204. As shown in FIG. 10, as a computer storage medium, the storage unit 203 may contain an operating system, a network communication module, a user interface module, and a data processing program.
The storage unit 203 is configured to store program code, and the processing unit 204 is configured to call the program code stored in the storage unit 203 to perform the following steps:
monitoring the occupancy of the NVMe queues of the host, where the NVMe queues include submission queues (SQ) or completion queues (CQ), the SQ being used to deliver the host's I/O requests to the NVMe SSD controller, and the CQ being used to feed back the NVMe SSD controller's responses to the I/O requests to the host; and
when the occupancy of the NVMe queues is greater than or equal to a preset upper threshold, increasing the number of NVMe queues and processing the host's I/O data through the added NVMe queues.
Specifically, the processing unit 204 is further configured to:
reduce the number of NVMe queues when the occupancy of the NVMe queues is less than or equal to a preset lower threshold.
Further, the NVMe queues currently include M SQs, where M is an integer greater than 0, and the preset upper threshold is a first preset threshold;
the processing unit 204 is configured to, when the occupancy of the NVMe queues is greater than or equal to the preset upper threshold, increase the number of NVMe queues and process the host's I/O data through the added NVMe queues, specifically by:
when the average occupancy of the M SQs is greater than or equal to the first preset threshold, adding at least one SQ, and delivering the host's I/O requests to the NVMe SSD controller through the added at least one SQ.
Still further, the processing unit 204 is further configured to:
bind the added at least one SQ to an existing CQ.
Still further, the NVMe queues include M SQs, where M is an integer greater than 0, and the preset lower threshold is a second preset threshold;
the processing unit 204 is configured to reduce the number of NVMe queues when the occupancy of the NVMe queues is less than or equal to the preset lower threshold, specifically by:
deleting at least one SQ when the average occupancy of the M SQs is less than or equal to the second preset threshold.
Still further, before deleting the at least one SQ, the processing unit 204 is further configured to:
wait for the occupancy of the at least one SQ to be deleted to drop to 0.
Still further, the processing unit 204 is further configured to:
prohibit delivery of I/O requests through an SQ when the occupancy of any one of the M SQs is greater than or equal to a third preset threshold.
Still further, the processing unit 204 is further configured to:
resume delivery of I/O requests through the SQ when the occupancy of the SQ that was prohibited from delivering I/O requests is less than or equal to a fourth preset threshold.
Still further, the NVMe queues include N CQs, where N is an integer greater than 0, and the preset upper threshold is a fifth preset threshold;
the processing unit 204 is configured to, when the occupancy of the NVMe queues is greater than or equal to the preset upper threshold, increase the number of NVMe queues and process the host's I/O data through the added NVMe queues, specifically by:
when the average occupancy of the N CQs is greater than or equal to the fifth preset threshold, adding at least one CQ, and feeding back the responses to the I/O requests to the host through the added at least one CQ.
Still further, the NVMe queues include N CQs, where N is an integer greater than 0, and the preset lower threshold is a sixth preset threshold; the processing unit 204 is configured to reduce the number of NVMe queues when the occupancy of the NVMe queues is less than or equal to the preset lower threshold, specifically by:
deleting at least one CQ when the average occupancy of the N CQs is less than or equal to the sixth preset threshold.
Still further, before deleting the at least one CQ, the processing unit 204 is further specifically configured to:
wait for the occupancy of the at least one CQ to be deleted to drop to 0.
Still further, the processing unit 204 is further configured to:
delete all SQs bound to the at least one CQ being deleted, and wait for the occupancy of all those SQs to drop to 0 before deleting them.
Still further, the host currently includes M SQs and N CQs, each of the M SQs has established a binding relationship with one of the N CQs, M and N are both positive integers, and M is greater than or equal to N; the processing unit 204 is further configured to:
receive an I/O request of the host through the input unit 201; and
select, according to a preset rule, one target SQ from the M SQs for delivery of the I/O request, and feed back the response to the I/O request through the CQ bound to the target SQ, where the preset rule includes a round-robin rule or a lowest-occupancy-first rule.
It can be understood that, for the functions of the units in the queue management apparatus 20, reference may be made to the specific implementations in the method embodiments corresponding to FIG. 2 to FIG. 7 above, which are not repeated here.
An embodiment of the present invention further provides a computer storage medium, where the computer storage medium may store a program, and when executed, the program performs some or all of the steps of any one of the queue management methods described in the method embodiments above.
In the embodiments above, the description of each embodiment has its own emphasis; for parts not detailed in one embodiment, reference may be made to the relevant descriptions of other embodiments.
It should be noted that, for brevity of description, the foregoing method embodiments are all expressed as a series of action combinations, but those skilled in the art should know that the present invention is not limited by the described order of actions, because according to the present invention, some steps may be performed in other orders or simultaneously. Secondly, those skilled in the art should also know that the embodiments described in the specification are all preferred embodiments, and the actions and modules involved are not necessarily required by the present invention.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; for instance, the division of the units is merely a logical functional division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. Furthermore, the mutual couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses, or units, and may be electrical or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place, or they may be distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like, and may specifically be a processor in a computer device) to perform all or some of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium may include various media capable of storing program code, such as a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a read-only memory (ROM), or a random access memory (RAM).
The embodiments above are merely intended to describe the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments or make equivalent replacements to some of the technical features therein, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (26)

  1. A queue management method, applied in a server system using the Non-Volatile Memory Express (NVMe) protocol, wherein the server system comprises a host and an NVMe solid state disk (SSD) controller, and the method comprises:
    monitoring the occupancy of NVMe queues of the host, wherein the NVMe queues comprise a submission queue (SQ) or a completion queue (CQ), the SQ being used to deliver I/O requests of the host to the NVMe SSD controller, and the CQ being used to feed back responses of the NVMe SSD controller to the I/O requests to the host; and
    when the occupancy of the NVMe queues is greater than or equal to a preset upper threshold, increasing the number of NVMe queues, and processing the host's I/O data through the added NVMe queues.
  2. The method according to claim 1, wherein the method further comprises:
    when the occupancy of the NVMe queues is less than or equal to a preset lower threshold, reducing the number of NVMe queues.
  3. The method according to claim 1 or 2, wherein the NVMe queues currently comprise M SQs, M being an integer greater than 0, and the preset upper threshold is a first preset threshold;
    the increasing the number of NVMe queues when the occupancy of the NVMe queues is greater than or equal to the preset upper threshold, and processing the host's I/O data through the added NVMe queues, comprises:
    when the average occupancy of the M SQs is greater than or equal to the first preset threshold, adding at least one SQ, and delivering the host's I/O requests to the NVMe SSD controller through the added at least one SQ.
  4. The method according to claim 3, wherein the method further comprises:
    binding the added at least one SQ to an existing CQ.
  5. The method according to claim 2, wherein the NVMe queues comprise M SQs, M being an integer greater than 0, and the preset lower threshold is a second preset threshold;
    the reducing the number of NVMe queues when the occupancy of the NVMe queues is less than or equal to the preset lower threshold comprises:
    when the average occupancy of the M SQs is less than or equal to the second preset threshold, deleting at least one SQ.
  6. The method according to claim 5, wherein before the deleting at least one SQ, the method further comprises:
    waiting for the occupancy of the at least one SQ to be deleted to drop to 0.
  7. The method according to any one of claims 3 to 6, wherein the method further comprises:
    when the occupancy of any one of the M SQs is greater than or equal to a third preset threshold, prohibiting delivery of I/O requests through that SQ.
  8. The method according to claim 7, wherein the method further comprises:
    when the occupancy of the SQ prohibited from delivering I/O requests is less than or equal to a fourth preset threshold, resuming delivery of I/O requests through that SQ.
  9. The method according to any one of claims 1 to 8, wherein the NVMe queues comprise N CQs, N being an integer greater than 0, and the preset upper threshold is a fifth preset threshold;
    the increasing the number of NVMe queues when the occupancy of the NVMe queues is greater than or equal to the preset upper threshold, and processing the host's I/O data through the added NVMe queues, comprises:
    when the average occupancy of the N CQs is greater than or equal to the fifth preset threshold, adding at least one CQ, and feeding back the responses to the I/O requests to the host through the added at least one CQ.
  10. The method according to claim 2, wherein the NVMe queues comprise N CQs, N being an integer greater than 0, and the preset lower threshold is a sixth preset threshold; and the reducing the number of NVMe queues when the occupancy of the NVMe queues is less than or equal to the preset lower threshold comprises:
    when the average occupancy of the N CQs is less than or equal to the sixth preset threshold, deleting at least one CQ.
  11. The method according to claim 10, wherein before the deleting at least one CQ, the method further comprises:
    waiting for the occupancy of the at least one CQ to be deleted to drop to 0.
  12. The method according to claim 10 or 11, wherein the method further comprises:
    deleting all SQs bound to the at least one CQ being deleted, and waiting for the occupancy of all those SQs to drop to 0 before deleting them.
  13. The method according to any one of claims 1 to 12, wherein the host currently comprises M SQs and N CQs, each of the M SQs having established a binding relationship with one of the N CQs, M and N both being positive integers, and M being greater than or equal to N; and the method further comprises:
    receiving an I/O request of the host; and
    selecting, according to a preset rule, one target SQ from the M SQs for delivery of the I/O request, and feeding back the response to the I/O request through the CQ bound to the target SQ, wherein the preset rule comprises a round-robin rule or a lowest-occupancy-first rule.
  14. A queue management apparatus, applied in a server system using the Non-Volatile Memory Express (NVMe) protocol, wherein the server system comprises a host and an NVMe solid state disk (SSD) controller, and the apparatus comprises a storage unit and a processing unit;
    wherein the storage unit is configured to store program code, and the processing unit is configured to call the program code stored in the storage unit to perform the following steps:
    monitoring the occupancy of NVMe queues of the host, wherein the NVMe queues comprise a submission queue (SQ) or a completion queue (CQ), the SQ being used to deliver I/O requests of the host to the NVMe SSD controller, and the CQ being used to feed back responses of the NVMe SSD controller to the I/O requests to the host; and
    when the occupancy of the NVMe queues is greater than or equal to a preset upper threshold, increasing the number of NVMe queues, and processing the host's I/O data through the added NVMe queues.
  15. The apparatus according to claim 14, wherein the processing unit is further configured to:
    reduce the number of NVMe queues when the occupancy of the NVMe queues is less than or equal to a preset lower threshold.
  16. The apparatus according to claim 14 or 15, wherein the NVMe queues currently comprise M SQs, M being an integer greater than 0, and the preset upper threshold is a first preset threshold;
    the processing unit is configured to, when the occupancy of the NVMe queues is greater than or equal to the preset upper threshold, increase the number of NVMe queues and process the host's I/O data through the added NVMe queues, specifically by:
    when the average occupancy of the M SQs is greater than or equal to the first preset threshold, adding at least one SQ, and delivering the host's I/O requests to the NVMe SSD controller through the added at least one SQ.
  17. The apparatus according to claim 16, wherein the processing unit is further configured to:
    bind the added at least one SQ to an existing CQ.
  18. The apparatus according to claim 15, wherein the NVMe queues comprise M SQs, M being an integer greater than 0, and the preset lower threshold is a second preset threshold;
    the processing unit is configured to reduce the number of NVMe queues when the occupancy of the NVMe queues is less than or equal to the preset lower threshold, specifically by:
    when the average occupancy of the M SQs is less than or equal to the second preset threshold, deleting at least one SQ.
  19. The apparatus according to claim 18, wherein before deleting the at least one SQ, the processing unit is further configured to:
    wait for the occupancy of the at least one SQ to be deleted to drop to 0.
  20. The apparatus according to any one of claims 16 to 19, wherein the processing unit is further configured to:
    when the occupancy of any one of the M SQs is greater than or equal to a third preset threshold, prohibit delivery of I/O requests through that SQ.
  21. The apparatus according to claim 20, wherein the processing unit is further configured to:
    when the occupancy of the SQ prohibited from delivering I/O requests is less than or equal to a fourth preset threshold, resume delivery of I/O requests through that SQ.
  22. The apparatus according to any one of claims 14 to 21, wherein the NVMe queues comprise N CQs, N being an integer greater than 0, and the preset upper threshold is a fifth preset threshold;
    the processing unit is configured to, when the occupancy of the NVMe queues is greater than or equal to the preset upper threshold, increase the number of NVMe queues and process the host's I/O data through the added NVMe queues, specifically by:
    when the average occupancy of the N CQs is greater than or equal to the fifth preset threshold, adding at least one CQ, and feeding back the responses to the I/O requests to the host through the added at least one CQ.
  23. The apparatus according to claim 15, wherein the NVMe queues comprise N CQs, N being an integer greater than 0, and the preset lower threshold is a sixth preset threshold; and the processing unit is configured to reduce the number of NVMe queues when the occupancy of the NVMe queues is less than or equal to the preset lower threshold, specifically by:
    when the average occupancy of the N CQs is less than or equal to the sixth preset threshold, deleting at least one CQ.
  24. The apparatus according to claim 23, wherein before deleting the at least one CQ, the processing unit is further specifically configured to:
    wait for the occupancy of the at least one CQ to be deleted to drop to 0.
  25. The apparatus according to claim 23 or 24, wherein the processing unit is further configured to:
    delete all SQs bound to the at least one CQ being deleted, and wait for the occupancy of all those SQs to drop to 0 before deleting them.
  26. The apparatus according to any one of claims 14 to 25, wherein the apparatus further comprises an input unit; the host currently comprises M SQs and N CQs, each of the M SQs having established a binding relationship with one of the N CQs, M and N both being positive integers, and M being greater than or equal to N; and the processing unit is further configured to:
    receive an I/O request of the host through the input unit; and
    select, according to a preset rule, one target SQ from the M SQs for delivery of the I/O request, and feed back the response to the I/O request through the CQ bound to the target SQ, wherein the preset rule comprises a round-robin rule or a lowest-occupancy-first rule.
PCT/CN2017/092817 2016-09-14 2017-07-13 Queue management method and apparatus WO2018049899A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610824804.2 2016-09-14
CN201610824804.2A CN107818056B (zh) 2016-09-14 2016-09-14 Queue management method and apparatus

Publications (1)

Publication Number Publication Date
WO2018049899A1 true WO2018049899A1 (zh) 2018-03-22

Family

ID=61600852

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/092817 WO2018049899A1 (zh) 2016-09-14 2017-07-13 Queue management method and apparatus

Country Status (2)

Country Link
CN (1) CN107818056B (zh)
WO (1) WO2018049899A1 (zh)


Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108549610B * 2018-03-27 2021-02-23 Shenzhen UnionMemory Information System Co., Ltd. NVMe extension implementation method and solid state disk
CN108897491B * 2018-05-30 2021-07-23 Zhengzhou Yunhai Information Technology Co., Ltd. Fast access optimization method and system for heterogeneous hybrid memory
CN111277616B * 2018-12-04 2023-11-03 ZTE Corporation RDMA-based data transmission method and distributed shared memory system
US11216190B2 * 2019-06-10 2022-01-04 Samsung Electronics Co., Ltd. Systems and methods for I/O transmissions in queue pair-based NVMeoF initiator-target system
CN112463028B * 2020-10-29 2023-01-10 Suzhou Inspur Intelligent Technology Co., Ltd. I/O processing method, system, device, and computer-readable storage medium
CN114691026A * 2020-12-31 2022-07-01 Huawei Technologies Co., Ltd. Data access method and related device
CN114265797B * 2021-12-01 2024-02-27 Hangzhou Hikstorage Technology Co., Ltd. Storage access control apparatus, hard disk device, and method
CN116795298B * 2023-08-28 2023-11-24 Kylin Software Co., Ltd. I/O optimization method and system for NVMe storage under Linux

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020165761A1 (en) * 2001-03-29 2002-11-07 Fujitsu Limited Daily delivered articles order optimization support system, method, and storage medium therefor
CN102088395A * 2009-12-02 2011-06-08 Hangzhou H3C Technologies Co., Ltd. Method and apparatus for adjusting media data buffering
CN105892945A * 2016-03-30 2016-08-24 Lenovo (Beijing) Co., Ltd. Information updating method and electronic device

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7299324B2 (en) * 2003-11-05 2007-11-20 Denali Software, Inc. Reactive placement controller for interfacing with banked memory storage
US20060101469A1 (en) * 2004-11-10 2006-05-11 International Business Machines (Ibm) Corporation Method, controller, program product and services for managing resource element queues
WO2009002325A1 (en) * 2007-06-28 2008-12-31 Thomson Licensing Queue-based adaptive chunk scheduling for peer-to-peer live streaming
KR20100118271A * 2009-04-28 2010-11-05 Samsung Electronics Co., Ltd. Method and apparatus for preventing queue overflow to protect a hard disk drive in a computer system
CN101620618B * 2009-07-24 2011-11-30 ZTE Corporation Method and apparatus for maintaining data stored in memory
CN102377682B * 2011-12-12 2014-07-23 Xidian University Queue management method and device for storing variable-length packets in fixed-length units
CN102591715B * 2012-01-05 2014-02-12 Beihang University Implementation method for virtual machine network performance optimization using multi-queue technology
US10122645B2 * 2012-12-07 2018-11-06 Cisco Technology, Inc. Output queue latency behavior for input queue based device
CN104426790B * 2013-08-26 2019-02-26 ZTE Corporation Method and apparatus for allocation control of buffer space for multiple queues
CN104750543B * 2013-12-26 2018-06-15 Hangzhou Huawei Digital Technologies Co., Ltd. Thread creation method, service request processing method, and related device
CN103945548B * 2014-04-29 2018-12-14 Xidian University Resource allocation system and task/service scheduling method in a C-RAN network
US9304690B2 * 2014-05-07 2016-04-05 HGST Netherlands B.V. System and method for peer-to-peer PCIe storage transfers
CN104125166B * 2014-07-31 2018-05-29 Huawei Technologies Co., Ltd. Queue scheduling method and computing system
CN104407820B * 2014-12-12 2016-08-17 Huawei Technologies Co., Ltd. Data processing method, apparatus, and system based on a solid state disk storage system
KR102336443B1 * 2015-02-04 2021-12-08 Samsung Electronics Co., Ltd. Storage device and user device supporting virtualization function

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111831219A * 2019-04-19 2020-10-27 Hewlett Packard Enterprise Development LP Storage class memory queue depth threshold adjustment
US11030107B2 2019-04-19 2021-06-08 Hewlett Packard Enterprise Development Lp Storage class memory queue depth threshold adjustment
CN111208948A * 2020-01-13 2020-05-29 East China Normal University Request distribution method based on hybrid storage
CN111857579A * 2020-06-30 2020-10-30 Guangdong Inspur Big Data Research Co., Ltd. SSD disk controller reset method, system, apparatus, and readable storage medium
CN111857579B (zh) * 2020-06-30 2024-02-09 Guangdong Inspur Big Data Research Co., Ltd. SSD disk controller reset method, system, apparatus, and readable storage medium
CN111858011A * 2020-07-31 2020-10-30 Shenzhen Dapu Microelectronics Co., Ltd. Multi-data-stream task processing method, apparatus, device, and storage medium
WO2022043792A1 (en) * 2020-08-31 2022-03-03 International Business Machines Corporation Input/output queue hinting for resource utilization
US11604743B2 (en) 2020-08-31 2023-03-14 International Business Machines Corporation Input/output queue hinting for resource utilization
US11960417B2 (en) 2020-08-31 2024-04-16 International Business Machines Corporation Input/output queue hinting for resource utilization
US11599271B2 (en) 2021-02-12 2023-03-07 Hewlett Packard Enterprise Development Lp Controlling I/O Q-connections in NVMe devices

Also Published As

Publication number Publication date
CN107818056A (zh) 2018-03-20
CN107818056B (zh) 2021-09-07

Similar Documents

Publication Publication Date Title
WO2018049899A1 (zh) 2018-03-22 Queue management method and apparatus
WO2018076793A1 (zh) 2018-05-03 NVMe data read/write method and NVMe device
US8850090B2 (en) USB redirection for read transactions
US11290392B2 (en) Technologies for pooling accelerator over fabric
US9244881B2 (en) Facilitating, at least in part, by circuitry, accessing of at least one controller command interface
US20200151134A1 (en) Bandwidth limiting in solid state drives
US8433833B2 (en) Dynamic reassignment for I/O transfers using a completion queue
US8856407B2 (en) USB redirection for write streams
US10951741B2 (en) Computer device and method for reading or writing data by computer device
US10795608B2 (en) Computer, communication driver, and communication control method
WO2020000485A1 (zh) 2020-01-02 NVMe-based data writing method, apparatus, and system
CN110362517B (zh) 2023-04-18 Techniques for dynamically adjusting the manner in which I/O requests are transmitted between a computing device and a storage device
US9098431B2 (en) USB redirection for interrupt transactions
US20210089477A1 (en) Systems and methods for message tunneling
EP4044015A1 (en) Data processing method and apparatus
CN112463028B (zh) 2023-01-10 I/O processing method, system, device, and computer-readable storage medium
WO2022151766A1 (zh) 2022-07-21 I/O request pipeline processing device, method, system, and storage medium
CN112463027B (zh) 2022-11-29 I/O processing method, system, device, and computer-readable storage medium
EP4254207A1 (en) Data processing apparatus and method, and related device
EP3771164B1 (en) Technologies for providing adaptive polling of packet queues
US11733917B2 (en) High bandwidth controller memory buffer (CMB) for peer to peer data transfer
US20240168876A1 (en) Solving submission queue entry overflow using metadata or data pointers
EP3361710B1 (en) Technologies for endpoint congestion avoidance
US20240126713A1 (en) System of and method for input output throttling in a network
US11372554B1 (en) Cache management system and method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17850102

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17850102

Country of ref document: EP

Kind code of ref document: A1