CN108762682B - Thread model - Google Patents


Info

Publication number
CN108762682B
CN108762682B (application CN201810553970.2A)
Authority
CN
China
Prior art keywords
nvme
thread
ssd
model
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810553970.2A
Other languages
Chinese (zh)
Other versions
CN108762682A (en)
Inventor
刘斌 (Liu Bin)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhengzhou Yunhai Information Technology Co Ltd
Original Assignee
Zhengzhou Yunhai Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhengzhou Yunhai Information Technology Co Ltd filed Critical Zhengzhou Yunhai Information Technology Co Ltd
Priority to CN201810553970.2A priority Critical patent/CN108762682B/en
Publication of CN108762682A publication Critical patent/CN108762682A/en
Application granted granted Critical
Publication of CN108762682B publication Critical patent/CN108762682B/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0602Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/061Improving I/O performance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0655Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0668Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/0671In-line storage system
    • G06F3/0673Single storage device
    • G06F3/0679Non-volatile semiconductor memory device, e.g. flash memory, one time programmable memory [OTP]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/50Indexing scheme relating to G06F9/50
    • G06F2209/5018Thread allocation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Software Systems (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The embodiment of the application discloses a thread model, which is used for ensuring balanced distribution of IO to threads and improving the execution efficiency of a software IO stack, so that the IO performance of the whole storage system is improved. The thread model provided by the embodiment of the application comprises: a software IO stack, an NVME driver, and NVME SSDs; the software IO stack comprises an IO thread scheduler and N threads, wherein N is an integer greater than 1; the NVME driver comprises M × N pairs of submission queues (SQ)/completion queues (CQ), each pair of SQ/CQ is respectively bound with one thread, and M is an integer greater than 1; the NVME SSDs comprise M NVME SSDs, and each NVME SSD corresponds to N pairs of the SQ/CQ; the IO thread scheduler is used for distributing the IO acquired from the host to the threads in a balanced manner.

Description

Thread model
Technical Field
The application relates to the field of storage, in particular to a thread model.
Background
Solid-state storage media are replacing traditional disk media in data centers. This generation of flash storage has significant advantages over conventional magnetic-disk media in performance, power consumption, and rack density, which will make flash storage the dominant candidate for the next generation of the storage market.
Compared with traditional disk media, a solid state drive (SSD) offers far higher throughput and far lower latency, so the proportion of total transaction-processing time spent in the software part has grown greatly. In other words, the performance and efficiency of the software stack in a storage system have become more important. As storage media develop further, the existing software architecture will be unable to exploit their full capabilities, and storage media will continue to evolve rapidly over the next few years.
For the transport protocol used, an SSD that uses Non-Volatile Memory Express (NVME) protocol commands carried over the PCI Express (PCIE) transport protocol (hereinafter referred to as an NVME PCIE SSD) has great advantages over the original Serial ATA (SATA) SSDs and Serial Attached SCSI (SAS) SSDs, specifically including: 1. performance improved severalfold; 2. latency reduced by more than 50%; 3. an NVME PCIE SSD can provide ten times the read/write operations per second (IOPS) of high-end enterprise-level SATA SSDs and SAS SSDs; 4. automatic power-state switching and dynamic power management, which greatly reduce power consumption; 5. scalability to support technological development over the next decade; and so on.
The importance of the performance and efficiency of the software input/output (IO) stack in a storage system using NVME PCIE SSDs is therefore even more pronounced. It can be said that the design of the software IO stack directly determines the performance of a storage system using NVME PCIE SSDs. In a multi-core system (one with multiple central processing unit (CPU) cores), the most direct consideration for improving the execution efficiency of the software IO stack is thread-model design.
Disclosure of Invention
The embodiment of the application provides a thread model, which is used for ensuring balanced distribution of IO to threads and improving the execution efficiency of a software IO stack, so that the IO performance of the whole storage system is improved.
A first aspect of an embodiment of the present application provides a thread model, where the thread model includes: a software input/output (IO) stack, a Non-Volatile Memory Express (NVME) driver, and NVME solid state drives (SSDs); the software IO stack comprises an IO thread scheduler and N threads, wherein N is an integer greater than 1; the NVME driver comprises M × N pairs of submission queues (SQ)/completion queues (CQ), each pair of SQ/CQ is respectively bound with one thread, and M is an integer greater than 1; the NVME SSDs comprise M NVME SSDs, and each NVME SSD corresponds to N pairs of the SQ/CQ; the IO thread scheduler is used for distributing the IO acquired from the host to the threads in a balanced manner.
In a possible design, in a first implementation manner of the first aspect of the embodiment of the present application, the thread to which the IO is assigned is a target thread.
In a possible design, in a second implementation manner of the first aspect of the embodiment of the present application, the IO thread scheduler is specifically configured to allocate, according to a serial number of the IO, the IO to one of N threads corresponding to a target NVME SSD in a polling manner, where the target NVME SSD is an NVME SSD specified by the host.
In a possible design, in a third implementation manner of the first aspect of the embodiment of the present application, the thread is configured to issue the IO to a SQ corresponding to the target thread in the NVME drive.
In a possible design, in a fourth implementation manner of the first aspect of the embodiment of the present application, the SQ is configured to write an IO from the host, and issue the IO to the target NVME SSD in the NVME SSD.
In a possible design, in a fifth implementation manner of the first aspect of the embodiment of the present application, the target NVME SSD is configured to process the IO, obtain a processing result, and send the processing result to a CQ in the SQ/CQ.
In a possible design, in a sixth implementation manner of the first aspect of the embodiment of the present application, the CQ is configured to receive the processing result from the target NVME SSD, and send the processing result to the target thread corresponding to the CQ.
In a possible design, in a seventh implementation manner of the first aspect of the embodiment of the present application, the NVME driver is configured to take the processing result out of the CQ and send the processing result to the host.
In a possible design, in an eighth implementation manner of the first aspect of the embodiment of the present application, the thread model is a thread model of a multi-CPU core storage system.
In a possible design, in a ninth implementation manner of the first aspect of the embodiment of the present application, the multi-CPU core storage system is a system using NVME PCIE SSD.
Yet another aspect of the present application provides a computer-readable storage medium having stored therein instructions, which when executed on a computer, cause the computer to perform the method of the above-described aspects.
Yet another aspect of the present application provides a computer program product containing instructions which, when run on a computer, cause the computer to perform the method of the above-described aspects.
According to the technical scheme, the embodiment of the application has the following advantages:
the thread model provided by the application comprises: a software input/output (IO) stack, a Non-Volatile Memory Express (NVME) driver, and NVME solid state drives (SSDs); the software IO stack comprises an IO thread scheduler and N threads, wherein N is an integer greater than 1; the NVME driver comprises M × N pairs of submission queues (SQ)/completion queues (CQ), each pair of SQ/CQ is respectively bound with one thread, and M is an integer greater than 1; the NVME SSDs comprise M NVME SSDs, and each NVME SSD corresponds to N pairs of the SQ/CQ; the IO thread scheduler is used for distributing the IO acquired from the host to the threads in a balanced manner. The thread model in the application can ensure balanced distribution of IO to the threads and improves the execution efficiency of the software IO stack, thereby improving the IO performance of the whole storage system.
Drawings
FIG. 1 is a diagram of an embodiment of a thread model provided by an embodiment of the present application;
FIG. 2 is a schematic diagram of another embodiment of a thread model provided in an embodiment of the present application;
fig. 3 is a schematic diagram illustrating an embodiment of a possible computer-readable storage medium according to an embodiment of the present application.
Detailed Description
The embodiment of the application provides a thread model, which is used for ensuring balanced distribution of IO to threads and improving the execution efficiency of a software IO stack, so that the IO performance of the whole storage system is improved.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims of the present application and in the drawings described above, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It will be appreciated that the data so used may be interchanged under appropriate circumstances such that the embodiments described herein may be practiced otherwise than as specifically illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Referring to fig. 1, fig. 1 is a thread model according to an embodiment of the present disclosure, wherein the thread model includes: a software IO stack 101, an NVME driver 102, and an NVME SSD 103;
the software IO stack 101 includes an IO thread scheduler 1011 and N threads (e.g., TH0 1012, TH1 1013, TH2 1014, and TH3 1015 in the figure), where N is an integer greater than 1; N is taken as 4 in the embodiment of the present application for example;
the NVME driver 102 includes M × N pairs of submission queues (SQ)/completion queues (CQ), each pair of SQ/CQ being respectively bound to one of the threads, where M is an integer greater than 1; M is taken as 2 in the embodiment of the present application for example;
the NVME SSDs 103 include M NVME SSDs (e.g., NVME SSDs 1031 and 1032 in the figure), each of which corresponds to N pairs of the SQ/CQ;
as shown in FIG. 1, for NVME SSD1031, there are 4 SQ/CQ pairs, SQ/CQ1021, SQ/CQ1022, SQ/CQ1023, SQ/CQ1024, where SQ/CQ1021 binds TH01012, SQ/CQ1022 binds TH 11013, SQ/CQ1023 binds TH 21014, SQ/CQ1024 binds 31015.
For NVME SSD1032, there are also 4 SQ/CQ pairs, SQ/CQ1025, SQ/CQ1026, SQ/CQ1027, SQ/CQ1028 respectively, where SQ/CQ1025 binds TH01012, SQ/CQ1026 binds TH 11013, SQ/CQ1027 binds TH 21014, SQ/CQ1028 binds TH 31015.
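The binding layout above can be sketched as a small mapping; this is a minimal illustration under assumed names (build_bindings is not from the patent), not the driver's actual data structures. With M = 2 NVME SSDs and N = 4 threads, each SSD exposes N SQ/CQ pairs and pair n of every SSD is bound to thread THn:

```python
M, N = 2, 4  # M NVME SSDs, N IO threads (the embodiment's values)

def build_bindings(m, n):
    """Return {(ssd_index, pair_index): thread_index} for m SSDs,
    each with n SQ/CQ pairs, pair i bound to thread THi."""
    return {(ssd, pair): pair for ssd in range(m) for pair in range(n)}

bindings = build_bindings(M, N)

# Every SSD has one SQ/CQ pair per thread, so there are M * N pairs in total
assert len(bindings) == M * N
assert bindings[(0, 2)] == 2  # SQ/CQ pair 2 of SSD 0 is bound to TH2
assert bindings[(1, 3)] == 3  # SQ/CQ pair 3 of SSD 1 is bound to TH3
```

Because the pair index always equals the thread index, any thread can reach any SSD through a queue pair that it alone owns.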
The IO thread scheduler 1011 is configured to allocate IO obtained from the host to the thread in a balanced manner.
Optionally, the thread to which the IO is assigned is a target thread.
Optionally, the IO thread scheduler is specifically configured to allocate the IO to one of N threads corresponding to a target NVME SSD according to a serial number of the IO in a polling manner, where the target NVME SSD is an NVME SSD specified by the host.
The application also provides a polling calculation method for the IO thread scheduler. The formula is THx, where x = index % N, i.e., x is the remainder of dividing the IO sequence number by N. Since N is 4 in the embodiment of the application, x is the remainder of dividing the sequence number by 4: if the sequence number of an IO is 2, then x = 2 % 4 = 2, and the IO thread scheduler allocates the IO to TH2; if the sequence number is 13, then x = 13 % 4 = 1, and the IO thread scheduler allocates the IO to TH1.
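The polling formula THx, x = index % N, can be sketched as follows; the function name schedule is an illustrative assumption, not the patent's implementation:

```python
N = 4  # number of IO threads TH0..TH3 in the embodiment

def schedule(io_sequence_number, n_threads=N):
    """Return the index x of the thread THx that should handle this IO,
    per the polling formula x = sequence number modulo N."""
    return io_sequence_number % n_threads

# The two worked examples from the text:
assert schedule(2) == 2    # sequence number 2  -> TH2
assert schedule(13) == 1   # sequence number 13 -> TH1
```

Consecutive sequence numbers thus cycle through TH0, TH1, TH2, TH3, which is what guarantees the balanced distribution of IO across threads.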
Optionally, the thread is configured to issue the IO to an SQ corresponding to the target thread in the NVME drive.
Optionally, the SQ is configured to write an IO from the host, and issue the IO to the target NVME SSD of the NVME SSDs.
Optionally, the target NVME SSD is configured to process the IO, obtain a processing result, and send the processing result to a CQ in the SQ/CQ.
Optionally, the CQ is configured to receive the processing result from the target NVME SSD and send the processing result to the target thread corresponding to the CQ.
Optionally, the NVME driver is configured to take the processing result out of the CQ and send the processing result to the host.
Optionally, the thread model is a thread model of a multi-CPU core storage system.
Optionally, the multi-CPU core storage system is a system using NVME PCIE SSD.
Specifically, in the embodiment of the present application, the NVME protocol specifies two types of IO queue for an NVME SSD: one is called the SQ and the other the CQ. The host writes an IO command into the SQ; the NVME SSD then takes the IO command out of the SQ and executes it; after the IO command completes, the NVME SSD writes the completion result into the CQ; and the host takes the completion result out of the CQ and processes it. In the storage subsystem with 4 threads for NVME IO shown in FIG. 1, the NVME driver maintains 4 SQ/CQ pairs for each back-end NVME SSD, and each SQ/CQ pair is bound to one thread.
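The SQ/CQ command flow just described can be sketched as a pair of in-memory queues. This is a hedged illustration under assumed names (QueuePair, submit, device_execute, reap are not from the patent or the NVME specification), not the driver's actual ring-buffer implementation:

```python
from collections import deque

class QueuePair:
    """One SQ/CQ pair, bound to a single IO thread."""
    def __init__(self, thread_index):
        self.thread_index = thread_index
        self.sq = deque()  # submission queue: host -> device
        self.cq = deque()  # completion queue: device -> host

    def submit(self, command):
        # Host side: write the IO command into the SQ
        self.sq.append(command)

    def device_execute(self):
        # Device side: take a command out of the SQ, execute it,
        # and write the completion result into the CQ
        command = self.sq.popleft()
        self.cq.append(f"done:{command}")

    def reap(self):
        # Host side: take the completion result out of the CQ
        return self.cq.popleft()

qp = QueuePair(thread_index=0)
qp.submit("read-lba-0")
qp.device_execute()
assert qp.reap() == "done:read-lba-0"
```

Since each QueuePair is touched only by its bound thread on the host side, no lock is needed around the SQ or CQ.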
The thread model in the present application can be illustrated by the following steps:
1. The HOST end issues the first IO (IO sequence number 0) to the storage system (assume this IO is destined for NVME SSD_2);
2. After passing through the software IO stack in the storage system, the IO is scheduled by the IO thread scheduler onto the TH0 thread and then issued down to the NVME driver layer;
3. Since in step 1 we assumed this IO is destined for NVME SSD_2, the IO is written into SQ0, the SQ bound to TH0 for NVME SSD_2, and then waits to be taken out and executed by NVME SSD_2;
4. After NVME SSD_2 finishes executing the IO, it returns the execution result to CQ0, the CQ bound to TH0 for NVME SSD_2;
5. The NVME driver takes the execution result out of CQ0 and finally feeds it back to the HOST end through the TH0 thread;
6. The HOST end then issues a second IO (IO sequence number 1) to the storage system (assume this IO is destined for NVME SSD_1);
7. After passing through the software IO stack of the storage system, the IO is scheduled by the IO thread scheduler onto the TH1 thread and then issued down to the NVME driver layer;
8. Since in step 6 we assumed this IO is destined for NVME SSD_1, the IO is written into SQ1, the SQ bound to TH1 for NVME SSD_1, and then waits to be taken out and executed by NVME SSD_1;
9. After NVME SSD_1 finishes executing the IO, it returns the execution result to CQ1, the CQ bound to TH1 for NVME SSD_1;
10. The NVME driver takes the execution result out of CQ1 and finally feeds it back to the HOST end through the TH1 thread;
11. The HOST end then issues the third IO, the fourth IO, and so on, which are processed in the same way.
Of course, in practice these IOs execute concurrently.
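The numbered walkthrough above can be traced end to end in a few lines; the helper handle_io and its field names are illustrative assumptions, not the patent's code:

```python
N = 4  # IO threads TH0..TH3

def handle_io(seq, target_ssd):
    """Trace one IO: the scheduler picks thread THx with x = seq % N,
    and the IO uses the SQ/CQ pair of target_ssd bound to that thread."""
    thread = seq % N   # step 2/7: IO thread scheduler assigns the thread
    pair = thread      # step 3/8: pair index equals thread index on every SSD
    # Submission via the SQ, execution by the SSD, and completion via the CQ
    # all happen on this one thread, so no locking is required.
    return {"seq": seq, "ssd": target_ssd, "thread": thread, "sq_cq_pair": pair}

first = handle_io(0, "NVME SSD_2")   # steps 1-5
second = handle_io(1, "NVME SSD_1")  # steps 6-10

assert first["thread"] == 0 and first["sq_cq_pair"] == 0
assert second["thread"] == 1 and second["sq_cq_pair"] == 1
```

Each IO touches exactly one thread and the queue pair owned by that thread, which is the lock-free property the analysis below relies on.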
It can be seen from the above thread-model analysis that each IO is handled by a single thread throughout its life cycle, so the whole execution process needs no locking; and because of the IO thread scheduler's processing, IO is guaranteed to be allocated in a balanced manner to all threads available for NVME IO in the system, ensuring maximally efficient use of resources. Through this lock-free design and IO balancing, the software IO-stack performance of a multi-core storage system using NVME PCIE SSDs can be guaranteed to be optimal, thereby improving the IO performance of the whole storage system.
It should be noted that the thread scheduler in the present application ensures that IO is allocated in a balanced manner to all threads available for NVME IO in the system; and each NVME SSD has as many SQ/CQ pairs as there are IO threads available for NVME, with each SQ/CQ pair bound to one thread, so that each IO is processed by a single thread, realizing the lock-free design.
The thread model provided by the application comprises: a software input/output (IO) stack, a Non-Volatile Memory Express (NVME) driver, and NVME solid state drives (SSDs); the software IO stack comprises an IO thread scheduler and N threads, wherein N is an integer greater than 1; the NVME driver comprises M × N pairs of submission queues (SQ)/completion queues (CQ), each pair of SQ/CQ is respectively bound with one thread, and M is an integer greater than 1; the NVME SSDs comprise M NVME SSDs, and each NVME SSD corresponds to N pairs of the SQ/CQ; the IO thread scheduler is used for distributing the IO acquired from the host to the threads in a balanced manner. The thread model in the application can ensure balanced distribution of IO to the threads and improves the execution efficiency of the software IO stack, thereby improving the IO performance of the whole storage system.
Referring to fig. 2, the thread model in the embodiment of the present application is described in detail below from the perspective of hardware processing, and an embodiment of the thread model 200 in the embodiment of the present application includes:
an input device 201, an output device 202, a processor 203, and a memory 204 (the number of processors 203 may be one or more; one processor 203 is taken as an example in FIG. 2). In some embodiments of the present application, the input device 201, the output device 202, the processor 203, and the memory 204 may be connected by a bus or in other ways; connection by a bus is illustrated in FIG. 2.
Referring to fig. 3, fig. 3 is a schematic diagram of an embodiment of a computer-readable storage medium according to the present application.
As shown in fig. 3, the present embodiment provides a computer-readable storage medium 300, on which a computer program 311 is stored, the computer program 311 realizing the following steps when executed by a processor:
and allocating the IO to one thread of the N threads corresponding to the target NVME SSD according to the serial number of the IO in a polling mode.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product.
The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, they may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired means (e.g., coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless means (e.g., infrared, radio, or microwave). The computer-readable storage medium may be any available medium that a computer can access, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, or magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., solid state drive (SSD)), among others.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be substantially implemented or contributed to by the prior art, or all or part of the technical solution may be embodied in a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a read-only memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (10)

1. A threading model, the threading model comprising: a software input/output (IO) stack, a Non-Volatile Memory Express (NVME) driver, and NVME solid state drives (SSDs);
the software input/output IO stack comprises an IO thread scheduler and N threads, wherein N is an integer greater than 1;
the NVME driver comprises M × N pairs of submission queues (SQ)/completion queues (CQ), each pair of SQ/CQ is respectively bound with one thread, and M is an integer greater than 1;
the NVME SSD comprises M NVME SSDs, and each NVME SSD corresponds to N pairs of the SQ/CQ;
the IO thread scheduler is used for distributing the IO acquired from the host to the threads in a balanced manner.
2. The threading model of claim 1, wherein the thread to which the IO is assigned is a target thread.
3. The thread model of claim 2, wherein the IO thread scheduler is specifically configured to allocate the IO to one of N threads corresponding to a target NVME SSD according to a serial number of the IO in a polling manner, and the target NVME SSD is an NVME SSD specified by the host.
4. The threading model of claim 3, wherein the thread is configured to issue the IO to a SQ in the NVME driver corresponding to the target thread.
5. The threading model of claim 4, wherein the SQ is configured to write an IO from the host and issue the IO to the target one of the NVME SSDs.
6. The threading model of claim 5, wherein the target NVME SSD is configured to process the IO, obtain a processing result, and send the processing result to a CQ of the SQ/CQ.
7. The thread model of claim 6, wherein the CQ is configured to receive the processing result from the target NVME SSD and send the processing result to the target thread corresponding to the CQ.
8. The threading model of claim 7, wherein the NVME driver is configured to fetch the processing result from the CQ and send the processing result to the host.
9. The threading model of any of claims 1 to 8, wherein the threading model is a threading model of a multi-CPU core storage system.
10. The threading model of claim 9, wherein said multi-CPU core storage system is a system using NVME PCIE SSD.
CN201810553970.2A 2018-05-31 2018-05-31 Thread model Active CN108762682B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810553970.2A CN108762682B (en) 2018-05-31 2018-05-31 Thread model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810553970.2A CN108762682B (en) 2018-05-31 2018-05-31 Thread model

Publications (2)

Publication Number Publication Date
CN108762682A CN108762682A (en) 2018-11-06
CN108762682B true CN108762682B (en) 2021-06-29

Family

ID=64001826

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810553970.2A Active CN108762682B (en) 2018-05-31 2018-05-31 Thread model

Country Status (1)

Country Link
CN (1) CN108762682B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110888727B (en) * 2019-11-26 2022-07-22 北京达佳互联信息技术有限公司 Method, device and storage medium for realizing concurrent lock-free queue

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9720860B2 (en) * 2014-06-06 2017-08-01 Toshiba Corporation System and method for efficient processing of queued read commands in a memory system
WO2016101282A1 (en) * 2014-12-27 2016-06-30 华为技术有限公司 Method, device and system for processing i/o task
KR102430187B1 (en) * 2015-07-08 2022-08-05 삼성전자주식회사 METHOD FOR IMPLEMENTING RDMA NVMe DEVICE
US10379745B2 (en) * 2016-04-22 2019-08-13 Samsung Electronics Co., Ltd. Simultaneous kernel mode and user mode access to a device using the NVMe interface

Also Published As

Publication number Publication date
CN108762682A (en) 2018-11-06

Similar Documents

Publication Publication Date Title
US11132318B2 (en) Dynamic allocation of resources of a storage system utilizing single root input/output virtualization
US11698876B2 (en) Quality of service control of logical devices for a memory sub-system
US20200089537A1 (en) Apparatus and method for bandwidth allocation and quality of service management in a storage device shared by multiple tenants
US8402470B2 (en) Processor thread load balancing manager
US8966130B2 (en) Tag allocation for queued commands across multiple devices
US9223373B2 (en) Power arbitration for storage devices
US20220269434A1 (en) Utilization based dynamic shared buffer in data storage system
US11520715B2 (en) Dynamic allocation of storage resources based on connection type
KR20200065489A (en) Apparatus and method for daynamically allocating data paths in response to resource usage in data processing system
CN112017700A (en) Dynamic power management network for memory devices
US9104496B2 (en) Submitting operations to a shared resource based on busy-to-success ratios
US9201598B2 (en) Apparatus and method for sharing resources between storage devices
Koh et al. Faster than flash: An in-depth study of system challenges for emerging ultra-low latency SSDs
CN108762682B (en) Thread model
US10901733B1 (en) Open channel vector command execution
KR20150116627A (en) Controller and data storage device including the same
US20220391333A1 (en) Connection Virtualization for Data Storage Device Arrays
JP2019164510A (en) Storage system and IO processing control method
CN113076138B (en) NVMe command processing method, device and medium
US11030007B2 (en) Multi-constraint dynamic resource manager
US20220391136A1 (en) Managing Queue Limit Overflow for Data Storage Device Arrays
US9612748B2 (en) Volume extent allocation
KR20220135786A (en) Apparatus and method for scheduing operations performed in plural memory devices included in a memory system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant