CN111679795B - Lock-free concurrent IO processing method and device

Info

Publication number: CN111679795B (granted); published as CN111679795A
Application number: CN202010643795.3A
Authority: CN (China)
Original language: Chinese (zh)
Prior art keywords: request, thread, command, address area, storage
Inventors: 易正利, 吴忠杰
Assignee: Beijing Memblaze Technology Co Ltd (original assignee and filer)
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 Interfaces specially adapted for storage systems
    • G06F3/0602 Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/062 Securing storage systems
    • G06F3/0622 Securing storage systems in relation to access
    • G06F3/0628 Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0629 Configuration or reconfiguration of storage systems
    • G06F3/0662 Virtualisation aspects
    • G06F3/0665 Virtualisation aspects at area level, e.g. provisioning of virtual or logical volumes
    • G06F3/0668 Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/0671 In-line storage system
    • G06F3/0683 Plurality of storage devices
    • G06F3/0689 Disk arrays, e.g. RAID, JBOD

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

A lock-free method and apparatus for processing concurrent IO requests are provided. In the provided IO request processing method, the storage system includes a plurality of virtual storage disks, each virtual storage disk includes a plurality of logical address areas, and the logical addresses of the logical address areas do not overlap. The method includes: receiving a first IO request, wherein the first IO request accesses a first logical address area; determining a first thread according to the first logical address area, so that the first thread processes the first IO request; receiving a second IO request, wherein the second IO request accesses a second logical address area; and determining a second thread according to the second logical address area, so that the second thread processes the second IO request.

Description

Lock-free concurrent IO processing method and device
Technical Field
The present disclosure relates to storage system technologies, and in particular, to a method and an apparatus for processing an IO request of a storage system.
Background
A solid state drive (SSD, Solid State Drive) is built from semiconductor storage media and offers excellent read and write performance. However, despite this performance, concerns about data reliability and cost limit the adoption of SSDs. In the prior art, RAID (Redundant Array of Independent Disks) technology is used to ensure the reliability of SSD data while improving the utilization of the SSDs, thereby reducing cost.
However, RAID lengthens the IO path and adds computational overhead. To fully exploit the capabilities of multiple SSDs, multi-core, multi-CPU hardware is commonly employed: if the CPUs can process IO requests concurrently as much as possible, both data protection and high performance can be achieved. However, read requests, write requests, and rebuild requests that target the same stripe of the same RAID array are correlated, and they also depend on certain management and control requests of the RAID array; that is, shared resources require synchronized and mutually exclusive access.
Currently, there is no unified way to handle synchronization and mutual exclusion among related requests on a RAID array. Most implementations create a separate thread for each RAID array that handles the management and control requests of that array as well as its read, write, and rebuild requests. However, when a RAID system creates thousands of RAID arrays in a resource pool composed of SSD drives, this approach creates thousands of threads, resulting in a huge memory overhead. Moreover, the cost of thread scheduling and switching causes CPU efficiency to drop dramatically.
Disclosure of Invention
The present application aims to solve, at least to some extent, the technical problems in the related art described above.
According to a first aspect of the present application, there is provided a first IO request processing method of a storage system, the storage system including a plurality of virtual storage disks, the virtual storage disks including a plurality of logical address areas, the method comprising: receiving a first IO request, wherein the first IO request accesses a first logical address area; and determining a first thread according to the first logical address area, so that the first thread processes the first IO request.
According to the first IO request processing method of the first aspect of the present application, there is provided a second IO request processing method according to the first aspect, wherein the logical addresses of the plurality of logical address areas do not overlap with one another, and the method further comprises: receiving a second IO request, wherein the second IO request accesses a second logical address area; and determining a second thread according to the second logical address area, so that the second thread processes the second IO request.
According to the IO request processing methods of the first aspect of the present application, there is provided a third IO request processing method according to the first aspect, wherein determining the first thread according to the first logical address area and causing the first thread to process the first IO request comprises: generating a first IO command according to the first IO request, and filling the first IO command into a first queue corresponding to the first thread according to the first logical address area; the first thread fetches the first IO command from the first queue and processes it, wherein the first thread processes only commands of the first queue.
According to the IO request processing methods of the first aspect of the present application, there is provided a fourth IO request processing method according to the first aspect, wherein determining the second thread according to the second logical address area and causing the second thread to process the second IO request comprises: generating a second IO command according to the second IO request, and filling the second IO command into a second queue corresponding to the second thread according to the second logical address area; the second thread fetches the second IO command from the second queue and processes it, wherein the second thread processes only commands of the second queue.
According to the IO request processing methods of the first aspect of the present application, there is provided a fifth IO request processing method according to the first aspect, further comprising: taking the index of the first logical address area modulo the number of threads as the index of the first thread; or taking the index of the second logical address area modulo the number of threads as the index of the second thread.
According to the IO request processing methods of the first aspect of the present application, there is provided a sixth IO request processing method according to the first aspect, further comprising: computing a hash of the index of the first logical address area to obtain the index of the first thread; or computing a hash of the index of the second logical address area to obtain the index of the second thread.
According to the IO request processing methods of the first aspect of the present application, there is provided a seventh IO request processing method according to the first aspect, further comprising: taking the index of the first logical address area modulo the number of threads as the index of the first queue; or taking the index of the second logical address area modulo the number of threads as the index of the second queue.
According to the IO request processing methods of the first aspect of the present application, there is provided an eighth IO request processing method according to the first aspect, further comprising: computing a hash of the index of the first logical address area to obtain the index of the first queue; or computing a hash of the index of the second logical address area to obtain the index of the second queue.
According to the IO request processing methods of the first aspect of the present application, there is provided a ninth IO request processing method according to the first aspect, further comprising: determining a first mapping table entry according to the first logical address area; the first thread accesses the first mapping table entry, obtains a first storage object from the first mapping table entry according to the first logical address accessed by the first IO request, and accesses the first storage object to process the first IO request.
According to the IO request processing methods of the first aspect of the present application, there is provided a tenth IO request processing method according to the first aspect, further comprising: determining a second mapping table entry according to the second logical address area; the second thread accesses the second mapping table entry, obtains a second storage object from the second mapping table entry according to the second logical address accessed by the second IO request, and accesses the second storage object to process the second IO request.
According to the IO request processing methods of the first aspect of the present application, there is provided an eleventh IO request processing method according to the first aspect, further comprising: taking the index of the first logical address area modulo the number of mapping table entries as the index of the first mapping table entry; or computing a hash of the index of the first logical address area to obtain the index of the first mapping table entry.
According to the IO request processing methods of the first aspect of the present application, there is provided a twelfth IO request processing method according to the first aspect, further comprising: taking the index of the second logical address area modulo the number of mapping table entries as the index of the second mapping table entry; or computing a hash of the index of the second logical address area to obtain the index of the second mapping table entry.
According to the IO request processing methods of the first aspect of the present application, there is provided a thirteenth IO request processing method according to the first aspect, further comprising: if no storage object can be obtained from the first mapping table entry, then for a write request among the IO requests, creating a third storage object, recording the third storage object in the first mapping table entry, and writing data to the third storage object.
According to the IO request processing methods of the first aspect of the present application, there is provided a fourteenth IO request processing method according to the first aspect, further comprising: if no storage object can be obtained from the second mapping table entry, then for a write request among the IO requests, creating a fourth storage object, recording the fourth storage object in the second mapping table entry, and writing data to the fourth storage object.
According to a second aspect of the present application, there is provided a first IO request processing method of a storage system according to the second aspect, the storage system including a plurality of virtual storage disks, the virtual storage disks including a plurality of logical address areas, the method comprising: receiving a first IO request; generating a first IO command and a second IO command according to the first IO request, wherein the first IO command accesses a first logical address area and the second IO command accesses a second logical address area; determining a first thread according to the first logical address area, so that the first thread processes the first IO command; and determining a second thread according to the second logical address area, so that the second thread processes the second IO command.
According to the IO request processing method of the second aspect of the present application, there is provided a second IO request processing method according to the second aspect, further comprising: filling the first IO command into a first queue corresponding to the first thread according to the first logical address area, the first thread fetching the first IO command from the first queue and processing it, wherein the first thread processes only commands of the first queue; and filling the second IO command into a second queue corresponding to the second thread according to the second logical address area, the second thread fetching the second IO command from the second queue and processing it, wherein the second thread processes only commands of the second queue.
According to the IO request processing methods of the second aspect of the present application, there is provided a third IO request processing method according to the second aspect, further comprising: determining a first mapping table entry according to the first logical address area; the first thread accesses the first mapping table entry, obtains a first storage object from the first mapping table entry according to the first logical address accessed by the first IO command, and accesses the first storage object to process the first IO command.
According to the IO request processing methods of the second aspect of the present application, there is provided a fourth IO request processing method according to the second aspect, further comprising: determining a second mapping table entry according to the second logical address area; the second thread accesses the second mapping table entry, obtains a second storage object from the second mapping table entry according to the second logical address accessed by the second IO command, and accesses the second storage object to process the second IO command.
According to the IO request processing methods of the second aspect of the present application, there is provided a fifth IO request processing method according to the second aspect, further comprising: taking the index of the first logical address area modulo the number of mapping table entries as the index of the first mapping table entry; or computing a hash of the index of the first logical address area to obtain the index of the first mapping table entry.
According to the IO request processing methods of the second aspect of the present application, there is provided a sixth IO request processing method according to the second aspect, further comprising: taking the index of the second logical address area modulo the number of mapping table entries as the index of the second mapping table entry; or computing a hash of the index of the second logical address area to obtain the index of the second mapping table entry.
According to the IO request processing methods of the second aspect of the present application, there is provided a seventh IO request processing method according to the second aspect, wherein for a write request among the IO requests, a third storage object is created, the third storage object is recorded in the first mapping table entry, and data is written to the third storage object.
According to the IO request processing methods of the second aspect of the present application, there is provided an eighth IO request processing method according to the second aspect, wherein for a write request among the IO requests, a fourth storage object is created, the fourth storage object is recorded in the second mapping table entry, and data is written to the fourth storage object.
According to the IO request processing methods of the second aspect of the present application, there is provided a ninth IO request processing method according to the second aspect, wherein the number of mapping table entries is an integer multiple of the number of threads that process IO commands.
According to the IO request processing methods of the second aspect of the present application, there is provided a tenth IO request processing method according to the second aspect, further comprising: if no storage object can be obtained from the first mapping table entry, then for a read request among the IO requests, returning a result indicating that the read request is abnormal.
According to the IO request processing methods of the second aspect of the present application, there is provided an eleventh IO request processing method according to the second aspect, further comprising: if no storage object can be obtained from the second mapping table entry, then for a read request among the IO requests, returning a result indicating that the read request is abnormal.
According to the IO request processing methods of the second aspect of the present application, there is provided a twelfth IO request processing method according to the second aspect, wherein the first thread is executed only by a first CPU and the second thread is executed only by a second CPU.
According to a third aspect of the present application, there is provided a first IO request processing apparatus of a storage system according to the third aspect, the storage system including a plurality of virtual storage disks, the virtual storage disks including a plurality of logical address areas, the apparatus comprising: a first receiving module configured to receive a first IO request, wherein the first IO request accesses a first logical address area; and a first processing module configured to determine a first thread according to the first logical address area, so that the first thread processes the first IO request.
According to the IO request processing apparatus of the third aspect of the present application, there is provided a second IO request processing apparatus according to the third aspect, wherein the logical addresses of the plurality of logical address areas do not overlap with one another; the first receiving module is further configured to receive a second IO request, wherein the second IO request accesses a second logical address area; and the first processing module is further configured to determine a second thread according to the second logical address area, so that the second thread processes the second IO request.
According to the IO request processing apparatus of the third aspect of the present application, there is provided a third IO request processing apparatus according to the third aspect, wherein the first processing module is configured to generate a first IO command according to the first IO request and fill the first IO command into a first queue corresponding to the first thread according to the first logical address area; the first thread fetches the first IO command from the first queue and processes it, wherein the first thread processes only commands of the first queue.
According to the IO request processing apparatus of the third aspect of the present application, there is provided a fourth IO request processing apparatus according to the third aspect, wherein the first processing module is configured to generate a second IO command according to the second IO request and fill the second IO command into a second queue corresponding to the second thread according to the second logical address area; the second thread fetches the second IO command from the second queue and processes it, wherein the second thread processes only commands of the second queue.
According to the IO request processing apparatus of the third aspect of the present application, there is provided a fifth IO request processing apparatus according to the third aspect, further comprising: a first index calculation module configured to take the index of the first logical address area modulo the number of threads as the index of the first thread, or to take the index of the second logical address area modulo the number of threads as the index of the second thread.
According to the IO request processing apparatus of the third aspect of the present application, there is provided a sixth IO request processing apparatus according to the third aspect, further comprising: a second index calculation module configured to compute a hash of the index of the first logical address area to obtain the index of the first thread, or to compute a hash of the index of the second logical address area to obtain the index of the second thread.
According to the IO request processing apparatus of the third aspect of the present application, there is provided a seventh IO request processing apparatus according to the third aspect, further comprising: a third index calculation module configured to take the index of the first logical address area modulo the number of threads as the index of the first queue, or to take the index of the second logical address area modulo the number of threads as the index of the second queue.
According to the IO request processing apparatus of the third aspect of the present application, there is provided an eighth IO request processing apparatus according to the third aspect, further comprising: a fourth index calculation module configured to compute a hash of the index of the first logical address area to obtain the index of the first queue, or to compute a hash of the index of the second logical address area to obtain the index of the second queue.
According to the IO request processing apparatus of the third aspect of the present application, there is provided a ninth IO request processing apparatus according to the third aspect, further comprising: a first mapping table determining module configured to determine a first mapping table entry according to the first logical address area; the first processing module is further configured to access, through the first thread, the first mapping table entry, obtain a first storage object from the first mapping table entry according to the first logical address accessed by the first IO request, and access the first storage object to process the first IO request.
According to the IO request processing apparatus of the third aspect of the present application, there is provided a tenth IO request processing apparatus according to the third aspect, further comprising: a second mapping table determining module configured to determine a second mapping table entry according to the second logical address area; the first processing module is further configured to access, through the second thread, the second mapping table entry, obtain a second storage object from the second mapping table entry according to the second logical address accessed by the second IO request, and access the second storage object to process the second IO request.
According to the IO request processing apparatus of the third aspect of the present application, there is provided an eleventh IO request processing apparatus according to the third aspect, further comprising: a fifth index calculation module configured to take the index of the first logical address area modulo the number of mapping table entries as the index of the first mapping table entry, or to compute a hash of the index of the first logical address area to obtain the index of the first mapping table entry.
According to the IO request processing apparatus of the third aspect of the present application, there is provided a twelfth IO request processing apparatus according to the third aspect, further comprising: a sixth index calculation module configured to take the index of the second logical address area modulo the number of mapping table entries as the index of the second mapping table entry, or to compute a hash of the index of the second logical address area to obtain the index of the second mapping table entry.
According to the IO request processing apparatus of the third aspect of the present application, there is provided a thirteenth IO request processing apparatus according to the third aspect, wherein the first processing module is further configured to, when no storage object can be obtained from the first mapping table entry, create a third storage object for a write request among the IO requests, record the third storage object in the first mapping table entry, and write data to the third storage object.
According to the IO request processing apparatus of the third aspect of the present application, there is provided a fourteenth IO request processing apparatus according to the third aspect, wherein the first processing module is further configured to, when no storage object can be obtained from the second mapping table entry, create a fourth storage object for a write request among the IO requests, record the fourth storage object in the second mapping table entry, and write data to the fourth storage object.
According to a fourth aspect of the present application, there is provided a first IO request processing apparatus of a storage system according to the fourth aspect, the storage system including a plurality of virtual storage disks, the virtual storage disks including a plurality of logical address areas, the apparatus comprising: a second receiving module configured to receive a first IO request; and a second processing module configured to generate a first IO command and a second IO command according to the first IO request, wherein the first IO command accesses a first logical address area and the second IO command accesses a second logical address area, to determine a first thread according to the first logical address area so that the first thread processes the first IO command, and to determine a second thread according to the second logical address area so that the second thread processes the second IO command.
According to the IO request processing apparatus of the fourth aspect of the present application, there is provided a second IO request processing apparatus according to the fourth aspect, wherein the second processing module is configured to fill the first IO command into a first queue corresponding to the first thread according to the first logical address area, the first thread fetching the first IO command from the first queue and processing it, wherein the first thread processes only commands of the first queue; and to fill the second IO command into a second queue corresponding to the second thread according to the second logical address area, the second thread fetching the second IO command from the second queue and processing it, wherein the second thread processes only commands of the second queue.
According to the IO request processing apparatus of the fourth aspect of the present application, there is provided a third IO request processing apparatus according to the fourth aspect, further comprising: a third mapping module configured to determine a first mapping table entry according to the first logical address area; the second processing module is further configured to access, through the first thread, the first mapping table entry, obtain a first storage object from the first mapping table entry according to the first logical address accessed by the first IO command, and access the first storage object to process the first IO command.
According to the IO request processing apparatus of the fourth aspect of the present application, there is provided a fourth IO request processing apparatus according to the fourth aspect, further comprising: a fourth mapping module configured to determine a second mapping table entry according to the second logical address area; the second processing module is further configured to access, through the second thread, the second mapping table entry, obtain a second storage object from the second mapping table entry according to the second logical address accessed by the second IO command, and access the second storage object to process the second IO command.
According to the IO request processing apparatus of the fourth aspect of the present application, there is provided a fifth IO request processing apparatus according to the fourth aspect, further comprising: a seventh index calculation module configured to take the index of the first logical address area modulo the number of mapping table entries as the index of the first mapping table entry, or to compute a hash of the index of the first logical address area to obtain the index of the first mapping table entry.
According to the IO request processing apparatus of the fourth aspect of the present application, there is provided a sixth IO request processing apparatus according to the fourth aspect, further comprising: an eighth index calculation module configured to take the index of the second logical address area modulo the number of mapping table entries as the index of the second mapping table entry, or to compute a hash of the index of the second logical address area to obtain the index of the second mapping table entry.
According to the IO request processing apparatus of the fourth aspect of the present application, there is provided a seventh IO request processing apparatus according to the fourth aspect, wherein for a write request among the IO requests, a third storage object is created, the third storage object is recorded in the first mapping table entry, and data is written to the third storage object.
According to the IO request processing apparatus of the fourth aspect of the present application, there is provided an eighth IO request processing apparatus according to the fourth aspect, wherein for a write request among the IO requests, a fourth storage object is created, the fourth storage object is recorded in the second mapping table entry, and data is written to the fourth storage object.
According to the IO request processing apparatus of the fourth aspect of the present application, there is provided a ninth IO request processing apparatus according to the fourth aspect, wherein the number of mapping table entries is an integer multiple of the number of threads that process IO commands.
According to the IO request processing apparatus of the fourth aspect of the present application, there is provided a tenth IO request processing apparatus according to the fourth aspect, wherein the second processing module is further configured to, if no storage object can be obtained from the first mapping table entry, return, for a read request among the IO requests, a result indicating that the read request is abnormal.
According to the IO request processing apparatus of the fourth aspect of the present application, there is provided an eleventh IO request processing apparatus according to the fourth aspect, wherein the second processing module is further configured to, if no storage object can be obtained from the second mapping table entry, return, for a read request among the IO requests, a result indicating that the read request is abnormal.
According to the IO request processing apparatus of the fourth aspect of the present application, there is provided a twelfth IO request processing apparatus according to the fourth aspect, wherein the first thread is executed only by a first CPU and the second thread is executed only by a second CPU.
According to a fifth aspect of the present application, there is provided a computer program comprising computer program code which, when loaded into and executed on a computer system, causes the computer system to perform the IO request processing methods of a storage system provided according to the first and second aspects of the present application.
According to a sixth aspect of the present application, there is provided a program comprising program code which, when loaded into and executed on a storage system, causes the storage system to perform the IO request processing methods of the storage system provided according to the first and second aspects of the present application.
In the embodiments of the present application, the logical address space of a virtual storage disk is divided into a plurality of mutually non-overlapping areas, and the IO requests of each area are processed by a corresponding thread, so that no two threads process IO requests of the same area. The embodiments of the present application have the following advantages: on the premise of ensuring data reliability, all CPUs can run fully concurrently, fully exploiting the high performance of solid state drives; the linear scalability of system performance is ensured; and the system can be dynamically configured according to performance requirements and the demands on resources such as CPU and memory.
Additional aspects and advantages of the application will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the application.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings, wherein:
FIG. 1 illustrates an architecture of a storage system according to an embodiment of the present application;
FIG. 2 illustrates a structure of a memory object according to an embodiment of the present application;
FIG. 3 illustrates a structure of a storage object according to yet another embodiment of the present application;
FIG. 4 is a schematic diagram of a lock-free IO request processing model in accordance with an embodiment of the present application;
FIG. 5 is a schematic diagram of a mapping of logical address space of a virtual storage disk to storage objects according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a mapping of logical address space of a virtual storage disk to storage objects according to another embodiment of the present application;
FIG. 7 is a schematic diagram of logical address areas of a virtual storage disk according to an embodiment of the present application;
FIG. 8 is a schematic diagram of logical address areas of a virtual storage disk according to another embodiment of the present application;
FIG. 9 is a flow chart of IO request processing of a storage system according to an embodiment of the present application;
FIG. 10 is a flow diagram of distributing IO requests to IO processing threads in accordance with an embodiment of the present application; and
FIG. 11 is a flow chart of IO processing threads processing IO requests in accordance with an embodiment of the present application.
Detailed Description
Embodiments of the present application are described in detail below, the embodiments being illustrated in the accompanying drawings, wherein the same or similar reference numerals refer to the same or similar elements or elements having the same or similar functions throughout. The embodiments described below by referring to the drawings are exemplary only for the purpose of explaining the present application and are not to be construed as limiting the present application.
FIG. 1 illustrates the architecture of a storage system according to an embodiment of the present application. A storage system according to the present application includes a computer or server (collectively referred to as a host) and a plurality of storage devices (e.g., drives) coupled to the host. Preferably, the drives are solid state drives (SSDs). Optionally, embodiments according to the present application may also include disk drives.
The storage resources provided by the respective drives are maintained by a storage resource pool. The storage resource pool records the data blocks or chunks (Chunk) in each drive. By way of example, a chunk is a block of data of a predetermined size that is contiguous in the logical or physical space of a drive; its size may be, for example, hundreds of kilobytes (KB) or several megabytes (MB). Optionally, the storage resource pool records the data blocks or chunks of the respective drives that have not yet been allocated to a storage object; these are also referred to as free data blocks or free chunks. The storage resource pool is also a virtualization technique that turns the storage resources of the physical drives into data blocks or chunks for the upper layers to access or use. There may be multiple storage resource pools in a storage system, whereas in the example of FIG. 1 only a single storage resource pool is shown.
A resource allocation layer allocates storage objects to the storage object layer, where a storage object comprises a plurality of chunks. The allocator allocates chunks from the storage resource pool to create storage objects. A storage object according to the embodiments of the present application represents a part of the storage space of the storage system. The storage object is a storage unit with RAID functionality; its structure is described in detail later with reference to FIG. 2. The storage object layer provides a plurality of storage objects, and storage objects can be created and destroyed. When a storage object is created, the allocator obtains the required number of chunks from the storage resource pool, and these chunks constitute the storage object. A chunk may belong to only one storage object at a time; chunks that have been allocated to a storage object are not allocated to other storage objects. When a storage object is destroyed, the chunks that make up the storage object are released back into the storage resource pool and may be reallocated to other storage objects.
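Purely as an illustration (not taken from the patent), the following Java sketch shows how the chunk allocation and release around storage object creation and destruction might look; the names ChunkPool, Chunk, and StorageObject are assumptions:

    import java.util.ArrayDeque;
    import java.util.ArrayList;
    import java.util.Deque;
    import java.util.List;

    // Hypothetical sketch: a chunk pool from which storage objects borrow chunks.
    final class Chunk {
        final int driveId;
        final int chunkIndex;
        Chunk(int driveId, int chunkIndex) { this.driveId = driveId; this.chunkIndex = chunkIndex; }
    }

    final class StorageObject {
        final List<Chunk> chunks;
        StorageObject(List<Chunk> chunks) { this.chunks = chunks; }
    }

    final class ChunkPool {
        private final Deque<Chunk> freeChunks = new ArrayDeque<>();

        ChunkPool(List<Chunk> initial) { freeChunks.addAll(initial); }

        // Creating a storage object takes the required number of free chunks out of the pool;
        // a chunk therefore belongs to at most one storage object at a time.
        synchronized StorageObject createStorageObject(int chunkCount) {
            if (freeChunks.size() < chunkCount) throw new IllegalStateException("pool exhausted");
            List<Chunk> taken = new ArrayList<>();
            for (int i = 0; i < chunkCount; i++) taken.add(freeChunks.pop());
            return new StorageObject(taken);
        }

        // Destroying a storage object returns its chunks to the pool for reallocation.
        synchronized void destroyStorageObject(StorageObject obj) {
            freeChunks.addAll(obj.chunks);
        }
    }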
The storage system includes a plurality of virtual storage disks. A virtual storage disk provides an access interface to applications and serves requests from the outside. A virtual storage disk is composed of a plurality of storage objects, and an application can create multiple virtual storage disks with different attributes as needed. The virtual storage disk provides a logical address space that includes a plurality of logical address areas.
FIG. 2 shows the structure of a storage object according to an embodiment of the present application. The storage object includes a plurality of data blocks or chunks. In the example of FIG. 2, the storage object includes chunk 220, chunk 222, chunk 224, and chunk 226. The chunks that constitute a storage object come from different drives, and each drive provides at most one chunk to a given storage object. Referring to FIG. 2, chunk 220 is from drive 210, chunk 222 is from drive 212, chunk 224 is from drive 214, and chunk 226 is from drive 216. Thus, when a single drive fails, only one (or a few) of the storage object's chunks becomes inaccessible, and the data of the storage object can be reconstructed from its other chunks, meeting the data reliability requirement.
Data protection and high-performance access are provided for the storage object through RAID technology. Referring to FIG. 2, the storage object includes a plurality of RAID stripes (stripe 230, stripe 232, ..., stripe 238), each consisting of storage space from different chunks. The storage space that different chunks contribute to the same stripe may have the same or different address ranges. The stripe is the minimum write unit of the storage object, so performance is improved by writing data to multiple drives in parallel; read operations on the storage object are not limited in size. RAID is implemented within the stripe: of the storage space from the 4 chunks that make up stripe 230, 3 portions store user data while the remaining 1 portion stores parity data, so that data protection such as RAID 5 is provided on stripe 230.
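As an illustration only, a minimal Java sketch of the 3-data-plus-1-parity layout described for stripe 230, assuming XOR parity as in RAID 5 (the class and method names are not from the patent):

    // Minimal sketch: XOR parity over a 3-data + 1-parity stripe, as in RAID 5.
    final class StripeParity {
        // Computes the parity block as the XOR of the three data blocks.
        static byte[] computeParity(byte[] d0, byte[] d1, byte[] d2) {
            byte[] parity = new byte[d0.length];
            for (int i = 0; i < parity.length; i++) {
                parity[i] = (byte) (d0[i] ^ d1[i] ^ d2[i]);
            }
            return parity;
        }

        // Reconstructs a lost data block from the surviving data blocks and the parity.
        static byte[] reconstruct(byte[] survivorA, byte[] survivorB, byte[] parity) {
            byte[] lost = new byte[parity.length];
            for (int i = 0; i < lost.length; i++) {
                lost[i] = (byte) (survivorA[i] ^ survivorB[i] ^ parity[i]);
            }
            return lost;
        }
    }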
Optionally, metadata is also stored in each chunk. In the example of FIG. 2, the same metadata is stored on each of chunks 220-226, which ensures the reliability of the metadata: even if some of the chunks belonging to the same storage object fail, the metadata can still be obtained from the other chunks. The metadata records information such as the storage object to which the chunk belongs, the data protection level (RAID level) of that storage object, and the erase count of the chunk.
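Purely for illustration, the per-chunk metadata described above might be modeled as follows; the field names are assumptions, not taken from the patent:

    // Illustrative sketch of per-chunk metadata; the same record is replicated on every
    // chunk of the storage object so it survives the loss of individual chunks.
    final class ChunkMetadata {
        final long storageObjectId;   // storage object to which this chunk belongs
        final int raidLevel;          // data protection level (RAID level) of that storage object
        final long eraseCount;        // erase count of this chunk
        ChunkMetadata(long storageObjectId, int raidLevel, long eraseCount) {
            this.storageObjectId = storageObjectId;
            this.raidLevel = raidLevel;
            this.eraseCount = eraseCount;
        }
    }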
FIG. 3 shows the structure of a storage object according to still another embodiment of the present application. A storage object is created for a logical address area when a user writes to a logical address of the storage space for the first time; at that moment a resource allocation occurs to create the storage object, and the created storage object is associated with that logical address area of the storage space. The storage system includes a plurality of drives (see FIG. 3: drive 0, drive 1, drive 2, and drive 3), and the storage space of each drive is divided into fixed-size storage resources called chunks. Chunks are organized by a RAID algorithm into a data protection unit called a storage object. Referring to FIG. 3, drive 0 provides chunk 0 and chunk i, drive 1 provides chunk 1 and chunk j, drive 2 provides chunk 2 and chunk k, and drive 3 provides chunk 3 and chunk t. Two storage objects are illustrated in the embodiment of FIG. 3: storage object m is formed by drive 0, drive 1, and drive 2 each providing one chunk (chunk 0, chunk 1, and chunk 2), and storage object n is formed by drive 1, drive 2, and drive 3 each providing one chunk (chunk j, chunk k, and chunk t).
By way of example, in the embodiment of FIG. 3, the storage capacity of a chunk is on the order of megabytes (MB), while the storage capacity of a drive is on the order of terabytes (TB). Storage objects are therefore created frequently in the storage system, and by controlling the creation process of storage objects, the use of storage resources can be effectively controlled and global wear leveling (and reverse leveling) can be realized.
In the embodiments of the present application, when a storage object is created, the drives that provide chunks for the storage object can be selected randomly from the plurality of drives based on drive weights, which ensures balanced use of drive resources and realizes global wear leveling.
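A minimal sketch of such weight-based random drive selection, assuming the weight of each drive is supplied by the caller (for example derived from remaining life or free capacity; the class name and weighting are illustrative, not the patent's algorithm):

    import java.util.List;
    import java.util.Random;

    // Hypothetical sketch: pick a drive at random, weighted by a per-drive weight,
    // so that chunk allocation stays balanced across drives.
    final class WeightedDrivePicker {
        private final Random random = new Random();

        int pickDrive(List<Double> driveWeights) {
            double total = 0;
            for (double w : driveWeights) total += w;
            double r = random.nextDouble() * total;
            for (int i = 0; i < driveWeights.size(); i++) {
                r -= driveWeights.get(i);
                if (r <= 0) return i;
            }
            return driveWeights.size() - 1; // guard against floating-point rounding
        }
    }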
FIG. 4 is a schematic diagram of a lock-free IO request processing model according to an embodiment of the present application. As shown in FIG. 4, an application accesses the virtual storage disk. Several IO processing threads (IO handlers) are created for the storage resource pool according to the user's configuration, and these threads can run fully concurrently on multiple CPUs. IO requests issued to the virtual storage disks are distributed to specific IO processing threads according to the parameters of each request. By way of example, IO requests are distributed to different IO processing threads depending on the type of the IO request (read request or write request) and/or the logical address accessed by the IO request.
The IO processing thread is responsible for processing IO requests, such as read/write requests from applications and/or data reconstruction requests within the storage system. The main work of an IO processing thread includes: mutual exclusion and synchronization between IO requests, RAID encoding and decoding, dispatching IO requests to the underlying solid state drives, and handling the completion of returned requests. An IO processing thread may be a thread, a process, or another piece of code executing on a CPU. In the example of FIG. 4, each IO processing thread is bound to a CPU or CPU core, and that CPU or CPU core is dedicated to executing the IO processing thread bound to it, which reduces the extra cost of thread switching.
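As an illustration of this dispatch model only, the Java sketch below gives each IO handler its own queue and makes it the sole consumer of that queue; routing uses the request's logical address area. Binding a thread to a particular CPU core is platform specific and is omitted here, and all names (IoRequest, IoDispatcher, regionSize) are assumptions:

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    // Illustrative sketch of the dispatch model: each IO handler owns one queue and is the
    // only consumer of that queue, so handlers never contend on shared request state.
    final class IoRequest {
        final long lba;          // logical address accessed by the request
        final boolean isWrite;
        IoRequest(long lba, boolean isWrite) { this.lba = lba; this.isWrite = isWrite; }
    }

    final class IoDispatcher {
        private final long regionSize;                     // size of one logical address area
        private final BlockingQueue<IoRequest>[] queues;   // one queue per IO handler

        @SuppressWarnings("unchecked")
        IoDispatcher(int handlerCount, long regionSize) {
            this.regionSize = regionSize;
            this.queues = new BlockingQueue[handlerCount];
            for (int i = 0; i < handlerCount; i++) {
                queues[i] = new ArrayBlockingQueue<>(1024);
                final int handlerIndex = i;
                Thread t = new Thread(() -> handlerLoop(handlerIndex), "io-handler-" + i);
                t.setDaemon(true);
                t.start();
            }
        }

        // Route a request to the handler responsible for its logical address area.
        void submit(IoRequest request) throws InterruptedException {
            long regionIndex = request.lba / regionSize;
            int handlerIndex = (int) (regionIndex % queues.length);
            queues[handlerIndex].put(request);
        }

        // Each handler consumes only its own queue.
        private void handlerLoop(int handlerIndex) {
            try {
                while (true) {
                    IoRequest request = queues[handlerIndex].take();
                    process(request);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }

        private void process(IoRequest request) {
            // RAID encoding/decoding and submission to the drives would happen here.
        }
    }

Because each queue has exactly one consumer, ordering within a logical address area is preserved and no lock is needed between handlers, which mirrors the lock-free property described above.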
FIG. 5 is a schematic diagram of the mapping of the logical address space of a virtual storage disk to storage objects according to an embodiment of the present application. In FIG. 5, "Ct#K" denotes the storage object numbered K. The logical address space of virtual storage disk 0 runs from "LBA#0" to "LBA#N", where "LBA#0" denotes the start address 0 of the logical address space and "LBA#N" denotes its maximum address N. As shown in FIG. 5, in the logical address space of virtual storage disk 0, the unmapped logical address portions are indicated by shading, while the unshaded portions are the logical address portions to which storage objects have been mapped. A logical address portion of the virtual storage disk is mapped to a storage object so that accesses to that logical address portion are carried by the mapped storage object. For example, in FIG. 5, when an access to virtual storage disk 0 falls in the logical address portion mapped to storage object Ct#0, the access is completed by accessing storage object Ct#0. Initially, all logical addresses of virtual storage disk 0 are unmapped. When data is written to a logical address for the first time, a storage object is allocated and the written logical address is mapped to that storage object; logical addresses to which storage objects are mapped belong to the mapped region. When a logical address is read, the storage object mapped at that logical address is looked up, and the data to be read is obtained by accessing that storage object. Furthermore, each logical address portion is mapped to at most one storage object.
FIG. 6 is a schematic diagram of the mapping of the logical address space of a virtual storage disk to storage objects according to another embodiment of the present application. In FIG. 6, the storage objects have different sizes, so different storage objects are mapped to different numbers of logical addresses; by comparison, in FIG. 5 the storage objects all have the same size.
As shown in FIG. 6, some logical addresses of virtual storage disk 1 are mapped to storage objects of different sizes, while other logical addresses have not yet been mapped to storage objects. When a write operation is performed on a logical address of virtual storage disk 1, it is first checked whether that logical address has been mapped to a storage object. If the logical address is mapped to a storage object (for example, storage object Ct#2), the write operation is performed directly on storage object Ct#2; if no storage object is mapped to the logical address, a storage object is created and added to the address space mapping management of virtual storage disk 1, and then the write operation proceeds.
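A minimal sketch of the write path just described, assuming an ordered map from storage object start address to storage object; the fixed object size, alignment rule, and names such as createStorageObject are placeholders, not details given by the patent:

    import java.util.Map;
    import java.util.TreeMap;

    // Illustrative write path: if the target logical address is already covered by a storage
    // object, write to it; otherwise create a storage object, register it, then write.
    final class WritePath {
        // Ordered map from storage object start LBA to storage object (stands in for the
        // address space mapping management of the virtual storage disk).
        private final TreeMap<Long, MappedObject> mapping = new TreeMap<>();

        static final class MappedObject {
            final long startLba;
            final long length;
            MappedObject(long startLba, long length) { this.startLba = startLba; this.length = length; }
            void write(long lba, byte[] data) { /* issue stripe writes to the drives */ }
        }

        void write(long lba, byte[] data) {
            Map.Entry<Long, MappedObject> floor = mapping.floorEntry(lba);
            MappedObject target;
            if (floor != null && lba < floor.getValue().startLba + floor.getValue().length) {
                target = floor.getValue();              // logical address already mapped
            } else {
                target = createStorageObject(lba);      // first write: allocate and register
                mapping.put(target.startLba, target);
            }
            target.write(lba, data);
        }

        private MappedObject createStorageObject(long lba) {
            long objectSize = 4L * 1024 * 1024;             // assumed fixed object size
            long start = (lba / objectSize) * objectSize;   // align the object to its size
            return new MappedObject(start, objectSize);
        }
    }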
FIG. 7 is a schematic diagram of the logical address areas of a virtual storage disk according to an embodiment of the present application. As shown in FIG. 7, the logical address space of virtual storage disk 0 is divided into consecutive logical address areas of equal size; the logical address area numbered n is denoted "Re#n". In the embodiments of the present application, when an IO request is processed, it is distributed to one of the IO processing threads according to the logical address area it accesses. Each logical address area may correspond to one or more storage objects, and the logical addresses of different logical address areas do not overlap.
The plurality of logical address areas of a virtual storage disk have a fixed size, while the size of a storage object may be either fixed or variable. FIG. 8 is a schematic diagram of the logical address areas of a virtual storage disk according to another embodiment of the present application. As shown in FIG. 8, the storage objects have different sizes; the address space of the virtual storage disk is divided into a plurality of equally sized logical address areas, each of which corresponds to one or more storage objects. For example, referring to FIG. 8, logical address area Re#0 includes storage objects Ct#1 and Ct#2; the logical addresses of logical address area Re#1 have not yet been allocated a storage object; and in logical address area Re#3, part of the logical addresses has been allocated storage object Ct#K while another part has not yet been allocated a storage object.
FIG. 9 is a flowchart of IO request processing in a storage system according to an embodiment of the present application. By distributing IO requests that access different logical address areas to different IO processing threads, processing by multiple threads proceeds simultaneously. Since each thread processes the IO requests of different logical address areas, there is no access correlation between their read and write requests, so the threads do not affect each other and can run in parallel.
Referring to FIG. 9, to process an IO request of the storage system, a first IO request is received, wherein the first IO request accesses a first logical address area (910). A first thread is determined according to the first logical address area, and the first thread processes the first IO request (920).
As an example, the first logical address area is the logical address area indicated by "re#0" of fig. 7. The first thread may be an IO processing thread (T1) bound on CPU 1, such as shown in fig. 4.
For example, IO requests for the area "Re#0" (see FIG. 7) are assigned to thread T1 for processing, while IO requests for the area "Re#2" (see FIG. 7) are assigned to thread T2, the IO processing thread bound to CPU 2 (see fig. 4). In this way, read and write requests to different logical areas of the address space of the virtual storage disk are handled simultaneously by different threads.
Those skilled in the art will appreciate that there are a variety of ways to map a logical address region R to a thread T. For example, the number of the logical address area is taken modulo the number of IO processing threads that have been created; the result is the index of the IO processing thread, and the thread indicated by that index processes the IO requests of that logical address area. In another example, a hash is calculated over the number of the logical address region R, and the hash result is used as the index of the IO processing thread.
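As an illustration of the two mappings described above (modulo and hash), a small sketch follows; the function names and the choice of CRC32 as the hash are assumptions, not taken from the patent:

```python
import zlib

def thread_index_modulo(region_number: int, io_handler_num: int) -> int:
    """Modulo mapping: the area number modulo the number of IO processing threads."""
    return region_number % io_handler_num

def thread_index_hash(region_number: int, io_handler_num: int) -> int:
    """Hash mapping: hash the area number, then reduce it to a thread index."""
    digest = zlib.crc32(region_number.to_bytes(8, "little"))
    return digest % io_handler_num

# Both mappings are deterministic, so all IO for a given area always lands on the same thread.
```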
A storage object mapping table is provided to maintain the mapping from addresses of a virtual storage disk to storage objects. One storage object mapping table is provided for each virtual storage disk; the table is a shared resource of the entire virtual storage disk, and in embodiments according to the application multiple IO processing threads are allowed to access it concurrently and without locks.
The storage object mapping table consists of a plurality of mapping table entries. Each mapping table entry is a red-black tree whose nodes store the correspondence between the starting logical address of a storage object and that storage object; thus each mapping table entry records a number of "<storage object start logical address, storage object>" pairs. The mapping table entry into which a storage object is placed is obtained by taking the number of the logical address area containing the storage object modulo the number of mapping table entries.
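The layout described above might be sketched as follows, with a sorted list standing in for the red-black tree inside each mapping table entry; the class and method names are illustrative only:

```python
from bisect import bisect_right, insort

class MapEntry:
    """One mapping table entry: ordered (start address -> storage object) pairs.
    A sorted list plays the role of the red-black tree in the description."""
    def __init__(self):
        self._starts = []    # sorted storage-object start addresses
        self._objects = {}   # start address -> storage object

    def insert(self, start_address, storage_object):
        insort(self._starts, start_address)
        self._objects[start_address] = storage_object

    def lookup(self, logical_address):
        """Return the storage object whose start address is the largest one not above
        the given address, or None if the entry holds nothing at or below it."""
        i = bisect_right(self._starts, logical_address) - 1
        return self._objects[self._starts[i]] if i >= 0 else None

def map_entry_index(region_number: int, map_entry_num: int) -> int:
    """Entry that holds the storage objects of a given logical address area."""
    return region_number % map_entry_num
```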
In an embodiment according to the present application, an IO request to a logical address region R1 is processed by an IO processing thread T1, and the mapping table entry (referred to as M1) that records the mapping between the logical addresses of region R1 and storage objects is also handled by the IO processing thread T1. Meanwhile, IO requests to a logical address region R2 are processed by an IO processing thread T2, and the mapping table entry (referred to as M2) that records the mapping between the logical addresses of region R2 and storage objects is also handled by the IO processing thread T2. Thus IO processing threads T1 and T2 process IO requests that access different logical address regions and access different mapping table entries. In this way, multiple IO processing threads process their respective tasks concurrently, and no lock is needed to synchronize the threads with one another.
As an example, io_handler_num IO processing threads are provided for a storage resource pool of the storage system, and the storage object mapping table of the virtual storage disk 0 has map_entry_num mapping table entries. For the logical address area whose index number is region_index: the IO processing thread to which the logical address area belongs is indicated by the index ioh_index, where ioh_index = region_index % io_handler_num; the mapping table entry to which the logical address area belongs is indicated by the index map_index, where map_index = region_index % map_entry_num; and that mapping table entry is accessed by the IO processing thread numbered map_ioh_index, where map_ioh_index = map_index % io_handler_num.
Further, map_entry_num is chosen to be divisible by io_handler_num, which guarantees that for any IO request its corresponding ioh_index equals map_ioh_index.
Thus, for a virtual storage disk, the number of mapping table entries is set to an integer multiple of the number of IO processing threads allocated to the storage resource pool, so that each mapping table entry has exactly one IO processing thread as its visitor. In this case, when multiple IO processing threads access the storage object mapping table, they do not need locks to synchronize their operations, which realizes the lock-free design.
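The three index formulas and the divisibility condition can be checked with a short sketch; the concrete numbers (3 threads, 6 mapping table entries) are only illustrative:

```python
def ioh_index(region_index: int, io_handler_num: int) -> int:
    """Thread that processes IO for this logical address area."""
    return region_index % io_handler_num

def map_index(region_index: int, map_entry_num: int) -> int:
    """Mapping table entry that records this area's storage objects."""
    return region_index % map_entry_num

def map_ioh_index(region_index: int, map_entry_num: int, io_handler_num: int) -> int:
    """Thread that accesses the mapping table entry of this area."""
    return map_index(region_index, map_entry_num) % io_handler_num

# When map_entry_num is a multiple of io_handler_num, the thread that processes an IO
# command is always the thread that owns the corresponding mapping table entry.
io_handler_num, map_entry_num = 3, 6
assert map_entry_num % io_handler_num == 0
for region in range(64):
    assert ioh_index(region, io_handler_num) == map_ioh_index(region, map_entry_num, io_handler_num)
```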
In another example, the logical address space of the virtual storage disk 1 is divided into 12 areas, the storage resource pool that provides resources for the virtual storage disk 1 is served by 3 IO processing threads, and the storage object mapping table of the virtual storage disk includes 6 mapping table entries. The first 4 logical address areas (numbered 0/1/2/3) and the first 2 mapping table entries (numbered 0/1) are assigned to IO processing thread T0, the middle 4 logical address areas (numbered 4/5/6/7) and the mapping table entries numbered 2/3 are assigned to IO processing thread T1, and the last 4 logical address areas (numbered 8/9/10/11) and the mapping table entries numbered 4/5 are assigned to IO processing thread T2. This avoids two or more IO processing threads accessing the same mapping table entry, and likewise avoids two or more IO processing threads accessing the same logical address area.
One storage resource pool may support multiple virtual storage disks. In embodiments according to the present application, IO processing threads are provided per storage resource pool, and the IO processing threads of a storage resource pool may be shared among the multiple virtual storage disks that the pool supports, so that a single IO processing thread processes IO requests that access different virtual storage disks. Further, multiple storage resource pools may be provided in the storage system.
Preferably, the number of IO processing threads provided for a storage system or a storage resource pool is adjustable, but does not exceed the number of CPUs or CPU cores in the storage system.
Processing of a virtual storage disk IO request can be divided into two stages: in the first stage, an IO request is received and distributed to the corresponding IO processing thread; in the second stage, the IO processing thread processes the IO request and sends it to the storage device or the driver of the storage device. Each stage is described below. FIG. 10 is a flowchart of distributing IO requests to IO processing threads according to an embodiment of the present application, corresponding to the first stage; FIG. 11 is a flowchart of an IO processing thread processing IO requests according to an embodiment of the present application, corresponding to the second stage.
As shown in FIG. 10, an IO request issued by a user accessing the virtual storage disk is received (1010), and it is determined from the logical addresses of the IO request whether the request spans multiple logical address areas (1020). Whether multiple logical address areas are spanned can be determined from the starting logical address and the access length of the IO request. If the IO request accesses only one logical address area, a single IO command is created (1030); the response to that IO command serves as the response to the IO request. The number of the logical address area accessed by the IO command is then calculated, for example from the starting logical address accessed by the IO request. If the logical address range accessed by the IO request spans multiple logical address areas, multiple IO commands are created for the IO request, one IO command for each logical address area accessed (1040). For example, if the IO request accesses the logical address areas Re#2 and Re#3 (see fig. 7), an IO command C1 is generated for the logical address area Re#2 and an IO command C2 is generated for the logical address area Re#3, and the responses to the IO commands C1 and C2 are combined into the response to the IO request.
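A sketch of this splitting step follows, reusing the REGION_SIZE and region_index() assumptions from the earlier sketch; the IOCommand fields are illustrative, not taken from the patent:

```python
from dataclasses import dataclass

REGION_SIZE = 1 << 30                 # same assumed area size as the earlier sketch
def region_index(addr: int) -> int:   # same helper as the earlier sketch
    return addr // REGION_SIZE

@dataclass
class IOCommand:
    region: int      # logical address area number targeted by this command
    start: int       # starting logical address of this command
    length: int      # number of bytes covered by this command
    is_write: bool

def split_request(start: int, length: int, is_write: bool) -> list[IOCommand]:
    """Create one IO command per logical address area touched by the request (steps 1030/1040)."""
    commands = []
    offset, end = start, start + length
    while offset < end:
        region = region_index(offset)
        chunk_end = min(end, (region + 1) * REGION_SIZE)
        commands.append(IOCommand(region, offset, chunk_end - offset, is_write))
        offset = chunk_end
    return commands
```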
An IO processing thread for handling the IO command is then determined from the number of the logical address area corresponding to the command, and the IO command is distributed to the command queue of that IO processing thread (1050). In embodiments according to the present application, one command queue is provided for each IO processing thread; IO commands are held in entries of the command queue, and the IO processing thread takes IO commands out of its own command queue for processing. In step 1050, the number of the logical address area corresponding to the IO command is hashed, or taken modulo the number of IO processing threads, to obtain the index of the IO processing thread that will process the command; the IO processing thread is obtained from the index, and the IO command is filled into that thread's command queue. Alternatively, the same hash or modulo result may be used directly as the index of a command queue, and the IO command is inserted into that queue.
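Continuing the sketch, a dispatcher that fills each IO command into a per-thread command queue; a plain deque stands in for whatever queue structure a real implementation would use for its producer/consumer arrangement:

```python
from collections import deque

class Dispatcher:
    """Distributes IO commands to per-thread command queues (step 1050)."""
    def __init__(self, io_handler_num: int):
        self.io_handler_num = io_handler_num
        self.command_queues = [deque() for _ in range(io_handler_num)]

    def dispatch(self, command) -> None:
        # Modulo mapping of the area number to a thread index; a hash works as well.
        index = command.region % self.io_handler_num
        self.command_queues[index].append(command)
```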
In the second stage, the IO processing thread processes the IO command and sends it to the storage device or the driver of the storage device. As shown in FIG. 11, the IO processing thread takes an IO command from its command queue (1110). According to the logical address of the IO command, the storage object mapping table is looked up to determine whether a storage object corresponding to that logical address exists (1120). If the storage object mapping table indicates that the logical address accessed by the IO command has already been mapped to a storage object, that storage object is accessed and read or written according to the IO command (1150). When the IO processing thread looks up the storage object mapping table, it determines the mapping table entry to access from the number of the logical address area accessed by the IO command, and then searches that entry for the storage object to which the logical address of the IO command is mapped. In the embodiments of the application, no two IO processing threads access the same mapping table entry, so an IO processing thread does not need to lock the storage object mapping table when accessing it, which improves processing efficiency.
If the storage object mapping table indicates that the logical address accessed by the IO command has not yet been mapped to a storage object, it is further determined whether the IO command is a read command or a write command (1130). If the IO command is a read command, reading a logical address to which no storage object has been allocated is illegal, and a specified value is used as the response to the IO command (1160); for example, a result of all zeros is returned, or the return value indicates that the IO command accessed an illegal or not-yet-allocated logical address. If the IO command is a write command addressed to a logical address with no allocated storage object, a storage object is first created and inserted into the storage object mapping table, so that the mapping between the logical address and the created storage object is recorded in an entry of the table (1140); the storage object is then accessed and the data is written into it according to the IO command (1150).
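The second-stage handling of a single IO command might be sketched as follows, reusing the IOCommand and MapEntry types from the earlier sketches; the storage-object placeholder and the device-access comments are assumptions, not the patent's implementation:

```python
def process_command(command, entries):
    """Second-stage handling of one IO command by its owning thread (sketch of fig. 11)."""
    entry = entries[command.region % len(entries)]       # map_index = region_index % map_entry_num
    storage_object = entry.lookup(command.start)
    if command.is_write:
        if storage_object is None:
            storage_object = {"start": command.start}    # placeholder for a newly created storage object
            entry.insert(command.start, storage_object)  # record the mapping (step 1140)
        # Step 1150: a real system would now issue the write to the storage device or its driver.
        return None
    if storage_object is None:
        return bytes(command.length)                     # unallocated read: all-zero response (step 1160)
    # Step 1150: a real system would now issue the read to the storage device or its driver.
    return bytes(command.length)
```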
It will be apparent to those skilled in the art that the steps shown in fig. 11 may be performed in other orders. For example, after the IO processing thread takes the IO command from its command queue, it may first check whether the command is a read command or a write command, and then check whether the logical address of the command has been allocated a storage object. For a read command, if the logical address has been allocated a storage object, data is read from the storage object; if not, a preset value is used as the read result, or an indication that an illegal address was read is returned. For a write command, if the logical address has been allocated a storage object, the data is written into that storage object; if not, a new storage object is allocated, the mapping between the logical address and the newly allocated storage object is established, and the data is written into the newly allocated storage object as the response to the IO command.
In summary, in the embodiments of the present application, the logical address space of the virtual storage disk is divided into a plurality of logical address areas that do not overlap, and the IO requests for a given logical address area are processed by a single corresponding thread. No two threads process IO requests of the same logical address area, and no two threads access the same mapping table entry. Resource accesses by different threads therefore never conflict, and no lock or other mechanism is needed to synchronize critical resources between threads, which simplifies processing and improves parallelism between threads.
For a storage system comprising a plurality of solid-state drives, the lock-free IO processing method provided by the application ensures data reliability while exploiting the concurrent processing capability of multiple CPU cores/CPUs, so that the high performance of the solid-state drives can be fully realized. Performance scales linearly with the number of CPU cores/CPUs, meeting customer requirements for both data reliability and performance, and the method can be dynamically configured according to performance requirements and the demand on resources such as CPU and memory.
It should be noted that the embodiments of the present application do not by themselves guarantee the absence of conflicts between multiple IO requests on the same stripe inside a storage object. Read requests on the same stripe inside a storage object can be executed concurrently; however, write requests must be synchronized or executed serially with other write requests and with rebuild requests to guarantee data correctness. Synchronization between multiple IO requests within the same storage object is handled by the IO processing thread that owns that object.
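Because each storage object is owned by exactly one IO processing thread, this intra-object ordering can be done with plain per-thread bookkeeping rather than locks. One possible sketch, with hypothetical stripe tracking (the stripe identifiers, pending lists, and issue callback are not described in the patent):

```python
from collections import defaultdict, deque

class StripeSerializer:
    """Per-thread bookkeeping that lets reads of a stripe run concurrently while
    writes and rebuilds on that stripe are executed one at a time."""
    def __init__(self):
        self.exclusive_busy = set()        # stripes with a write/rebuild in flight
        self.waiting = defaultdict(deque)  # stripe -> exclusive commands waiting their turn

    def submit(self, stripe, command, is_exclusive, issue):
        if is_exclusive and stripe in self.exclusive_busy:
            self.waiting[stripe].append((command, is_exclusive))
            return
        if is_exclusive:
            self.exclusive_busy.add(stripe)
        issue(command)                     # hand the command to the device or its driver

    def complete(self, stripe, was_exclusive, issue):
        if was_exclusive:
            self.exclusive_busy.discard(stripe)
            if self.waiting[stripe]:
                command, is_exclusive = self.waiting[stripe].popleft()
                self.submit(stripe, command, is_exclusive, issue)
```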
The present embodiments also provide a program comprising program code which, when loaded into and executed in a CPU, causes the CPU to perform one of the methods provided above according to the embodiments of the present application.
The present embodiments also provide a program comprising program code which, when loaded into and executed on a host, causes a processor of the host to perform one of the methods provided above according to the embodiments of the present application.
It will be understood that each block of the block diagrams and flowchart illustrations, and combinations of blocks in the block diagrams and flowchart illustrations, respectively, can be implemented by various means including computer program instructions.
These computer program instructions may be loaded onto a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions which execute on the computer or other programmable data processing apparatus create means for implementing the functions specified in the flowchart block or blocks. These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including computer-readable instructions for implementing the function specified in the flowchart block or blocks. The computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process, such that the instructions which execute on the computer or other programmable apparatus provide operations for implementing the functions specified in the flowchart block or blocks.
Accordingly, blocks of the block diagrams and flowchart illustrations support combinations of means for performing the specified functions, combinations of operations for performing the specified functions and program instruction means for performing the specified functions. It will also be understood that each block of the block diagrams and flowchart illustrations, and combinations of blocks in the block diagrams and flowchart illustrations, can be implemented by special purpose hardware-based computer systems that perform the specified functions or operations, or combinations of special purpose hardware and computer instructions.
In the description of the present specification, a description referring to the terms "one embodiment," "some embodiments," "examples," "specific examples," "some examples," and the like means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. In addition, the different embodiments or examples described in this specification, and the features of those different embodiments or examples, may be combined by those skilled in the art without contradiction.
Although embodiments of the present application have been shown and described above, it will be understood that the above embodiments are illustrative and not to be construed as limiting the application, and that variations, modifications, substitutions, and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the application.

Claims (15)

1. An IO request processing method of a storage system, wherein the storage system includes a plurality of virtual storage disks, the virtual storage disks include a plurality of logical address areas, and logical addresses of the plurality of logical address areas do not overlap with each other, the method comprising:
creating a storage object according to the logical address area, wherein accesses to the logical address area are carried by the storage object mapped to it;
receiving a first IO request, wherein the first IO request accesses a first logical address area;
determining a first thread according to the first logical address area, and enabling the first thread to process the first IO request;
receiving a second IO request, wherein the second IO request accesses a second logical address area;
determining a second thread according to the second logical address area, and enabling the second thread to process the second IO request;
wherein a storage object mapping table is provided to maintain a mapping of logical addresses of virtual storage disks to storage objects.
2. The method of claim 1, wherein determining a first thread according to the first logical address area, and enabling the first thread to process the first IO request, comprises:
generating a first IO command according to the first IO request, filling the first IO command into a first queue corresponding to the first thread according to the first logical address area, and taking the first IO command out of the first queue and processing it by the first thread, wherein the first thread only processes commands of the first queue.
3. The method of claim 2, wherein determining a second thread according to the second logical address area, and enabling the second thread to process the second IO request, comprises:
generating a second IO command according to the second IO request, filling the second IO command into a second queue corresponding to the second thread according to the second logical address area, and taking the second IO command out of the second queue and processing it by the second thread, wherein the second thread only processes commands of the second queue.
4. A method according to any one of claims 1 to 3, wherein
Taking the result of the index of the first logical address area modulo the number of threads as the index of the first thread; or taking the result of the index of the second logical address area modulo the number of threads as the index of the second thread.
5. A method according to any one of claims 1 to 3, wherein
Calculating a hash of the index of the first logical address area to obtain the index of the first thread; or calculating a hash of the index of the second logical address area to obtain the index of the second thread.
6. A method according to any one of claims 1 to 3, wherein
Taking the result of the index of the first logical address area modulo the number of threads as the index of the first queue; or taking the result of the index of the second logical address area modulo the number of threads as the index of the second queue.
7. A method according to any one of claims 1 to 3, wherein
Calculating a hash of the index of the first logical address area to obtain the index of the first queue; or calculating a hash of the index of the second logical address area to obtain the index of the second queue.
8. A method according to any one of claims 1-3, further comprising:
determining a first mapping table entry according to the first logical address area;
the first thread accesses a first mapping table entry, obtains a first storage object from the first mapping table entry according to a first logic address accessed by a first IO request, and accesses the first storage object to process the first IO request.
9. The method of claim 8, further comprising:
determining a second mapping table entry according to the second logical address area;
and the second thread accesses a second mapping table entry, acquires a second storage object from the second mapping table entry according to a second logic address accessed by a second IO request, and accesses the second storage object to process the second IO request.
10. The method of claim 9, wherein
The result of the index of the first logical address area modulo the number of mapping table entries is the index of the first mapping table entry; or a hash is calculated on the index of the first logical address area to obtain the index of the first mapping table entry.
11. The method of claim 10, wherein
If the storage object is not available from the first mapping table entry, for the write request in the IO request, creating a third storage object, and recording the third storage object in the first mapping table entry, and
Writing data to the third storage object.
12. The method of claim 9, wherein
If the storage object is not available from the second mapping table entry, for the write request in the IO request, creating a fourth storage object, and recording the fourth storage object in the second mapping table entry, and
Writing data to the fourth storage object.
13. An IO request processing method of a storage system, wherein the storage system includes a plurality of virtual storage disks, the virtual storage disks including a plurality of logical address areas, the method comprising:
creating a storage object according to the logical address area, wherein accesses to the logical address area are carried by the storage object mapped to it;
receiving a first IO request;
generating a first IO command and a second IO command according to the first IO request, wherein the first IO command accesses the first logical address area, and the second IO command accesses the second logical address area;
determining a first thread according to the first logical address area, and enabling the first thread to process the first IO command; determining a second thread according to the second logical address area, and enabling the second thread to process the second IO command;
for a write request in the IO request, creating a third storage object, and writing data into the third storage object;
wherein a storage object mapping table is provided to maintain a mapping of logical addresses of virtual storage disks to storage objects.
14. The method of claim 13, further comprising:
filling the first IO command into a first queue corresponding to the first thread according to the first logical address area, and taking the first IO command out of the first queue and processing it by the first thread, wherein the first thread only processes commands of the first queue; and
filling the second IO command into a second queue corresponding to the second thread according to the second logical address area, and taking the second IO command out of the second queue and processing it by the second thread, wherein the second thread only processes commands of the second queue.
15. A computer readable storage medium, on which a computer program is stored, characterized in that the program, when being executed by a processor, implements the method according to one of claims 1-14.
CN202010643795.3A 2016-08-08 2016-08-08 Lock-free concurrent IO processing method and device Active CN111679795B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010643795.3A CN111679795B (en) 2016-08-08 2016-08-08 Lock-free concurrent IO processing method and device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010643795.3A CN111679795B (en) 2016-08-08 2016-08-08 Lock-free concurrent IO processing method and device
CN201610644949.4A CN107704194B (en) 2016-08-08 2016-08-08 Lock-free IO processing method and device

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201610644949.4A Division CN107704194B (en) 2016-08-08 2016-08-08 Lock-free IO processing method and device

Publications (2)

Publication Number Publication Date
CN111679795A CN111679795A (en) 2020-09-18
CN111679795B true CN111679795B (en) 2024-04-05

Family

ID=61161771

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201610644949.4A Active CN107704194B (en) 2016-08-08 2016-08-08 Lock-free IO processing method and device
CN202010643795.3A Active CN111679795B (en) 2016-08-08 2016-08-08 Lock-free concurrent IO processing method and device

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN201610644949.4A Active CN107704194B (en) 2016-08-08 2016-08-08 Lock-free IO processing method and device

Country Status (2)

Country Link
CN (2) CN107704194B (en)
WO (1) WO2018028529A1 (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110568991B (en) * 2018-06-06 2023-07-25 北京忆恒创源科技股份有限公司 Method and storage device for reducing IO command conflict caused by lock
CN109101194A (en) * 2018-07-26 2018-12-28 郑州云海信息技术有限公司 One kind writing with a brush dipped in Chinese ink performance optimization method and storage system
CN108958944A (en) * 2018-07-26 2018-12-07 郑州云海信息技术有限公司 A kind of multiple core processing system and its method for allocating tasks
US10949204B2 (en) * 2019-06-20 2021-03-16 Microchip Technology Incorporated Microcontroller with configurable logic peripheral
CN111638854A (en) * 2020-05-26 2020-09-08 北京同有飞骥科技股份有限公司 Performance optimization method and device for NAS construction and SAN stack block equipment
CN112306413B (en) * 2020-10-30 2024-05-07 北京百度网讯科技有限公司 Method, device, equipment and storage medium for accessing memory
CN112463037B (en) * 2020-11-13 2022-08-12 苏州浪潮智能科技有限公司 Metadata storage method, device, equipment and product
CN112463306A (en) * 2020-12-03 2021-03-09 南京机敏软件科技有限公司 Method for sharing disk data consistency in virtual machine
CN113568736A (en) * 2021-06-24 2021-10-29 阿里巴巴新加坡控股有限公司 Data processing method and device
CN113849317B (en) * 2021-11-29 2022-03-22 苏州浪潮智能科技有限公司 Memory pool resource using method and related device

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101937465B (en) * 2010-09-10 2013-09-11 中兴通讯股份有限公司 Access method of distributed file system and upper file system thereof
CN101937466B (en) * 2010-09-15 2011-11-30 任子行网络技术股份有限公司 Webpage mailbox identification classifying method and system
CN102073461B (en) * 2010-12-07 2012-07-04 成都市华为赛门铁克科技有限公司 Input-output request scheduling method, memory controller and memory array
CN102298561B (en) * 2011-08-10 2016-04-27 北京百度网讯科技有限公司 A kind of mthods, systems and devices memory device being carried out to multi-channel data process
CN102637147A (en) * 2011-11-14 2012-08-15 天津神舟通用数据技术有限公司 Storage system using solid state disk as computer write cache and corresponding management scheduling method
US9052937B2 (en) * 2013-02-27 2015-06-09 Vmware, Inc. Managing storage commands according to input-output priorities and dependencies

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102405460A (en) * 2009-02-11 2012-04-04 艾梵尼达有限公司 Virtualized storage system and method of operating it
CN102317912A (en) * 2009-02-17 2012-01-11 松下电器产业株式会社 Multi-thread processor and digital TV system
CN102622189A (en) * 2011-12-31 2012-08-01 成都市华为赛门铁克科技有限公司 Storage virtualization device, data storage method and system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Practical lock/unlock pairing for concurrent programs; Hyoun Kyu Cho; IEEE; full text *
Research on the method for Windows systems to access LVM-based storage devices; Li Fachun; Liu Yong; Journal of Shaoguan University (Issue 06); full text *

Also Published As

Publication number Publication date
CN111679795A (en) 2020-09-18
CN107704194A (en) 2018-02-16
WO2018028529A1 (en) 2018-02-15
CN107704194B (en) 2020-07-31

Similar Documents

Publication Publication Date Title
CN111679795B (en) Lock-free concurrent IO processing method and device
US20220137849A1 (en) Fragment Management Method and Fragment Management Apparatus
US9229826B2 (en) Volatile memory representation of nonvolatile storage device set
US9378093B2 (en) Controlling data storage in an array of storage devices
US11023147B2 (en) Mapping storage extents into resiliency groups
US20040064641A1 (en) Storage device with I/O counter for partial data reallocation
US20060085626A1 (en) Updating system configuration information
CN114860163B (en) Storage system, memory management method and management node
TW201520793A (en) Memory system with shared file system
US8862819B2 (en) Log structure array
US10579540B2 (en) Raid data migration through stripe swapping
US10908997B1 (en) Simple and efficient technique to support disk extents of different sizes for mapped RAID
US11157198B2 (en) Generating merge-friendly sequential IO patterns in shared logger page descriptor tiers
CN109727629B (en) Method and system for wear leveling using multiple gap progress fields
US11216195B1 (en) Sharing blocks of non-volatile data storage to support cache flushes in a multi-node data storage system
US20210318992A1 (en) Supporting storage using a multi-writer log-structured file system
CN115793957A (en) Method and device for writing data and computer storage medium
US11144445B1 (en) Use of compression domains that are more granular than storage allocation units
US11947803B2 (en) Effective utilization of different drive capacities
CN109558236B (en) Method for accessing stripes and storage system thereof
CN109558070B (en) Scalable storage system architecture
US11604591B2 (en) Associating data types with stream identifiers for mapping onto sequentially-written memory devices
EP4120087B1 (en) Systems, methods, and devices for utilization aware memory allocation
WO2015161140A1 (en) System and method for fault-tolerant block data storage
CN117591005A (en) Open block management in memory devices

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Room A302, B-2 Building, North Territory of Dongsheng Science Park, Zhongguancun, 66 Xixiaokou Road, Haidian District, Beijing, 100192

Applicant after: Beijing yihengchuangyuan Technology Co.,Ltd.

Address before: Room A302, B-2 Building, North Territory of Dongsheng Science Park, Zhongguancun, 66 Xixiaokou Road, Haidian District, Beijing, 100192

Applicant before: BEIJING MEMBLAZE TECHNOLOGY Co.,Ltd.

CB02 Change of applicant information
GR01 Patent grant
GR01 Patent grant