WO2019047834A1 - Method and apparatus for transmitting data processing requests - Google Patents

Method and apparatus for transmitting data processing requests

Info

Publication number
WO2019047834A1
Authority
WO
WIPO (PCT)
Prior art keywords
data processing
processing request
type
jbof
storage controller
Application number
PCT/CN2018/104054
Other languages
English (en)
French (fr)
Inventor
李晓初
晏大洪
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Application filed by Huawei Technologies Co., Ltd.
Priority to EP18852841.8A (EP3660686B1)
Priority to EP22153995.0A (EP4071620A1)
Publication of WO2019047834A1
Priority to US16/808,968 (US11169743B2)
Priority to US17/508,443 (US20220050636A1)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G06F 12/0223 User address space allocation, e.g. contiguous or non contiguous base addressing
    • G06F 12/023 Free address space management
    • G06F 12/0238 Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory
    • G06F 12/0246 Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory, in block erasable memory, e.g. flash memory
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0602 Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F 3/0604 Improving or facilitating administration, e.g. storage management
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0602 Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F 3/061 Improving I/O performance
    • G06F 3/0611 Improving I/O performance in relation to response time
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0628 Interfaces specially adapted for storage systems making use of a particular technique
    • G06F 3/0655 Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F 3/0658 Controller construction arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0628 Interfaces specially adapted for storage systems making use of a particular technique
    • G06F 3/0655 Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F 3/0659 Command handling arrangements, e.g. command buffers, queues, command scheduling
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0668 Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F 3/0671 In-line storage system
    • G06F 3/0683 Plurality of storage devices
    • G06F 3/0688 Non-volatile semiconductor memory arrays

Definitions

  • the present application relates to the field of communications and, more particularly, to a method and apparatus for transmitting data processing requests.
  • an All Flash Array (AFA) is usually composed of a storage controller (Storage Controller) and a solid state drive (SSD) cluster (Just a Bunch of Flash, JBOF), where the storage controller is used to process data processing requests sent by a host.
  • the data processing request sent by the host may include an IO request, an Erasure Code (EC) request, a Garbage Collecting (GC) request, and the like.
  • the storage controller may process the data to be processed according to the data processing request, and store the processed data in the JBOF, or send the data read out from the JBOF to the host as the processed data.
  • the SSDs in a JBOF support a high number of read/write operations per second (Input/Output Operations Per Second, IOPS), and their IOPS and read/write performance per gigabyte (GB) are several times higher than those of a traditional hard disk drive (Hard Disk Drive, HDD).
  • the present application provides a method and apparatus for transmitting a data processing request, to increase the speed at which the storage controller executes data processing requests, which helps reduce the latency with which the storage controller executes a data processing request.
  • in a first aspect, a method for transmitting a data processing request is provided, comprising: a solid state drive cluster (JBOF) acquires a data processing request sent by a storage controller, the data processing request being used to access a target solid state drive (SSD) in the JBOF;
  • the JBOF determines the type of the data processing request, where the type of the data processing request includes a pass-through type and a background computing type; if the type of the data processing request is a pass-through type, the JBOF directly forwards the data processing request to the target SSD; if the type of the data processing request is a background computing type, the JBOF sends the data processing request to a computing unit in the JBOF, and sends the data processing request processed by the computing unit to the target SSD.
  • in the above technical solution, data processing requests are divided into pass-through data processing requests and background computing type data processing requests, where the computing resources occupied by the data processing indicated by a background computing type data processing request may no longer be provided by the CPU in the storage controller but by the computing unit in the JBOF. This, to some extent, releases the computing resources that the CPU in the storage controller would otherwise spend executing background computing type data processing requests, so that the CPU in the storage controller can process more pass-through data processing requests, which helps increase the speed at which the storage controller executes pass-through data processing requests and reduce the latency with which the storage controller executes them.
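  • the following is a minimal Python sketch of the JBOF-side dispatch described above; the class and method names (DispatchEngine, process, submit, target_ssd_id) are illustrative assumptions and not part of the claimed implementation.

```python
from enum import Enum

class RequestType(Enum):
    PASS_THROUGH = 0          # forwarded to the target SSD as-is
    BACKGROUND_COMPUTING = 1  # processed by the computing unit inside the JBOF

class DispatchEngine:
    """Illustrative JBOF dispatch engine: routes a data processing request
    either directly to the target SSD or through the JBOF computing unit."""

    def __init__(self, compute_unit, ssds):
        self.compute_unit = compute_unit   # e.g. a low-cost CPU inside the JBOF
        self.ssds = ssds                   # mapping: ssd_id -> SSD handle

    def dispatch(self, request):
        target = self.ssds[request.target_ssd_id]
        if request.type == RequestType.PASS_THROUGH:
            # Pass-through: no JBOF-side computation, forward directly.
            target.submit(request)
        else:
            # Background computing: the JBOF computing unit (not the storage
            # controller CPU) processes the request, then the result goes
            # to the target SSD.
            processed = self.compute_unit.process(request)
            target.submit(processed)
```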
  • the data processing request is used to indicate the processing manner of the data carried in the data processing request or the data stored in the SSD, and the specific processing manner may include reading and writing data, EC operations, GC operations, and the like.
  • the foregoing SSD may be an NVMe SSD, or may be a SATA SSD, which is not specifically limited in this embodiment of the present application.
  • the foregoing data processing request may be a request encapsulated based on an interface protocol, for example, may be an NVMe command encapsulated based on an NVMe over fabric (NVMeof) protocol transmitted on a network.
  • the JBOF determines the type of the data processing request, including: if the data processing request is from a pass-through submission queue of the storage controller, the JBOF determines that the type of the data processing request is a pass-through type; if the data processing request is from a background computing type submission queue of the storage controller, the JBOF determines that the type of the data processing request is a background computing type.
  • compared with a scheme in which the type is carried directly in the data processing request, the overhead of transmitting the data processing request is reduced to some extent.
  • the JBOF directly forwards the data processing request to the target SSD, including: the JBOF extracts the data processing request from a pass-through submission queue in the JBOF, where the type of the data processing request is a pass-through type, and the JBOF directly forwards the data processing request to the target SSD; if the type of the data processing request is a background computing type, the JBOF sends the data processing request to a computing unit in the JBOF and sends the data processing request processed by the computing unit to the target SSD, including: the JBOF extracts the data processing request from a background computing type submission queue in the JBOF, where the type of the data processing request is a background computing type, and the JBOF sends the data processing request to the computing unit in the JBOF and sends the data processing request processed by the computing unit to the target SSD.
  • the determining, by the JBOF, of the type of the data processing request includes: if the data processing request is a write request, the JBOF determines that the type of the data processing request is a background computing type; if the data processing request is a read request, the JBOF determines that the type of the data processing request is a pass-through type.
  • the command queue identifier driver may not be changed in the storage controller.
  • a second aspect provides a method for transmitting a data processing request, comprising: a storage controller receives a data processing request, the data processing request being used to access a target solid state drive (SSD) in a solid state drive cluster (JBOF) controlled by the storage controller; the storage controller determines the type of the data processing request, where the type of the data processing request includes a pass-through type and a background computing type; if the type of the data processing request is a pass-through type, the storage controller processes the data processing request and places the processed data processing request into a pass-through submission queue of the storage controller; if the type of the data processing request is a background computing type, the storage controller places the data processing request into a background computing type submission queue of the storage controller.
  • in the above technical solution, data processing requests are divided into pass-through data processing requests and background computing type data processing requests, where the computing resources occupied by the data processing indicated by a background computing type data processing request may no longer be provided by the CPU in the storage controller but by the computing unit in the JBOF. This, to some extent, releases the computing resources that the CPU in the storage controller would otherwise spend executing background computing type data processing requests, so that the CPU in the storage controller can process more pass-through data processing requests, which helps increase the speed at which the storage controller executes pass-through data processing requests and reduce the latency with which the storage controller executes them.
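  • a corresponding storage-controller-side sketch, under the same assumptions (it reuses the RequestType enum from the previous sketch); the classification rule shown here (writes are background computing, reads are pass-through) is only one of the options described in this application.

```python
from collections import deque

# RequestType is the enum defined in the previous sketch.

class StorageControllerQueues:
    """Illustrative controller-side queues: one pass-through submission queue
    and one background computing type submission queue."""

    def __init__(self):
        self.pass_through_sq = deque()
        self.background_sq = deque()

    def classify(self, request):
        # One possible rule: writes are delay-insensitive and therefore
        # background computing type, reads are delay-sensitive and therefore
        # pass-through.  Other rules (per-host priority, etc.) are possible.
        if request.is_write:
            return RequestType.BACKGROUND_COMPUTING
        return RequestType.PASS_THROUGH

    def submit(self, request):
        request.type = self.classify(request)
        if request.type == RequestType.PASS_THROUGH:
            # The controller CPU does any processing the request needs
            # (e.g. resolving the storage address) before enqueueing it.
            self.pass_through_sq.append(request)
        else:
            # No controller-side computation: the JBOF computing unit does it.
            self.background_sq.append(request)
```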
  • the foregoing data processing request may be a request encapsulated based on an interface protocol, for example, an NVMe command encapsulated based on an NVMe over fabric (NVMeof) protocol transmitted over a network.
  • the obtaining the data processing request may include extracting a data processing request from a submission queue shared by the storage controller and the host.
  • optionally, if the data processing request is a write request, the type of the data processing request is a background computing type; if the data processing request is a read request, the type of the data processing request is a pass-through type.
  • the command queue identifier driver may not be changed in the storage controller.
  • in a third aspect, an apparatus for transmitting a data processing request is provided, comprising units for performing the method in the first aspect or any possible implementation of the first aspect.
  • in a fourth aspect, an apparatus for transmitting a data processing request is provided, comprising units for performing the method in the second aspect or any possible implementation of the second aspect.
  • an apparatus for transmitting a data processing request including a transceiver, a processor, and a memory.
  • the processor is configured to control the transceiver to transmit and receive signals, the memory is configured to store a computer program, and the processor is further configured to call and run the computer program from the memory, so that the apparatus performs the method of the first aspect above.
  • an apparatus for transmitting a data processing request including a transceiver, a processor, and a memory.
  • the processor is configured to control the transceiver to transmit and receive signals, the memory is configured to store a computer program, and the processor is further configured to call and run the computer program from the memory, so that the apparatus performs the method of the second aspect above.
  • a communication device may be a device for transmitting a data processing request in the above method design, or a chip disposed in a device for transmitting a data processing request.
  • the communication device includes a memory for storing computer executable program code, a communication interface, and a processor coupled to the memory and the communication interface.
  • the program code stored in the memory includes instructions that, when executed by the processor, cause the communication device to perform the methods of the various aspects described above.
  • a storage system comprising a storage device and a storage controller, the storage device comprising the device of the third aspect, the storage controller comprising the device of the fourth aspect.
  • a computer program product comprising: computer program code, when the computer program code is run on a computer, causing the computer to perform the method of the above aspects.
  • a computer readable medium storing program code for causing a computer to perform the method of the above aspects when the computer program code is run on a computer.
  • FIG. 1 is a schematic structural diagram of an all-flash array according to an embodiment of the present application.
  • FIG. 2 is a schematic flow chart of a method for writing data to a JBOF by an AFA based storage system.
  • FIG. 3 is a schematic flow chart of a method for reading data from a JBOF by an AFA based storage system.
  • FIG. 4 is a schematic block diagram of an AFA-based storage system according to an embodiment of the present application.
  • FIG. 5 is a schematic block diagram of an AFA-based storage system based on a hyper-convergence technology according to another embodiment of the present application.
  • FIG. 6 is a schematic block diagram of an AFA-based storage system according to another embodiment of the present application.
  • FIG. 7 is a schematic flowchart of a method for transmitting a data processing request according to an embodiment of the present application.
  • FIG. 8 is a schematic flowchart of a method for transmitting a data processing request according to an embodiment of the present application.
  • FIG. 9 is a schematic flowchart of a method for transmitting an NVMe command according to an embodiment of the present application.
  • FIG. 10 is a schematic flowchart of a method for transmitting an NVMe command according to an embodiment of the present application.
  • FIG. 11 is a schematic block diagram of an apparatus for transmitting a data processing request according to an embodiment of the present application.
  • FIG. 12 is a schematic block diagram of an apparatus for transmitting a data processing request according to another embodiment of the present application.
  • FIG. 13 is a schematic block diagram of an apparatus for transmitting a data processing request according to an embodiment of the present application.
  • FIG. 14 is a schematic block diagram of an apparatus for transmitting a data processing request according to another embodiment of the present application.
  • FIG. 1 is a schematic diagram of an all-flash array based storage system.
  • at least one host 110, at least one storage controller 120, switch 130, and at least one JBOF 140 are included in the storage system.
  • Each of the at least one host may be connected to the at least one storage controller, and the at least one storage controller may communicate with any of the at least one JBOF through the switch.
  • the storage controller may access the storage space of the JBOF, or process data in the storage space of the JBOF.
  • the host may use any one of the at least one JBOF as the target end; that is, the host can access the target end by sending a data processing request, so as to read data from the target end or write data to the storage space on the target end side.
  • the above data processing request may be a management command (Admin Command) or an IO request.
  • the host can control the target end through the management command, and the host can also access the storage space in the target end through the IO request.
  • the target end may be a Non-Volatile Memory Express (NVMe) SSD
  • the host may control the NVMe SSD through an NVMe command (NVMe Command), and may also access the NVMe SSD through an IO request encapsulated as an NVMe command.
  • Storage controller: also known as a Storage Processor Controller (SPC), it is used to receive data processing requests sent by the host and, according to the data processing request, process data in the storage space of the target end, read data from the storage space of the target end, or write the data carried in the data processing request to the target end.
  • the storage controller includes at least one CPU (e.g., a high performance CPU of the X86 architecture) and at least one cache, where the CPU is configured to perform the calculation required by the data processing request, and the cache can be used to cache the data carried in the data processing request (e.g., a write request).
  • the cache may be a Power Backup Memory Buffer (PBMB) or a Non-volatile Memory (NVM).
  • Switch Used to forward data processing requests from the storage controller to JBOF, or to aggregate data carried in data processing requests and forward them to JBOF.
  • the switches may be different types of switches with forwarding and sharing capabilities in different types of network architectures, for example, Ethernet switches, InfiniBand (IB) switches, and Peripheral Component Interconnect Express (PCIe) switches.
  • JBOF: a storage device with multiple SSDs installed on a backplane, which logically connects the multiple physical SSDs together to provide a large storage space for data storage.
  • FIG. 2 is a schematic flow chart of a method for writing data to a JBOF by an AFA based storage system.
  • the method shown in Figure 2 includes:
  • the host sends a write request to the storage controller.
  • the implementation of step 210 may be that the host stores the write request, encapsulated as an NVMe command, in a submission queue in the storage controller by means of Remote Direct Memory Access (RDMA), so that the storage controller can extract the write request from the submission queue.
  • the storage controller decapsulates the write request and caches the write request in the local PBMB.
  • the storage controller returns an Acknowledgement (ACK) to the host to indicate that the operation of the write request is completed.
  • the storage controller may encapsulate the ACK into an NVMe command, and store the encapsulated command in a completion queue of the storage controller, so that the host obtains an ACK from the completion queue to determine the success of the write request.
  • after receiving the ACK, the host may consider that the operation flow of the write request has ended; the storage controller then performs data processing on the data in the write request, and subsequent operations such as storing the processed data in the JBOF storage space are invisible to the host, that is, after the ACK is returned the host does not care about the subsequent operations of the storage controller.
  • the storage controller writes the data in the foregoing write request into the JBOF through the switch.
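  • the write flow above (cache the data, acknowledge the host immediately, write into the JBOF afterwards) can be summarised by the following sketch; the interfaces used here (cache_in_pbmb, send_ack, write_through_switch) are hypothetical names chosen for illustration.

```python
def handle_write_request(controller, jbof, write_request):
    """Illustrative write path: the controller caches the data and acknowledges
    the host at once; storing the data in the JBOF happens afterwards and is
    invisible to the host."""
    controller.cache_in_pbmb(write_request.data)   # cache in the local PBMB
    controller.send_ack(write_request.host)        # host treats the write as done
    # Later, the controller writes the (possibly processed) data into the JBOF
    # storage space through the switch; the host does not observe this step.
    jbof.write_through_switch(write_request.data)
```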
  • FIG. 3 is a schematic flow chart of a method for reading data from a JBOF by an AFA based storage system.
  • the method shown in Figure 3 includes:
  • the host sends a read request to the storage controller.
  • the host stores the read request encapsulated as an NVMe command in the submission queue of the host, and transfers the read request into the submission queue of the storage controller in the RDMA manner, so that the storage controller can extract the read request from its submission queue.
  • the storage controller decapsulates the read request.
  • the storage controller decapsulates the read request extracted from the commit queue to generate a read request that can be directly processed by the SSD in the JBOF.
  • the storage controller sends the decapsulated read request to the JBOF through the switch, and reads the data to be read from the JBOF.
  • the storage controller returns the data to be read to the host.
  • even if the IOPS performance of the SSDs in the JBOF is high enough to meet current customer capacity requirements, the processor in the storage controller cannot provide enough computing resources for a large number of data processing requests at the same time. This limits the number of IO requests that can be sent to the JBOF simultaneously, and may fall short of the number of IO requests that the SSDs can process per second; in other words, the computing power of the processor in the storage controller currently not only fails to fully meet customer requirements, but also limits the performance of the SSDs in the JBOF to a certain extent.
  • in the process of writing data, after the host obtains the result returned by the storage controller indicating that the IO write succeeds, the write process can be understood as having ended for the host; the host does not care about the subsequent process in which the storage controller writes the data into the JBOF.
  • the read request is sensitive to delay and is a delay-sensitive data processing request, whereas in the process of writing data the host only needs to wait for a short time, so the write request can be understood as a delay-insensitive data processing request.
  • based on the requirement of data processing requests on transmission delay, the embodiments of the present application divide data processing requests into delay-sensitive data processing requests and delay-insensitive data processing requests, and provide a method and apparatus for transmitting a data processing request using these two types, where a delay-sensitive data processing request is a data processing request with a high requirement on transmission delay, and a delay-insensitive data processing request is a data processing request with a lower requirement on transmission delay.
  • FIG. 4 is a schematic block diagram of an AFA-based storage system in an embodiment of the present application.
  • the AFA storage system shown in FIG. 4 may be an improved architecture based on the AFA storage system shown in FIG. 1, and mainly improves the structure of the storage controller and the JBOF in the AFA storage system; for brevity, the following mainly introduces the storage controller and the JBOF.
  • for the other units of the AFA storage system, refer to the description above.
  • the storage system of the AFA shown in FIG. 4 includes at least one storage controller 410, a switch 420, and at least one JBOF 430.
  • the storage controller includes a command queue identifier driver (Initiator Driver) unit and block device management software, and each of the at least one JBOF includes a Dispatch Engine, a first processing unit, and a second processing unit.
  • a command queue identification driver (Initiator Driver) unit in the storage controller is configured to create, in the storage controller, submission queues (Submission Queue, SQ) for transmitting data processing requests from the storage controller to the JBOF.
  • the submission queues can include multiple types, such as delay-sensitive and delay-insensitive, and different types of submission queues are used to store different types of data processing requests; for example, a delay-insensitive submission queue is used to store delay-insensitive data processing requests, and a delay-sensitive submission queue is used to store delay-sensitive data processing requests.
  • the above command queue identifier driving unit is further configured to determine whether the type of the data processing request is delay sensitive or delay insensitive.
  • in addition, a submission queue corresponding to the submission queue of the storage controller needs to be created in the JBOF; that is, the submission queue in the storage controller and the submission queue in the JBOF logically form one submission queue for transferring data processing requests from the storage controller to the JBOF.
  • the submission queue in the storage controller can occupy the storage resources in the memory of the storage controller, and the submission queue in the JBOF can occupy the storage space in the cache in the JBOF.
  • JBOF can also contain the unit that creates the submission queue, and create different types of submission queues in the same way as the storage controller creates the submission queue.
  • the above command queue driving unit may also be used to establish a Completion Queue (CQ) for storing feedback results for completed data processing requests.
  • the above command queue identification driver unit may also create different types of completion queues for the feedback results of different types of data processing requests; for example, a delay-insensitive completion queue is used to store feedback for delay-insensitive data processing requests, and a delay-sensitive completion queue is used to store feedback for delay-sensitive data processing requests.
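  • a small sketch of how the driver might keep per-type submission/completion queue pairs; the dictionary-based layout and field names are assumptions for illustration only.

```python
class QueuePair:
    """One submission queue plus its matching completion queue."""
    def __init__(self, sq_id, cq_id, depth=64):
        self.sq_id, self.cq_id, self.depth = sq_id, cq_id, depth
        self.sq, self.cq = [], []   # entries would normally live in controller memory

# One pair per request type: delay-sensitive (pass-through) and
# delay-insensitive (background computing).
queue_pairs = {
    "delay_sensitive":   QueuePair(sq_id=1, cq_id=1),
    "delay_insensitive": QueuePair(sq_id=2, cq_id=2),
}
```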
  • the storage space management software in the storage controller is used to convert data processing requests received from the host into data processing requests that the JBOF can process directly.
  • the storage space management software described above may be block device management software
  • the block device management software may convert the storage address in the data processing request received from the host into a storage address including the storage block so that the JBOF can directly process it.
  • storage space management software may be a block device management software or a character device management software, which is not specifically limited in this embodiment of the present application.
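  • a hedged sketch of the kind of address translation the block device management software might perform, assuming a fixed block size; the block size and function name are illustrative, not taken from the patent.

```python
BLOCK_SIZE = 4096  # bytes; an assumed block size for illustration

def to_block_address(host_byte_offset):
    """Translate a host-visible byte offset into (block number, offset in block)
    so that the resulting request can be handled directly by the JBOF."""
    return host_byte_offset // BLOCK_SIZE, host_byte_offset % BLOCK_SIZE

# Example: byte offset 10000 maps to block 2, offset 1808.
assert to_block_address(10000) == (2, 1808)
```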
  • a dispatch engine in the JBOF is configured to send different types of data processing requests to the processing units used to process them, according to the type of the data processing request determined by the command queue identification driver unit; that is, a delay-sensitive data processing request is sent to the first processing unit, and a delay-insensitive data processing request is sent to the second processing unit.
  • the above-mentioned offload engine further includes a port connected to the switch, for example, an Ethernet network interface supporting RDMA, for receiving a data processing request sent by the command queue identifier driving unit through the switch.
  • the first processing unit is configured to process the delay-sensitive data processing request and/or the hardware offload type data processing request.
  • the hardware offload type data processing request may be understood as a data processing request that does not need to be processed by the hardware in the storage controller, where the hardware processing required for the data processing request can instead be implemented by hardware in the first processing unit; that is, the hardware offload type data processing request may be a data processing request that offloads a hardware processing procedure from the storage controller.
  • if the data processing request is a delay-sensitive data processing request, the first processing unit may directly forward the data processing request to the JBOF; if the data processing request is a hardware offload type data processing request, the first processing unit may use its own processing capability to convert it into a data processing request that the JBOF can directly process.
  • for example, if the first processing unit is an FPGA, the delay-sensitive data processing request can be converted into a data processing request that the JBOF can directly process by taking advantage of the low latency and hardware processing of the FPGA, and the result is then returned to the storage controller.
  • the delay-sensitive data processing request may also belong to a data processing request that the JBOF can directly process.
  • the hardware offload type data processing request may also belong to a data processing request that the JBOF cannot directly process.
  • the first processing unit may be implemented by a Field-Programmable Gate Array (FPGA) or an Application Specific Integrated Circuit (ASIC) designed for a specific purpose.
  • the first processing unit may directly forward the delay-sensitive data processing request to the JBOF. That is, the first processing unit can transparently transmit the delay-sensitive data processing request to the JBOF, and the delay-sensitive data processing request is executed by the JBOF.
  • the first processing unit may be integrated with the offloading engine in one physical device, and may also be separately disposed in two different physical devices, which is not specifically limited in this embodiment of the present application.
  • the second processing unit is configured to process delay-insensitive data processing requests, processing the data carried in the data processing request, or the data stored in the storage space of the JBOF, according to the data processing manner indicated by the data processing request; for example, EC calculation is performed on data carried in a data processing request, or a GC operation is performed on the storage space in the JBOF.
  • the second processing unit may be composed of at least one inexpensive (or low performance) CPU, for example, an Advanced RISC Machines (ARM) core or a Microprocessor without Interlocked Pipeline Stages (MIPS) core.
  • compared with the JBOF in FIG. 3, the JBOF in FIG. 4 adds the functions of dispatching data processing requests based on their type and performing the calculation required to execute delay-insensitive data processing requests. These functions make it more intelligent than the JBOF in FIG. 3, so the JBOF in FIG. 4 is also known as an intelligent Bunch of Flash (iBOF); that is, a JBOF having the functions of dispatching data processing requests based on their type and performing the calculation required to execute delay-insensitive data processing requests may be referred to as an iBOF.
  • each functional unit in the above JBOF may be integrated in a system on chip (SoC), which may include a CPU used as the second processing unit, and may also include an FPGA or an ASIC used as the dispatch engine and the first processing unit.
  • the above JBOF may also be implemented by separate hardware, that is, the CPU used as the second processing unit and the FPGA or ASIC used as the dispatch engine and the first processing unit are two separate hardware components.
  • FIG. 1 and FIG. 4 only show one possible AFA storage system, and the embodiment of the present application can also be applied to other AFA storage systems. It should be understood that for the sake of brevity, the functions of the various units in the following AFA storage systems can be found in the description above.
  • FIG. 5 is a schematic block diagram of a storage system of an AFA based on hyper-fusion technology according to another embodiment of the present application.
  • the HCI host 510 can be understood as a hybrid of Host and SPC, that is, a new host formed by the Host being deployed together with the SPC in a hyper-converged manner. It should be understood that the HCI host 510 only changes the deployment mode between the Host and the SPC, but can still implement the various functions that the Host and SPC described above can be implemented.
  • the JBOF 520, that is, the intelligent JBOF mentioned above, can be connected to the HCI host through a switch.
  • the foregoing JBOF and the HCI host can also be directly connected through a communication line (for example, a bus); the connection manner between the JBOF and the HCI host is not limited.
  • FIG. 6 is a schematic block diagram of a storage system of an AFA according to another embodiment of the present application.
  • the storage system of the AFA shown in FIG. 6 includes at least one host 610 and at least one JBOF 620.
  • the host 610 including the various functional modules in the storage controller, such as the command queue identification software, can implement the functions of the storage controller in addition to the functions of the host.
  • the host and JBOF can be directly connected through a communication line (for example, a bus) without being connected via a switch.
  • the foregoing JBOF and the host may be connected through a switch.
  • the specific connection manner between the JBOF and the HCI host is not limited in this embodiment.
  • the foregoing divides data processing requests into delay-sensitive data processing requests and delay-insensitive data processing requests only from the perspective of the transmission delay of the data processing request; the embodiments of the present application may also classify data processing requests into a pass-through type and a background computing type according to the manner in which the JBOF processes the data processing request, where a pass-through data processing request can be understood as a data processing request that does not need to be processed by the computing unit of the JBOF, and a background computing type data processing request is a data processing request that needs to be processed by the computing unit of the JBOF.
  • the delay-sensitive data processing requests and the hardware offload type data processing requests mentioned above may belong to pass-through data processing requests, and the delay-insensitive data processing requests mentioned above may belong to background computing type data processing requests.
  • the first processing unit may be referred to as a hardware bypass engine (Hardware Bypass Engine), and the second processing unit may be referred to as a background software processor.
  • the following takes the classification of data processing requests according to the manner in which the JBOF processes them as an example, and introduces the method for transmitting a data processing request in the embodiments of the present application in combination with the storage system of any of the above AFAs.
  • FIG. 7 is a schematic flowchart of a method for transmitting a data processing request according to an embodiment of the present application. The method shown in Figure 7 includes:
  • the solid state drive cluster (JBOF) acquires a data processing request sent by the storage controller, where the data processing request is used to access a target solid state drive (SSD) in the JBOF.
  • the foregoing data processing request for accessing the target SSD in the JBOF may be understood as processing the data in the target SSD according to the data processing manner indicated by the data processing request, or storing, in the target SSD, data processed according to the data processing manner indicated by the data processing request.
  • the data processing manner indicated by the above data processing request may include reading and writing data, EC operations, GC operations, and the like.
  • the foregoing storage controller may be any device having the function of a storage controller; for example, it may be the storage controller in FIG. 4, the HCI Host in FIG. 5, or the Host in FIG. 6.
  • the specific embodiment of the storage controller is not limited.
  • the foregoing SSD may be an NVMe SSD, or may be a Serial Advanced Technology Attachment (SATA) SSD, which is not specifically limited in this embodiment of the present application.
  • the foregoing data processing request may be a request encapsulated based on an interface protocol, for example, may be an NVMe command encapsulated based on an NVMe over fabric (NVMeof) protocol transmitted on a network.
  • the obtaining the data processing request may include extracting a data processing request from a commit queue shared by the storage controller and the JBOF.
  • the JBOF determines a type of the data processing request, and the type of the data processing request includes a pass-through type and a background computing type.
  • the pass-through data processing request is a data processing request that does not need to be processed by the software computing unit of the JBOF, or a data processing request that needs to be processed by the hardware of the JBOF, or a data processing request for which the required computing resources may be provided by a computing unit in the storage controller (e.g., a high performance CPU in the storage controller).
  • the background computing type data processing request is a data processing request that needs to be processed by the computing unit of the JBOF, or a data processing request for which the required computing resources may be provided by a computing unit in the JBOF (e.g., a low performance CPU).
  • step 720 includes: if the data processing request is from a pass-through submission queue of the storage controller, the JBOF determines that the type of the data processing request is a pass-through type; if the data processing request is from a background computing type submission queue of the storage controller, the JBOF determines that the type of the data processing request is a background computing type.
  • alternatively, step 720 includes: if the data processing request is a write request, the JBOF determines that the type of the data processing request is a background computing type; if the data processing request is a read request, the JBOF determines that the type of the data processing request is a pass-through type.
  • the type of the data processing request may also be determined directly according to the data processing manner indicated by the data processing request.
  • a write request is a delay-insensitive data processing request and can be classified as a background computing type data processing request.
  • a read request is a delay-sensitive data processing request and can be classified as a pass-through data processing request.
  • in this case, the command queue identification driver need not be set in the storage controller, and the data processing request may be sent, according to the conventional method for transmitting a data processing request, to the dispatch engine in the JBOF for classification and dispatching.
  • the JBOF directly forwards the data processing request to the target SSD.
  • the calculation process required for the above-mentioned through-type data processing request may be performed by the CPU in the storage controller, that is, the SSD in the JBOF may be accessed directly through the above-described through-type data processing request.
  • the calculation required to determine the storage address of the data to be read by the read request may be performed by the CPU in the storage controller.
  • JBOF can read data directly from the memory address determined by the CPU in the storage controller.
  • in the above technical solution, data processing requests are divided into pass-through data processing requests and background computing type data processing requests, where the computing resources occupied by the data processing indicated by a background computing type data processing request may no longer be provided by the CPU in the storage controller but by the computing unit in the JBOF. This, to some extent, releases the computing resources that the CPU in the storage controller would otherwise spend executing background computing type data processing requests, so that the CPU in the storage controller can process more pass-through data processing requests, which helps increase the speed at which the storage controller executes pass-through data processing requests and reduce the latency with which the storage controller executes them.
  • step 730 includes: the JBOF extracts the data processing request from a pass-through submission queue in the JBOF, where the type of the data processing request is a pass-through type; and the JBOF directly forwards the data processing request to the target SSD.
  • the pass-through submission queue in the above JBOF and the pass-through submission queue in the storage controller jointly implement the transfer of the pass-through data processing request from the storage controller to the JBOF.
  • the storage controller can store the pass-through data processing request in the pass-through submission queue of the storage controller, and transfer, through the network, the pass-through data processing request in the pass-through submission queue of the storage controller to the pass-through submission queue in the JBOF, to complete the transfer of the pass-through data processing request from the storage controller to the JBOF.
  • in other words, the pass-through submission queue in the JBOF and the pass-through submission queue in the storage controller form one pass-through submission queue to complete the transfer of the pass-through data processing request from the storage controller to the JBOF.
  • the pass-through submission queue in the JBOF and the pass-through submission queue in the storage controller that together form one pass-through submission queue correspond to each other; that is, the JBOF can determine, according to indication information of the pass-through submission queue of the storage controller in which the received pass-through data processing request is located, the pass-through submission queue of the JBOF in which the pass-through data processing request is to be stored.
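  • the pairing of the controller-side and JBOF-side pass-through submission queues described above can be pictured with the sketch below; the transport call (rdma_write_entry) and the queue-id based mapping are assumptions made for illustration.

```python
class PairedSubmissionQueue:
    """One logical pass-through submission queue made of a controller-side part
    and a JBOF-side part; entries enqueued on the controller side are mirrored
    over the network into the corresponding JBOF-side queue."""

    def __init__(self, controller_sq_id, jbof_sq_id, transport):
        self.controller_sq_id = controller_sq_id
        self.jbof_sq_id = jbof_sq_id
        self.transport = transport          # e.g. an RDMA-capable connection

    def submit(self, entry):
        # The indication of which controller-side queue the entry came from lets
        # the JBOF pick the matching JBOF-side queue in which to store it.
        self.transport.rdma_write_entry(dst_queue=self.jbof_sq_id,
                                        src_queue=self.controller_sq_id,
                                        entry=entry)
```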
  • the method further includes: if the data processing request is a hardware offload type data processing request within the pass-through type, the hardware processing unit in the JBOF processes the hardware offload type data processing request, and sends the processed hardware offload type data processing request to the target SSD.
  • if the type of the data processing request is a background computing type, the JBOF sends the data processing request to a computing unit in the JBOF, and sends the data processing request processed by the computing unit to the target SSD.
  • the computing unit in the above JBOF may be any device with a computing function in the JBOF, for example, the second processing unit described above.
  • for example, for an EC operation, the n pieces of original data acquired from the storage controller may be encoded by the computing unit (e.g., a CPU) provided in the JBOF to finally obtain n+m pieces of data, where n and m are positive integers, and the finally obtained n+m pieces of data are written into the SSDs in the JBOF by the computing unit in the JBOF.
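  • as a concrete, simplified example of such an EC computation, the sketch below uses single XOR parity, i.e. n data chunks plus m = 1 parity chunk; real deployments typically use codes such as Reed-Solomon with m > 1.

```python
def xor_parity_encode(chunks):
    """Given n equally sized data chunks, return the n + 1 chunks to be written,
    where the last chunk is the XOR parity (a minimal EC example with m = 1)."""
    parity = bytearray(len(chunks[0]))
    for chunk in chunks:
        for i, b in enumerate(chunk):
            parity[i] ^= b
    return list(chunks) + [bytes(parity)]

data = [b"\x01\x02", b"\x04\x08", b"\x10\x20"]   # n = 3 original chunks
stripes = xor_parity_encode(data)                 # n + m = 4 chunks to place on SSDs
assert stripes[-1] == b"\x15\x2a"                 # 0x01^0x04^0x10, 0x02^0x08^0x20
```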
  • the disk selection operation that needs to be performed may also be performed by the computing unit in the above JBOF, and may also be performed by the CPU in the storage controller.
  • alternatively, the disk selection operation may be performed by another device having the disk selection function.
  • the computing unit in the above JBOF may provide computing resources only for data-plane calculations (e.g., EC operations), or may also provide computing resources for data-management-plane calculations (e.g., disk selection operations).
  • for another example, if the data processing request is a GC request, the data read and write operations, calculations, and block erase operations in the SSD that are required for performing the GC operation may be performed by the computing unit in the JBOF.
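  • a simplified sketch of what the JBOF computing unit might do for a GC request: copy the still-valid pages of a candidate block elsewhere and then erase the block; the SSD interface used here (valid_pages, read_page, write_page, erase_block) is purely illustrative.

```python
def garbage_collect(ssd, victim_block, free_block):
    """Relocate valid pages from victim_block to free_block, then erase
    victim_block; all computation and I/O happen inside the JBOF rather than
    on the storage controller CPU."""
    for page in ssd.valid_pages(victim_block):     # only live data is moved
        data = ssd.read_page(victim_block, page)
        ssd.write_page(free_block, page, data)
    ssd.erase_block(victim_block)                  # reclaim the whole block
```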
  • step 740 includes: the JBOF extracts the data processing request from a background computing type submission queue in the JBOF, where the type of the data processing request is a background computing type; and the JBOF sends the data processing request to a computing unit in the JBOF, and sends the data processing request processed by the computing unit to the target SSD.
  • the background computing type submission queue in the above JBOF and the background computing type submission queue in the storage controller jointly implement the transfer of the background computing type data processing request from the storage controller to the JBOF.
  • the storage controller may store the background computing type data processing request in the background computing type submission queue of the storage controller, and transfer, through the network, the background computing type data processing request in the background computing type submission queue of the storage controller to the background computing type submission queue in the JBOF, to complete the transfer of the background computing type data processing request from the storage controller to the JBOF.
  • in other words, the background computing type submission queue in the JBOF and the background computing type submission queue in the storage controller together form one background computing type submission queue to complete the transfer of the background computing type data processing request from the storage controller to the JBOF.
  • the background computing type submission queue in the JBOF and the background computing type submission queue in the storage controller that together form one background computing type submission queue correspond to each other; that is, the JBOF can determine, according to indication information of the background computing type submission queue of the storage controller in which the received background computing type data processing request is located, the background computing type submission queue of the JBOF in which the background computing type data processing request is to be stored.
  • FIG. 8 is a schematic flowchart of a method for transmitting a data processing request according to an embodiment of the present application. The method shown in Figure 8 includes:
  • the storage controller receives a data processing request, where the data processing request is used to access a target solid state drive (SSD) in the solid state drive cluster (JBOF) controlled by the storage controller.
  • the foregoing data processing request may be a request encapsulated based on an interface protocol, for example, may be an NVMe command encapsulated based on an NVMe over fabric (NVMeof) protocol transmitted on a network.
  • the obtaining the data processing request may include extracting a data processing request from a submission queue shared by the storage controller and the host.
  • the submission queue shared by the storage controller and the host may include a submission queue on the storage controller and a submission queue of the host, that is, the submission queue shared by the storage controller and the host is a logical level concept.
  • the commit queue on the storage controller and the commit queue of the host are the physical level concepts.
  • the submission queue shared by the storage controller and the host is used to transfer the data processing request that the storage controller needs to perform from the host side to the storage controller side.
  • the storage controller determines a type of the data processing request, and the type of the data processing request includes a pass-through type and a background computing type.
  • the storage controller determines the type of the data processing request according to a preset rule, where the preset rule is used to indicate the types corresponding to different data processing requests.
  • the above different data processing requests may refer to data processing requests indicating different data processing manners, such as read requests and write requests.
  • the different data processing requests may also be data processing requests sent by different hosts; for example, data processing requests sent by hosts of different priority levels may belong to different types, which is not specifically limited in this embodiment of the present application.
  • if the type of the data processing request is a pass-through type, the storage controller processes the data processing request and puts the processed data processing request into a pass-through submission queue of the storage controller.
  • if the type of the data processing request is a background computing type, the storage controller puts the data processing request into a background computing type submission queue of the storage controller.
  • the type of the submission queue is used to indicate the type of the data processing request; it can be understood that the type of the submission queue corresponds to the type of the data processing request, and the types of submission queues include a pass-through submission queue and a background computing type submission queue, where the data processing request stored in the pass-through submission queue may be a pass-through data processing request, and the data processing request stored in the background computing type submission queue may be a background computing type data processing request.
  • the submission queue of the storage controller may be created by the storage controller, and the storage controller may carry indication information in the create submission queue command, where the indication information is used to indicate the type of the submission queue, so that when the JBOF receives the create submission queue command, it can determine the type of the submission queue to be created according to the indication information.
  • in the above technical solution, data processing requests are divided into pass-through data processing requests and background computing type data processing requests, where the computing resources occupied by the data processing indicated by a background computing type data processing request may no longer be provided by the CPU in the storage controller but by the computing unit in the JBOF. This, to some extent, releases the computing resources that the CPU in the storage controller would otherwise spend executing background computing type data processing requests, so that the CPU in the storage controller can process more pass-through data processing requests, which helps increase the speed at which the storage controller executes pass-through data processing requests and reduce the latency with which the storage controller executes them.
  • the method further includes: if the data processing request is a read request, the storage controller determines the storage address, in the JBOF, of the data to be read by the read request.
  • the method for transmitting a data processing request in the embodiment of the present application is described in detail below with reference to FIG. 9 and FIG. 10, based on the AFA-based storage system shown in FIG. 4, in which the data processing request is encapsulated as an NVMe command for transmission.
  • FIG. 9 is a schematic flowchart of a method for transmitting an NVMe command according to an embodiment of the present application. The method shown in Figure 9 includes:
  • the command queue identifier driver in the storage controller creates two types of commit queues in the storage controller, and the commit queue is used to transfer the NVMe commands stored in the commit queue to the JBOF.
  • the indication information indicating the queue type may be added to a certain field in the create submission queue command (for example, the field Double Word 11, Dword 11). When the bits in the create submission queue command take the value 00b, they indicate that the queue type of the submission queue is a pass-through type, used to store pass-through NVMe commands; when the bits take the value 01b, they indicate that the queue type of the submission queue is a background computing type, used to store background computing type NVMe commands.
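  • the sketch below shows one way the queue-type indication could be packed into a command dword, following the 00b / 01b values given above; the exact bit position within Dword 11 is an assumption made only for illustration (the baseline NVMe create I/O submission queue command does not define such a field).

```python
QUEUE_TYPE_PASS_THROUGH = 0b00       # pass-through submission queue
QUEUE_TYPE_BACKGROUND   = 0b01       # background computing type submission queue

def build_create_sq_dword11(queue_type, type_bit_offset=16):
    """Pack the queue-type indication into an assumed spare region of Dword 11
    of the create submission queue command."""
    if queue_type not in (QUEUE_TYPE_PASS_THROUGH, QUEUE_TYPE_BACKGROUND):
        raise ValueError("unknown queue type")
    return queue_type << type_bit_offset

def parse_queue_type(dword11, type_bit_offset=16):
    """JBOF side: recover the queue type from the same bits when creating
    the corresponding submission queue."""
    return (dword11 >> type_bit_offset) & 0b11
```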
  • each of the above two types of submission queues may correspond to at least one submission queue.
  • JBOF can also use the create commit queue command described above to create different types of commit queues in JBOF.
  • the submission queue in the JBOF and the submission queue in the storage controller jointly implement the transfer of data processing requests from the storage controller to the JBOF.
  • the command queue identification driver in the storage controller separately creates context information for different types of submission queues, where the context information includes the storage addresses occupied by the different types of submission queues, and the storage addresses occupied by the completion queues corresponding to the different submission queues.
  • the command queue identifier driver in the storage controller initializes a straight-through commit queue and a straight-through completion queue.
  • the command queue identifier driver in the storage controller sends the context information of the through-type submission queue to the JBOF to establish a straight-through submission queue corresponding to the direct-type submission queue of the storage controller in the JBOF.
  • 940. the command queue identifier driver in the storage controller initializes a background computing submission queue and a background computing completion queue. Specifically, the command queue identifier driver in the storage controller sends the context information of the background computing submission queue to the JBOF, so that a background computing submission queue corresponding to the background computing submission queue of the storage controller is established in the JBOF (a sketch of this initialization flow is given below).
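  • the initialization flow of steps 910 to 940 could look roughly like the following sketch. The structure sq_context and the helpers alloc_queue_memory and send_context_to_jbof are hypothetical stand-ins for the driver's real primitives; the entry sizes and queue depths are arbitrary example values.

```c
#include <stddef.h>
#include <stdint.h>

#define SQ_TYPE_PASS_THROUGH        0x0u   /* as in the previous sketch */
#define SQ_TYPE_BACKGROUND_COMPUTE  0x1u

/* Context recorded in step 920 for one submission/completion queue pair:
 * the storage addresses occupied by the queues, plus identifiers and type. */
struct sq_context {
    uint32_t queue_type;
    uint16_t sq_id, cq_id;
    uint16_t depth;
    uint64_t sq_base;       /* address occupied by the submission queue */
    uint64_t cq_base;       /* address occupied by the completion queue */
};

/* Hypothetical stand-ins for the command queue identifier driver's primitives. */
extern void *alloc_queue_memory(uint16_t depth, size_t entry_size);
extern int   send_context_to_jbof(const struct sq_context *ctx);

static int init_queue_pair(uint32_t queue_type, uint16_t sq_id, uint16_t cq_id,
                           uint16_t depth, struct sq_context *ctx)
{
    void *sq_mem = alloc_queue_memory(depth, 64);   /* 64-byte SQ entries */
    void *cq_mem = alloc_queue_memory(depth, 16);   /* 16-byte CQ entries */
    if (sq_mem == NULL || cq_mem == NULL)
        return -1;

    ctx->queue_type = queue_type;
    ctx->sq_id   = sq_id;
    ctx->cq_id   = cq_id;
    ctx->depth   = depth;
    ctx->sq_base = (uint64_t)(uintptr_t)sq_mem;
    ctx->cq_base = (uint64_t)(uintptr_t)cq_mem;

    /* Steps 930/940: hand the context to the JBOF so that it builds the
     * corresponding queue of the same type on its side.                */
    return send_context_to_jbof(ctx);
}

static int init_all_queues(void)
{
    struct sq_context pt, bg;

    if (init_queue_pair(SQ_TYPE_PASS_THROUGH, 1, 1, 128, &pt) != 0)
        return -1;
    return init_queue_pair(SQ_TYPE_BACKGROUND_COMPUTE, 2, 2, 128, &bg);
}
```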
  • FIG. 10 is a schematic flowchart of a method for transmitting an NVMe command according to an embodiment of the present application. The method shown in Figure 10 includes:
  • 1010. the application in the storage controller sends an NVMe command to the command queue identifier driver in the storage controller through an NVMe block device.
  • 1020. the command queue identifier driver in the storage controller determines the type of the NVMe command.
  • specifically, if the NVMe command is of the pass-through type, step 1030 is performed; if the NVMe command is of the background computing type, step 1040 is performed (an illustrative enqueue sketch follows step 1040 below).
  • 1030. the command queue identifier driver in the storage controller stores the NVMe command in the pass-through submission queue.
  • 1040. the command queue identifier driver in the storage controller stores the NVMe command in the background computing submission queue.
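  • a minimal sketch of the classification step on the storage-controller side (steps 1020 to 1040) follows. It uses the optional read/write embodiment described elsewhere in this application (reads are treated as pass-through, everything else as background computing); sq_push and the queue identifiers are hypothetical.

```c
#include <stdbool.h>
#include <stdint.h>

#define PT_SQ_ID  1u   /* pass-through submission queue         */
#define BG_SQ_ID  2u   /* background-computing submission queue */

/* Hypothetical ring-buffer enqueue for the queues created in steps 910-940. */
extern bool sq_push(uint16_t sq_id, const void *entry);

/* Steps 1020-1040: pick a submission queue for an NVMe command. The decision
 * below follows the optional embodiment in which reads are pass-through and
 * writes (and background tasks such as EC or GC) are background-computing;
 * a deployment could instead tag the command type explicitly.              */
static bool submit_nvme_cmd(const uint8_t nvme_cmd[64])
{
    bool is_read = (nvme_cmd[0] == 0x02);   /* NVMe I/O Read opcode */
    return sq_push(is_read ? PT_SQ_ID : BG_SQ_ID, nvme_cmd);
}
```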
  • 1050. the dispatch engine in the JBOF extracts the NVMe command from a submission queue and determines whether the type of the submission queue in which the NVMe command is located is the pass-through type or the background computing type.
  • specifically, if the type of the submission queue in which the NVMe command is located is the pass-through type, step 1060 is performed; if the type of the submission queue is the background computing type, step 1070 is performed (a sketch of this dispatch step follows step 1070 below).
  • 1060. the dispatch engine in the JBOF sends the NVMe command to the hardware pass-through engine in the JBOF, and the hardware pass-through engine accesses the NVMe SSD by using the NVMe command.
  • 1070. the dispatch engine in the JBOF sends the NVMe command to the background software processor in the JBOF, and the background software processor processes the data stored in the NVMe SSD or the data carried in the NVMe command in the manner indicated by the NVMe command.
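  • the dispatch step of 1050 to 1070 might be sketched as below. The point is that routing depends only on which submission queue the command was taken from; hardware_pass_through_issue, background_software_process, sq_pop, and sq_type are hypothetical interfaces assumed here for illustration.

```c
#include <stddef.h>
#include <stdint.h>

#define SQ_TYPE_PASS_THROUGH  0x0u          /* as in the earlier sketches */

/* Hypothetical interfaces of the two JBOF processing paths and the queues. */
extern int hardware_pass_through_issue(const uint8_t nvme_cmd[64]);  /* step 1060 */
extern int background_software_process(const uint8_t nvme_cmd[64]);  /* step 1070 */
extern const uint8_t *sq_pop(uint16_t sq_id);
extern uint32_t sq_type(uint16_t sq_id);    /* type recorded when the queue was created */

/* Step 1050: the dispatch engine drains each submission queue and routes every
 * command according to the type of the queue it came from, not according to
 * the content of the command itself.                                         */
static void dispatch_once(const uint16_t *sq_ids, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        const uint8_t *cmd = sq_pop(sq_ids[i]);
        if (cmd == NULL)
            continue;

        if (sq_type(sq_ids[i]) == SQ_TYPE_PASS_THROUGH)
            hardware_pass_through_issue(cmd);   /* forwarded directly to the NVMe SSD */
        else
            background_software_process(cmd);   /* handled by the JBOF's CPU first    */
    }
}
```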
  • specifically, the background software processor processes the NVMe command to offload the background tasks otherwise performed by the storage controller, generates new IO requests, and uses the new IO requests to access the NVMe SSD through the block device (an illustrative erasure-coding sketch follows step 1080 below).
  • 1080. the NVMe SSD executes the IO requests sent by the hardware pass-through engine and the background software processor.
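  • as one example of what the background software processor may do with an offloaded request, the following sketch outlines an erasure-coding task that turns n data blocks into n + m blocks and writes them to the SSDs through the block device, as described for EC requests elsewhere in this application. compute_parity, blockdev_write, and the per-block disk choice are hypothetical placeholders.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical placeholders: the parity computation and the block-device write
 * path that the background software processor uses to reach the NVMe SSDs.   */
extern void compute_parity(uint8_t **data, size_t n, size_t blk_len,
                           uint8_t **parity, size_t m);
extern int  blockdev_write(unsigned ssd_index, uint64_t lba,
                           const uint8_t *buf, size_t blk_len);

/* One possible background-computing task (step 1070): derive m parity blocks
 * from n data blocks of an EC/write request and issue n + m new IO requests
 * to the SSDs through the block device (executed in step 1080). Writing block
 * i to SSD i is a trivial placement choice used purely for illustration.     */
static int ec_offload(uint8_t **data, size_t n, uint8_t **parity, size_t m,
                      size_t blk_len, uint64_t lba)
{
    compute_parity(data, n, blk_len, parity, m);

    for (size_t i = 0; i < n + m; i++) {
        const uint8_t *buf = (i < n) ? data[i] : parity[i - n];
        if (blockdev_write((unsigned)i, lba, buf, blk_len) != 0)
            return -1;
    }
    return 0;
}
```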
  • the method for transmitting a data processing request in the embodiment of the present application is described in detail above with reference to FIG. 1 to FIG. 10.
  • the apparatus for transmitting a data processing request in the embodiments of this application is briefly described below with reference to FIG. 11 to FIG. 14. It should be understood that the apparatuses shown in FIG. 11 to FIG. 14 can implement the methods described above; for brevity, details are not described again herein.
  • FIG. 11 is a schematic block diagram of an apparatus for transmitting a data processing request according to an embodiment of the present application.
  • the apparatus 1100 for transmitting a data processing request shown in FIG. 11 includes an obtaining unit 1110, a determining unit 1120, and a processing unit 1130.
  • An obtaining unit configured to acquire a data processing request sent by the storage controller, where the data processing request is used to access a target solid state hard disk SSD in the JBOF;
  • a determining unit configured to determine a type of the data processing request acquired by the acquiring unit, where the type of the data processing request includes a pass-through type and a background computing type;
  • a processing unit configured to forward the data processing request directly to the target SSD if the type of the data processing request is the pass-through type;
  • the processing unit is further configured to: if the type of the data processing request is the background computing type, send the data processing request to a computing unit in the JBOF, and send the data processing request processed by the computing unit to the target SSD.
  • the above determining unit may be the dispatch engine shown in FIG. 4.
  • optionally, the determining unit is specifically configured to: if the data processing request is from a pass-through submission queue of the storage controller, determine that the type of the data processing request is the pass-through type; and if the data processing request is from a background computing submission queue of the storage controller, determine that the type of the data processing request is the background computing type.
  • optionally, the processing unit is further specifically configured to: extract the data processing request from a pass-through submission queue in the JBOF, where the type of the data processing request is the pass-through type, and forward the data processing request directly to the target SSD; and extract the data processing request from a background computing submission queue in the JBOF, where the type of the data processing request is the background computing type, send the data processing request to the computing unit in the JBOF, and send the data processing request processed by the computing unit to the target SSD.
  • optionally, the determining unit is further specifically configured to: if the data processing request is a write request, determine that the type of the data processing request is the background computing type; and if the data processing request is a read request, determine that the type of the data processing request is the pass-through type.
  • in an optional embodiment, the obtaining unit 1110 may be a transceiver 1240, the determining unit 1120 and the processing unit 1130 may be a processor 1220, and the apparatus may further include an input/output interface 1230 and a memory 1210, as shown in FIG. 12.
  • FIG. 12 is a schematic block diagram of an apparatus for transmitting a data processing request according to another embodiment of the present application.
  • the apparatus 1200 for transmitting a data processing request shown in FIG. 12 may include a memory 1210, a processor 1220, an input/output interface 1230, and a transceiver 1240.
  • the memory 1210, the processor 1220, the input/output interface 1230, and the transceiver 1240 are connected by an internal connection path.
  • the memory 1210 is configured to store instructions, and the processor 1220 is configured to execute the instructions stored in the memory 1210, to control the input/output interface 1230 to receive input data and information and to output data such as operation results, and to control the transceiver 1240 to send signals.
  • the transceiver 1240 is configured to acquire a data processing request sent by a storage controller, where the data processing request is used to access a target solid state hard disk SSD in the JBOF;
  • the processor 1220 is configured to determine a type of the data processing request acquired by the acquiring unit, where the type of the data processing request includes a pass-through type and a background computing type;
  • the processor 1220 is configured to forward the data processing request directly to the target SSD if the type of the data processing request is the pass-through type;
  • the processor 1220 is further configured to: if the type of the data processing request is the background computing type, send the data processing request to a computing unit in the JBOF, and send the data processing request processed by the computing unit to the target SSD.
  • it should be understood that, in the embodiments of this application, the processor 1220 may be a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to execute related programs, to implement the technical solutions provided in the embodiments of this application.
  • the transceiver 1240, also referred to as a communication interface, uses a transceiver apparatus such as, but not limited to, a transceiver, to implement communication between the apparatus 1200 and other devices or communication networks.
  • the memory 1210 can include read only memory and random access memory and provides instructions and data to the processor 1220.
  • a portion of processor 1220 may also include a non-volatile random access memory.
  • the processor 1220 can also store information of the device type.
  • each step of the above method may be completed by an integrated logic circuit of hardware in the processor 1220 or an instruction in the form of software.
  • the steps of the method for transmitting a data processing request disclosed in the embodiments of this application may be directly performed and completed by a hardware processor, or performed and completed by a combination of hardware and software modules in the processor.
  • the software module can be located in a conventional storage medium such as random access memory, flash memory, read only memory, programmable read only memory or electrically erasable programmable memory, registers, and the like.
  • the storage medium is located in the memory 1210, and the processor 1220 reads the information in the memory 1210 and performs the steps of the above method in combination with its hardware. To avoid repetition, it will not be described in detail here.
  • it should be understood that, in the embodiments of this application, the processor may be a central processing unit (CPU), or may be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
  • the apparatus 1300 for transmitting a data processing request shown in FIG. 13 includes a receiving unit 1310, a determining unit 1320, and a processing unit 1330.
  • a receiving unit configured to receive a data processing request, where the data processing request is used to access a target solid state hard disk SSD in the solid state hard disk cluster JBOF controlled by the storage controller;
  • a determining unit configured to determine a type of the data processing request received by the receiving unit, where the type of the data processing request includes a pass-through type and a background calculation type
  • a processing unit configured to: if the type of the data processing request is a pass-through type, process the data processing request, and place the processed data processing request into a straight-through submission queue of the storage controller;
  • the processing unit is further configured to: if the type of the data processing request is a background computing type, put the data processing request into a background computing type submission queue of the storage controller.
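  • a corresponding sketch of the storage-controller side split is given below: a pass-through request is processed locally before being queued, while a background computing request is queued as-is for the computing unit in the JBOF. translate_to_ssd_cmd and sq_push are hypothetical helpers assumed here for illustration.

```c
#include <stdbool.h>
#include <stdint.h>

#define PT_SQ_ID  1u
#define BG_SQ_ID  2u

/* Hypothetical helpers: translation of a host request into an SSD-level NVMe
 * command (the processing done locally for pass-through requests) and the
 * enqueue primitive shared by both submission queues.                       */
extern int  translate_to_ssd_cmd(const void *host_req, uint8_t nvme_cmd_out[64]);
extern bool sq_push(uint16_t sq_id, const void *entry);

/* Storage-controller side of the split: a pass-through request is processed
 * locally and the result is queued for direct execution by the JBOF, whereas
 * a background-computing request is queued as-is and processed later by the
 * computing unit in the JBOF.                                               */
static bool handle_host_request(const void *host_req, bool pass_through)
{
    if (pass_through) {
        uint8_t nvme_cmd[64];
        if (translate_to_ssd_cmd(host_req, nvme_cmd) != 0)
            return false;
        return sq_push(PT_SQ_ID, nvme_cmd);
    }
    return sq_push(BG_SQ_ID, host_req);
}
```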
  • in an optional embodiment, the receiving unit 1310 may be a transceiver 1440, the processing unit 1330 and the determining unit 1320 may be a processor 1420, and the apparatus may further include an input/output interface 1430 and a memory 1410, as shown in FIG. 14.
  • FIG. 14 is a schematic block diagram of an apparatus for transmitting a data processing request according to another embodiment of the present application.
  • the apparatus 1400 for transmitting a data processing request shown in FIG. 14 may include a memory 1410, a processor 1420, an input/output interface 1430, and a transceiver 1440.
  • the memory 1410, the processor 1420, the input/output interface 1430, and the transceiver 1440 are connected through an internal connection path. The memory 1410 is configured to store instructions, and the processor 1420 is configured to execute the instructions stored in the memory 1410, to control the input/output interface 1430 to receive input data and information and to output data such as operation results, and to control the transceiver 1440 to send signals.
  • the transceiver 1440 is configured to receive a data processing request, where the data processing request is used to access a target solid state hard disk SSD in the solid state hard disk cluster JBOF controlled by the storage controller;
  • the processor 1420 is configured to determine the type of the data processing request received by the receiving unit, where the type of the data processing request includes a pass-through type and a background computing type;
  • the processor 1420 is configured to: if the type of the data processing request is the pass-through type, process the data processing request and place the processed data processing request into a pass-through submission queue of the storage controller;
  • the processor 1420 is further configured to: if the type of the data processing request is the background computing type, place the data processing request into a background computing submission queue of the storage controller. It should be understood that, in the embodiments of this application, the processor 1420 may be a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to execute related programs, to implement the technical solutions provided in the embodiments of this application.
  • the transceiver 1440, also referred to as a communication interface, uses a transceiver apparatus such as, but not limited to, a transceiver, to implement communication between the apparatus 1400 and other devices or communication networks.
  • the memory 1410 can include read only memory and random access memory and provides instructions and data to the processor 1420.
  • a portion of processor 1420 may also include a non-volatile random access memory.
  • the processor 1420 can also store information of the device type.
  • each step of the above method may be completed by an integrated logic circuit of hardware in the processor 1420 or an instruction in a form of software.
  • the steps of the method for transmitting a data processing request disclosed in the embodiments of this application may be directly performed and completed by a hardware processor, or performed and completed by a combination of hardware and software modules in the processor.
  • the software module can be located in a conventional storage medium such as random access memory, flash memory, read only memory, programmable read only memory or electrically erasable programmable memory, registers, and the like.
  • the storage medium is located in the memory 1410, and the processor 1420 reads the information in the memory 1410 and, in conjunction with its hardware, performs the steps of the above method. To avoid repetition, it will not be described in detail here.
  • it should be understood that, in the embodiments of this application, the processor may be a central processing unit (CPU), or may be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
  • "B corresponding to A" means that B is associated with A, and B may be determined according to A. However, determining B according to A does not mean that B is determined based only on A; B may alternatively be determined based on A and/or other information.
  • the sequence numbers of the foregoing processes do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and shall not constitute any limitation on the implementation processes of the embodiments of this application.
  • the disclosed systems, devices, and methods may be implemented in other manners.
  • the device embodiments described above are merely illustrative.
  • the division of units is merely logical function division, and there may be other division manners in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be in an electrical, mechanical or other form.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • when software is used to implement the embodiments, they may be implemented fully or partially in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions according to the embodiments of this application are fully or partially generated.
  • the computer can be a general purpose computer, a special purpose computer, a computer network, or other programmable device.
  • the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center in a wired manner (for example, over coaxial cable, optical fiber, or a digital subscriber line (DSL)) or a wireless manner (for example, over infrared, radio, or microwave).
  • the computer-readable storage medium may be any usable medium accessible to a computer, or a data storage device, such as a server or a data center, integrating one or more usable media.
  • the usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a digital video disc (DVD)), a semiconductor medium (for example, a solid state disk (SSD)), or the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Advance Control (AREA)

Abstract

This application provides a method and an apparatus for transmitting a data processing request. The method includes: a solid state drive cluster (JBOF) obtains a data processing request sent by a storage controller, where the data processing request is used to access a target solid state drive (SSD) in the JBOF; the JBOF determines the type of the data processing request, where the type includes a pass-through type and a background computing type; if the type of the data processing request is the pass-through type, the JBOF forwards the data processing request directly to the target SSD; and if the type of the data processing request is the background computing type, the JBOF sends the data processing request to a computing unit in the JBOF and sends the data processing request processed by the computing unit to the target SSD. This helps increase the speed at which the storage controller executes pass-through data processing requests and reduces the latency of executing pass-through data processing requests.

Description

用于传输数据处理请求的方法和装置 技术领域
本申请涉及通信领域,并且更具体地,涉及用于传输数据处理请求的方法和装置。
背景技术
目前,全闪存阵列(All Flash Array,AFA)通常由存储控制器(Storage Controller)和固态硬盘(solid state drive,SSD)盘集(Just Bunch of Flash SSD,JBOF)构成,其中,存储控制器用于执行主机(Host)发送的数据处理请求,数据处理请求可以包括IO请求、纠删码(Erasure Code,EC)请求、垃圾回收(Garbage Collecting,GC)请求等。存储控制器可以根据数据处理请求对待处理的数据进行处理,并将处理后的数据存入JBOF中,或将从JBOF中读出的数据作为处理后的数据发送至主机。
JBOF中的SSD每秒进行读写操作的次数(Input/Output Operations Per Second,IOPS)较高,每千兆字节(GB)的IOPS和读写性能相比较于传统的硬盘驱动器(Hard Disk Drive,HDD)的IOPS和读写性能有数倍的增长。
然而,随着客户容量需求的不断增大,存储控制器中需要同时执行的数据处理请求的数量越来越多,但是,存储控制器中处理器的计算能力有限,限制了存储控制器可以同时执行数据处理请求的数量,降低了存储控制器执行数据处理请求的速度,增加了存储控制器执行数据请求的时延。
发明内容
本申请提供一种用于传输数据处理请求的方法和装置,以提高存储控制器执行数据处理请求的速度,有利于降低存储控制器执行数据处理请求的时延。
第一方面,提供了一种用于传输数据处理请求的方法,包括:固态硬盘簇JBOF获取存储控制器发送的数据处理请求,所述数据处理请求用于访问所述JBOF中的目标固态硬盘SSD;所述JBOF确定所述数据处理请求的类型,所述数据处理请求的类型包括直通型和后台计算型;若所述数据处理请求的类型为直通型,则所述JBOF直接向所述目标SSD转发所述数据处理请求;若所述数据处理请求的类型为后台计算型,则所述JBOF向所述JBOF中的计算单元发送所述数据处理请求,并将所述计算单元处理后的数据处理请求发送至所述目标SSD。
在本申请实施例中,通过将数据处理请求分为直通型数据处理请求和后台计算型数据处理请求,其中,后台计算型的数据处理请求指示的数据处理方式所占用的计算资源可以不再由存储控制器中的CPU提供,而由JBOF中的计算单元提供,在一定程度上,释放了存储控制器中的CPU执行后台计算型的数据处理请求的计算资源,使得存储控制器中的CPU可以同时更多的处理直通型数据处理请求,有利于提高直通型存储控制器执行数据处理请求的速度,减少存储控制器执行直通型数据处理请求的时延。
可选地,数据处理请求用于指示对数据处理请求中携带的数据或对SSD中存储的数据的处理方式,具体地处理方式可以包括读写数据、EC操作、GC操作等。
可选地,上述SSD可以是NVMe SSD,还可以是SATA SSD,本申请实施例对此不 做具体限定。
可选地,上述数据处理请求可以是基于接口协议封装后的请求,例如,可以是基于在网络上传输的NVMe(NVMe over fabric,NVMeof)协议进行封装的NVMe命令。
结合第一方面,在第一方面的一种可能的实现方式中,所述JBOF确定所述数据处理请求的类型,包括:若所述数据处理请求是来自于所述存储控制器的直通型提交队列的,则所述JBOF确定所述数据处理请求的类型为直通型;若所述数据处理请求是来自于所述存储控制器的后台计算型提交队列的,则所述JBOF确定所述数据处理请求的类型为后台计算型。
通过根据获取数据处理请求的提交队列的类型确定数据处理请求的类型,相对于直接数据处理请求中携带类型的方案,在一定程度上减少了传输数据处理请求的开销。
结合第一方面,在第一方面的一种可能的实现方式中,所述若所述数据处理请求的类型为直通型,则所述JBOF直接向所述目标SSD转发所述数据处理请求,包括:所述JBOF从所述JBOF中的直通型的提交队列中提取所述数据处理请求,所述数据处理请求的类型为直通型;所述JBOF直接向所述目标SSD转发所述数据处理请求;所述若所述数据处理请求的类型为后台计算型,则所述JBOF向所述JBOF中的计算单元发送所述数据处理请求,并将所述计算单元处理后的数据处理请求发送至所述目标SSD,包括:所述JBOF从所述JBOF中的后台计算型提交队列中提取所述数据处理请求,所述数据处理请求的类型为后台计算型;所述JBOF向所述JBOF中的计算单元发送所述数据处理请求,并将所述计算单元处理后的数据处理请求发送至所述目标SSD。
结合第一方面,在第一方面的一种可能的实现方式中,所述JBOF确定所述数据处理请求的类型,包括:若所述数据处理请求为写请求,则所述JBOF确定所述数据处理请求的类型为后台计算型;若所述数据处理请求为读请求,则所述JBOF确定所述数据处理请求的类型为直通型。
通过直接根据数据处理请求是读请求还是写请求,确定数据处理请求的类型,以减少对传统数据处理请求的格式或者提交命令队列的格式的变化,在一定程度上可以降低由于上述变化带来的软件或硬件方面的成本。例如,在该方案中可以不改变存储控制器中设置命令队列标识驱动。
第二方面,提供一种用于传输数据处理请求的方法,包括:存储控制器接收数据处理请求,所述数据处理请求用于访问所述存储控制器控制的固态硬盘簇JBOF中的目标固态硬盘SSD;所述存储控制器确定所述数据处理请求的类型,所述数据处理请求的类型包括直通型和后台计算型;若所述数据处理请求的类型为直通型,则所述存储控制器对所述数据处理请求进行处理,并将处理后的数据处理请求放入所述存储控制器的直通型的提交队列中;若所述数据处理请求的类型为后台计算型,则所述存储控制器将所述数据处理请求放入所述存储控制器的后台计算型的提交队列中。
在本申请实施例中,通过将数据处理请求分为直通型数据处理请求和后台计算型数据处理请求,其中,后台计算型的数据处理请求指示的数据处理方式所占用的计算资源可以不再由存储控制器中的CPU提供,而由JBOF中的计算单元提供,在一定程度上,释放了存储控制器中的CPU执行后台计算型的数据处理请求的计算资源,使得存储控制器中的CPU可以同时更多的处理直通型数据处理请求,有利于提高直通型存储控制器执行数据处理请求的速度,减少存储控制器执行直通型数据处理请求的时延。
可选地,上述数据处理请求可以是基于接口协议封装后的请求,例如,可以是基于 在网络上传输的NVMe(NVMe over fabric,NVMeof)协议进行封装的NVMe命令。
可选地,上述获取数据处理请求可以包括从存储控制器与主机共享的提交队列中提取数据处理请求。
可选地,作为一个实施例,所述数据处理请求为写请求,所述数据处理请求的类型为后台计算型;所述数据处理请求为读请求,所述数据处理请求的类型为直通型。
通过直接根据数据处理请求是读请求还是写请求,确定数据处理请求的类型,以减少对传统数据处理请求的格式或者提交命令队列的格式的变化,在一定程度上可以降低由于上述变化带来的软件或硬件方面的成本。例如,在该方案中可以不改变存储控制器中设置命令队列标识驱动。
第三方面,提供了一种用于传输数据处理请求的装置,所述装置包括用于执行第一方面或第一方面任一种可能实现方式中的各个模块。
第四方面,提供了一种用于传输数据处理请求的装置,所述装置包括用于执行第二方面或第二方面任一种可能实现方式中的各个模块。
第五方面,提供了一种用于传输数据处理请求的装置,包括收发器、处理器和存储器。该处理器用于控制收发器收发信号,该存储器用于存储计算机程序,该处理器用于从存储器中调用并运行该计算机程序,使得该终端设备执行上述第一方面中的方法。
第六方面,提供了一种用于传输数据处理请求的装置,包括收发器、处理器和存储器。该处理器用于控制收发器收发信号,该存储器用于存储计算机程序,该处理器用于从存储器中调用并运行该计算机程序,使得该网络设备执行第二方面中的方法。
第七方面,提供一种通信装置。该通信装置可以为上述方法设计中的用于传输数据处理请求的装置,或者为设置在用于传输数据处理请求的装置中的芯片。该通信装置包括:存储器,用于存储计算机可执行程序代码;通信接口,以及处理器,处理器与存储器、通信接口耦合。其中存储器所存储的程序代码包括指令,当处理器执行所述指令时,使通信装置执行上述各方面中的方法。
第八方面,提供一种存储系统,所述存储系统包括存储设备和存储控制器,所述存储设备包括上述第三方面中的装置,所述存储控制器包括第四方面中所述的装置。
第九方面,提供了一种计算机程序产品,所述计算机程序产品包括:计算机程序代码,当所述计算机程序代码在计算机上运行时,使得计算机执行上述各方面中的方法。
第十方面,提供了一种计算机可读介质,所述计算机可读介质存储有程序代码,当所述计算机程序代码在计算机上运行时,使得计算机执行上述各方面中的方法。
附图说明
图1是本申请实施例的全闪存阵列的架构示意图。
图2是基于AFA的存储系统向JBOF中写数据的方法的示意性流程图。
图3是基于AFA的存储系统从JBOF中读数据的方法的示意性流程图。
图4是本申请实施例的一种基于AFA的存储系统的示意性框图。
图5是本申请另一实施例的基于超融合技术的基于AFA的存储系统的示意性框图。
图6是本申请另一实施例的一种基于AFA的存储系统的示意性框图。
图7是本申请实施例的传输数据处理请求的方法的示意性流程图。
图8是本申请实施例的传输数据处理请求的方法的示意性流程图。
图9是本申请实施例的传输NVMe命令的方法的示意性流程图。
图10是本申请实施例的传输NVMe命令的方法的示意性流程图。
图11是本申请实施例的用于传输数据处理请求的装置的示意性框图。
图12是本申请另一实施例的用于传输数据处理请求的装置的示意性框图。
图13是本申请实施例的用于传输数据处理请求的装置的示意性框图。
图14是本申请另一实施例的用于传输数据处理请求的装置的示意性框图。
具体实施方式
下面将结合附图,对本申请中的技术方案进行描述。
图1是基于全闪存阵列的存储系统的示意图。如图所示,在存储系统中包括至少一个主机110、至少一个存储控制器120、交换机130和至少一个JBOF140。其中,至少一个主机中的每个主机可以与至少一个存储控制器相连,至少一个存储控制器可以通过交换机与至少一个JBOF中的任意JBOF进行通信,例如,存储控制器可以对JBOF的存储空间进行访问,或者对JBOF的存储空间中的数据进行处理。
主机(Host):可以申请上述至少一个JBOF中的任意某个JBOF作为目标端(Target),也就是说,主机可以通过发送数据处理请求访问上述目标端,以从上述目标端读取数据,或者将数据写入目标端的存储空间中。
具体地,上述数据处理请求可以是管理命令(Admin Command)或IO请求。主机可以通过管理命令对目标端进行控制,主机还可以通过IO请求访问目标端中的存储空间。
例如,上述目标端可以是快速非易失存储器(Non-Volatile Memory express,NVMe)SSD,主机可以通过NVMe命令(NVMe Command)对NVMe SSD进行控制,还可以通过被封装为NVMe命令的IO请求访问NVMe SSD。
存储控制器(Storage Controller,SC):又称存储处理控制器(Storage Processor Controller,SPC),用于接收主机发送的数据处理请求,并根据数据处理请求对目标端的存储空间中的数据进行处理,或将数据从目标端的存储空间中读出,或者将所述数据处理请求中的数据写入所述目标端。存储控制器包括至少一个CPU(例如,X86架构的高性能CPU)和至少一个缓存,其中,CPU用于对数据处理请求进行计算,缓冲可以用于缓存数据处理请求(例如,写请求)中携带的数据。
需要说明的是,上述缓存可以是备电保护的内存缓冲区(Power Backup Memory Buffer,PBMB)或非易失性内存(Non-volatile Memory,NVM)。PBMB用于缓存
交换机(Switch):用于将存储控制器中的数据处理请求转发至JBOF,或用于将数据处理请求中携带的数据进行汇聚后转发至JBOF。
应理解,上述交换机在不同类型的网络架构中可以是不同类型的具备转发和共享能力的交换机,例如,可以是以太网交换机,无限带宽技术(InfiniBand,IB)交换机,高速串行计算机扩展总线标准(Peripheral Component Interconnect express,PCIe)交换机等,本申请实施例对于交换机的具体类型不做限定。
JBOF:是在一个底板上安装的带有多个SSD的存储设备,在逻辑上把多个物理SSD一个接一个串联在一起,为数据存储提供一个较大的存储空间。
基于上文中描述的存储系统,下文结合图2和图3介绍所述数据处理请求为IO请求时,对IO请求的处理过程。
图2是基于AFA的存储系统向JBOF中写数据的方法的示意性流程图。图2所示的方法包括:
210,主机向存储控制器发送写请求。
具体地,步骤210的实现方式可以是主机将封装为NVMe命令的写请求通过远程直接数据存取(Remote Direct Memory Access,RDMA)的方式存入存储控制器中的提交队列中,以使存储控制器可以从该提交队列中提取写请求。
220,存储控制器对写请求进行解封装,将写请求缓存在本地的PBMB中。
230,存储控制器向主机返回确认字符(Acknowledgement,ACK)表示写请求的操作完成。
具体地,存储控制器可以将ACK封装为NVMe命令,并将封装后的命令存入存储控制器的完成队列中,以便于主机从完成队列中获取ACK确定写请求的操作成功。
需要说明的是,作为主机而言,在存储控制器向服务请求返回ACK后,主机就可以认为写请求的操作流程结束,至于存储控制器对写请求中的数据进行数据处理,并将处理后的数据存入JBOF的存储空间等的后续操作,对于主机而言是不可见的,也就是主机并不关心存储控制器在返回ACK之后的后续操作。
240,存储控制器通过交换机将上述写请求中的数据写入JBOF中。
图3是基于AFA的存储系统从JBOF中读数据的方法的示意性流程图。图3所示的方法包括:
310,主机向存储控制器发送读请求。
具体地,主机将封装为NVMe命令的读请求存入主机的提交队列中,并以RDMA的方式将该读请求存入存储控制器的提交队列中,以便存储控制器从存储控制器的提交队列中提取读请求。
320,存储控制器对读请求进行解封装。
具体地,存储控制器将从提交队列中提取的读请求进行解封装,生成JBOF中SSD可以直接处理的读请求。
330,存储控制器通过交换机将解封装后的读请求发送至JBOF,从JBOF中读取待读取的数据。
340,存储控制器将上述待读取的数据返回主机。
现有技术中,上述写过程和读过程中所需的计算过程,以及其他的数据处理过程所需的计算,都需要占用存储控制器中的CPU的计算资源。然而,随着客户容量需求的不断增大,存储控制器中需要同时执行的数据处理请求的数量越来越多,即使JBOF中SSD的IOPS性能再高可以满足当前的客户容量需求,存储控制器中的处理器也无法同时为大量的数据处理请求提供足够的计算资源,从而限制了可以同时向JBOF发送的IO请求的数量,可能无法满足SSD每秒可以处理的IO请求的数量,也就是说,目前存储控制器中处理器的计算能力不仅无法满足客户容量需求,在一定程度上,还局限了JBOF中SSD的性能。
然而,从上文中描述的AFA的存储系统中的数据读写过程中,可以看出,主机在向JBOF中写数据时,存储控制器将数据写入PBMB后,主机就可以获取存储控制器返回的IO写成功,此时,对于主机而言写过程可以理解为结束,后续存储控制器将数据写入JBOF的过程,主机并不关心。而在读数据的过程中,由于每次的数据读取都需要从JBOF中将数据读出,主机获取数据所需的路径比主机写数据的路径长,也就是说,由于读数 据的路径本身较长,主机等待的时间也就较长,读请求对于时延比较敏感属于时延敏感型数据处理请求,而在写数据的过程中,主机需要等待的时间较短,写请求可以理解为时延不敏感型数据处理请求。
为了解决存储控制器资源有限的问题,本申请实施例基于数据处理请求对传输时延的要求,将数据处理请求分为时延敏感型数据处理请求和时延不敏感型数据处理请求,并利用上述两种类型的数据处理请求对传输时延需求,提供一种传输数据处理请求的方法和装置,其中时延敏感型数据处理请求可以是对传输时延要求较高的数据处理请求,时延不敏感型数据处理请求可以是对传输时延要求较低的数据处理请求。
为了便于理解本申请实施例,首先简单介绍适用于本申请实施例的基于AFA的存储系统。图4是本申请实施例的基于AFA的存储系统的示意性框图。应理解,图4所示的AFA的存储系统可以是基于图1所示的AFA的存储系统改进后的架构,并主要改进AFA的存储系统中的存储控制器和JBOF的结构做了改进,为了简洁,下文中主要介绍存储控制器和JBOF,其他AFA的存储系统相关的单元可以参见上文中的描述。
图4所示的AFA的存储系统包括至少一个存储控制器410,交换机420和至少一个JBOF 430。其中,存储控制器包括命令队列标识驱动(Initiator Driver)单元和块设备管理软件,至少一个JBOF中的每个JBOF中包括分流引擎(Dispatch Engine)、第一处理单元和第二处理单元。
存储控制器中的命令队列标识驱动(Initiator Driver)单元,用于创建存储控制的提交队列(Submission Queue,SQ),该提交队列用于将数据处理请求从存储控制器传输到JBOF的,该提交队列可以包括多种类型,例如,时延敏感型和时延不敏感型。不同类型的提交队列用于存储不同类型的数据处理请求,例如,时延不敏感型的提交队列用于存储时延不敏感型的数据处理请求,时延敏感型的提交队列用于存储时延敏感型的数据处理请求。
上述命令队列标识驱动单元还用于确定数据处理请求的类型为时延敏感型还是时延不敏感型。
需要说明的是,为了实现将将数据处理请求从存储控制器传输到JBOF的提交队列,还需要JBOF在JBOF中创建的与存储控制器的提交队列对应的提交队列,也就是说,存储控制器中的提交队列和JBOF中的提交队列在逻辑上组成了用于将数据处理请求从存储控制器传输到JBOF的提交队列。其中存储控制器中的提交队列可以占用存储控制器中内存中的存储资源,JBOF中的提交队列可以占用JBOF中缓存中的存储空间。
其中,JBOF中还可以包含创建提交队列的单元,通过与存储控制器创建提交队列相同的方式创建不同类型的提交队列。
上述命令队列驱动单元还可以用于建立完成队列(Completion Queue,CQ),该完成队列用于存储针对已完成的数据处理请求的反馈结果。上述命令队列驱动单元还可以为针对不同类型的数据处理请求的反馈结果创建不同类型的完成队列。例如,时延不敏感型的完成队列用于存储针对时延不敏感型的数据处理请求的反馈,时延敏感型的完成队列用于针对存储时延敏感型的数据处理请求的反馈。
存储控制器中的存储空间管理软件用于将从主机接收的数据处理请求转换成JBOF可以直接处理的数据处理请求。例如,上述存储空间管理软件可以是块设备管理软件时,该块设备管理软件可以将从主机接收的数据处理请求中的存储地址转换为包含存储块的存储地址,以便JBOF可以直接处理。
应理解,上述存储空间管理软件可以是块设备管理软件或字符设备管理软件,本申请实施例对此不作具体限定。
JBOF中的分流引擎:用于根据命令队列标识驱动单元确定的数据处理请求的类型将不同类型的数据处理请求发送至用于处理不同类型的数据处理请求的处理单元,即可以将时延敏感型数据处理请求发送到第一处理单元,将时延不敏感型数据处理请求发送至第二处理单元。
应理解,上述分流引擎可以由FPGA或ASIC实现,或通过软件实现。
还应理解,上述分流引擎还包含与交换机相连的端口,例如,支持RDMA的Ethernet的网络接口,用于通过交换机接收命令队列标识驱动单元发送的数据处理请求。
第一处理单元:用于处理上述时延敏感型的数据处理请求和/或硬件卸载型数据处理请求。
具体地,上述硬件卸载型数据处理请求可以理解为不需要通过存储控制器中的硬件进行处理的数据处理请求,该数据处理请求所需的硬件处理过程可以通过第一处理单元中的硬件实现,也就是说,上述硬件卸载型数据处理请求可以是卸载存储控制器中的硬件处理过程的数据处理请求。
若数据处理请求是时延敏感型数据处理请求,所述第一处理单元可以直接将该数据处理请求转发JBOF;若数据处理请求是硬件卸载型数据处理请求,则第一处理单元可以利用自身的处理性能,将上述时延敏感型数据处理请求转化为JBOF可以直接处理的数据处理请求。例如,第一处理单元是FPGA时,可以利用FPGA的低时延、硬件化处理的优势,将上述时延敏感型数据处理请求转化为JBOF可以直接处理的数据处理请求,然后把结果返回给存储控制器。
从另一方面理解,上述时延敏感型数据处理请求还可以属于JBOF可以直接处理的数据处理请求,上述后台卸载型数据处理请求还可以属于JBOF不可以直接处理的数据处理请求。
应理解,第一处理单元可以由现场可编程门阵列(Field-Programmable Gate Array,FPGA)或为专门目的而设计的集成电路(Application Specific Integrated Circuit,ASIC)实现。
需要说明的是,若第一处理单元接收的时延敏感型数据处理请求如果JBOF中的SSD可以直接处理的,则第一处理单元可以将该时延敏感型数据处理请求直接转发给JBOF,也就是说,第一处理单元可以将时延敏感型数据处理请求透传给JBOF,由JBOF执行该时延敏感型数据处理请求。
还应理解,第一处理单元可以和分流引擎集成在一个物理器件中,还可以分别设置在两个不同的物理器件中,本申请实施例对此不做具体限定。
第二处理单元:用于处理上述时延不敏感型的数据处理请求,可以根据数据处理请求指示的数据处理方式对数据处理请求中携带的数据进行处理,或对JBOF的存储空间中存储的数据按照数据处理请求指示的数据处理方法进行数据处理。例如,对数据处理请求中携带的数据进行EC计算,或对JBOF中的存储空间进行GC操作。
应理解,第二处理单元可以由至少一个廉价的CPU(或低性能的CPU),例如,可以是先进精简指令集处理器(Advanced Reduced Instruction Set Computer Machines,ARM)核或者无内部互锁流水级的微处理器(Microprocessor without interlocked piped stages,MIPS)核。
通过在JBOF中设置廉价的CPU用作第二处理单元,帮助存储处理控制单元中的CPU处理第二处理单元,在减小存储处理控制单元中的CPU的压力的同时,在一定程度上,降低改进JBOF的架构所需的成本。
可以看出,图4中的JBOF对比图3中的JBOF而言,又新增了基于数据处理请求的类型对数据处理请求进行分流,以及执行时延不敏感型数据处理请求所需的计算过程等功能,比图3中的JBOF的功能更加智能,因此图4中的JBOF又称为智能JBOF(intelligent Bunch of Flash,JBOF)。也就是说,下文中具有基于数据处理请求的类型对数据处理请求进行分流,以及执行时延不敏感型数据处理请求所需的计算过程等功能的JBOF都可以称为iBOF。
应理解,上述JBOF中的各个功能单元可以是集成在一个片上系统(System on Chip,SoC)中,该SoC可以包含一个CPU,可以用作第二处理单元,还可以包含一个FPGA或ASIC可以用作分流引擎和第一处理单元。上述JBOF还可以是由分离的硬件实现的,也就是说用作第二处理单元的CPU和用作分流引擎和第一处理单元的FPGA或ASIC是两个独立的硬件。
还应理解,图1和图4仅示出了一种可能的AFA的存储系统,本申请实施例还可以适用于其他AFA的存储系统。应理解,为了简洁,下列AFA的存储系统中各单元的作用可以参见上文中的描述。
例如,图5是本申请另一实施例的基于超融合技术的AFA的存储系统的示意性框图。在图5所示的基于超融合(Hyper-Converged Infrastructure,HCI)技术的AFA的存储系统中,包括至少一个HCI主机510、至少一个JBOF520和交换机530。
HCI主机510,可以理解为Host和SPC的混合体,也就是说,Host以超融合形态与SPC部署在一起形成的一个新的主机。应理解,HCI主机510仅仅改变了Host和SPC之间的部署方式,但是依然可以实现上文中描述的Host和SPC原本可以实现的各个功能。
JBOF520,即上文中提到的智能JBOF,该JBOF可以通过交换机与HCI主机相连。
可选地,上述JBOF与HCI主机之间还可以直接通过通信线(例如,总线)相连,而不再需要经由交换机相连,本申请实施例对于JBOF与HCI主机之间具体的连接方式不做限定。
又例如,图6是本申请另一实施例的一种AFA的存储系统的示意性框图。图6所示的AFA的存储系统中包括至少一个主机610和至少一个JBOF620。
主机610,包括上文中存储控制器中的各个功能模块,例如命令队列标识软件,除了可以实现上述主机本来的各功能之外,还可以实现上述存储控制器的各个功能。与JBOF位于不同的单元设备中,主机与JBOF之间可以直接通过通信线(例如,总线)相连,而不再需要经由交换机相连。
可选地,上述JBOF与主机之间还可以经由交换机相连,本申请实施例对于JBOF与HCI主机之间具体的连接方式不做限定。
需要说明的是,上文中仅仅从数据处理请求的传输时延的角度将数据处理请求分为时延敏感型数据处理请求和时延不敏感型数据处理请求,本申请实施例还可以根据JBOF对数据处理请求的处理方式,将数据处理请求分为直通型和后台计算型,其中直通型的数据处理请求可以理解为不需要经过所述JBOF的计算单元处理的数据处理请求,后台计算型数据处理请求为需要经过所述JBOF的计算单元处理的数据处理请求。也就是说,上文中的时延敏感型数据处理请求和硬件卸载型数据处理请求可以属于直通型数据处理 请求,上文中的时延不敏感型数据处理请求可以属于后台计算型数据处理请求。相应地,上述第一处理单元又可以称为硬件直通引擎(Hardware Bypass Engine),上述第二处理单元又可以称为后台软件处理器(Background Software Processor)。
下文以JBOF对数据处理请求的处理方式对数据处理请求进行分类的方式为例,结合上文中任意一种AFA的存储系统,介绍本申请实施例的传输数据处理请求的方法。
图7是本申请实施例的传输数据处理请求的方法的示意性流程图。图7所示的方法包括:
710,固态硬盘簇JBOF获取存储控制器发送的数据处理请求,所述数据处理请求用于访问所述JBOF中的目标固态硬盘SSD。
具体地,上述数据处理请求用于访问JBOF中的目标SSD可以理解为根据数据处理请求指示的数据处理方式对目标SSD中的数据进行处理,或根据数据处理请求指示的数据处理方式将后处理后的数据存入目标SSD。
上述数据处理请求用于指示的数据处理方式可以包括读写数据、EC操作、GC操作等。
应理解,上述存储控制器可以是具备存储控制器功能的任何设备,例如可以是图4中的存储控制器,可以是图5中的HCI Host,还可以是图6中的Host,本申请实施例对存储控制器的具体体现形式不做限定。
还应理解,上述SSD可以是NVMe SSD,还可以是(Serial Advanced Technology Attachment,SATA)SSD,本申请实施例对此不做具体限定。
可选地,上述数据处理请求可以是基于接口协议封装后的请求,例如,可以是基于在网络上传输的NVMe(NVMe over fabric,NVMeof)协议进行封装的NVMe命令。
可选地,上述获取数据处理请求可以包括从存储控制器与JBOF共享的提交队列中提取数据处理请求。
720,所述JBOF确定所述数据处理请求的类型,所述数据处理请求的类型包括直通型和后台计算型。
具体地,上述直通型数据处理请求为不需要经过所述JBOF的软件计算单元处理的数据处理请求,或者需要经过所述JBOF的硬件进行处理的数据处理请求,或者处理所述直通型的数据处理请求所需的计算资源可以由存储控制器中的计算单元(例如,存储控制器中的高性能CPU)提供。
上述后台计算型数据处理请求为需要经过所述JBOF的计算单元处理的数据处理请求或者,处理所述后台计算型的数据处理请求所需的计算资源可以由JBOF中的计算单元(例如,低性能的CPU)提供。
可选地,作为一个实施例,步骤720包括:若所述数据处理请求是来自于所述存储控制器的直通型提交队列的,则所述JBOF确定所述数据处理请求的类型为直通型;若所述数据处理请求是来自于所述存储控制器的后台计算型提交队列的,则所述JBOF确定所述数据处理请求的类型为后台计算型。
可选地,作为一个实施例,步骤720包括:若所述数据处理请求为写请求,则所述JBOF确定所述数据处理请求的类型为后台计算型;若所述数据处理请求为读请求,则所述JBOF确定所述数据处理请求的类型为直通型。
具体地,上述确定数据处理请求的类型还可以直接根据数据处理请求指示的数据处理方式直接确定。例如,写请求属于时延不敏感型数据处理请求,可以归类于后台计算 型数据处理请求;读请求属于时延敏感型数据处理请求,可以归类于后台计算型数据处理请求。
通过直接根据数据处理请求是读请求还是写请求,确定数据处理请求的类型,以减少对传统数据处理请求的格式或者提交命令队列的格式的变化,在一定程度上可以降低由于上述变化带来的软件或硬件方面的成本。例如,在该方案中可以不改变存储控制器中设置命令队列标识驱动,按照传统传输数据处理请求的方式,将数据处理请求发送至JBOF中的分流引擎进行分流。
需要说明的是,上述在存储控制器中不设置命令队列标识驱动的方案,可以适用数据处理请求仅包括写请求和读请求的方案。
730,若所述数据处理请求的类型为直通型,则所述JBOF直接向所述目标SSD转发所述数据处理请求。
需要说明的是,上述直通型数据处理请求所需的计算过程可以是由存储控制器中的CPU执行的,也就是说,可以直接通过上述直通型数据处理请求对JBOF中的SSD进行访问。
例如,上述直通型数据处理请求为读请求时,确定该读请求准备读取的数据所在的存储地址所需的计算过程,可以由存储控制器中的CPU进行计算。JBOF可以直接从存储控制器中的CPU确定的存储地址中读取数据。
在本申请实施例中,通过将数据处理请求分为直通型数据处理请求和后台计算型数据处理请求,其中,后台计算型的数据处理请求指示的数据处理方式所占用的计算资源可以不再由存储控制器中的CPU提供,而由JBOF中的计算单元提供,在一定程度上,释放了存储控制器中的CPU执行后台计算型的数据处理请求的计算资源,使得存储控制器中的CPU可以同时更多的处理直通型数据处理请求,有利于提高直通型存储控制器执行数据处理请求的速度,减少存储控制器执行直通型数据处理请求的时延。
可选地,作为一个实施例,步骤730包括:所述JBOF从所述JBOF中的直通型的提交队列中提取所述数据处理请求,所述数据处理请求的类型为直通型;所述JBOF直接向所述目标SSD转发所述数据处理请求。
需要说明的是,上述JBOF中的直通型的提交队列与存储控制器中的直通型的提交队列共同实现将数据处理请求从存储控制器传输到JBOF中。具体来说,存储控制器可以将直通型的数据处理请求存入存储控制器的直通型的提交队列中,在通过网络将存储控制器的直通型的提交队列中的直通型的数据处理请求存到JBOF中的直通型的提交队列中,以完成将直通型的数据处理请求从存储控制器传输到JBOF中。从逻辑上来看,上述BOF中的直通型的提交队列和存储控制器中的直通型的提交队列共同组成一个直通型提交队列,以完成将直通型的数据处理请求从存储控制器传输到JBOF中。
还应理解,上述共同组成一个直通型提交队列的JBOF中的直通型的提交队列和存储控制器中的直通型的提交队列是对应的,也就是说,JBOF可以根据接收的直通型的数据处理请求所在的存储控制器的直通型的提交队列的指示信息,确定存入该直通型的数据处理请求的JBOF的直通型的提交队列。
可选地,作为一个实施例,所述方法还包括:若所述数据处理请求的类型为直通型中的硬件卸载型数据处理请求,则所述JBOF中的硬件处理单元对所述硬件卸载型数据处理请求进行处理,并将处理后的硬件卸载型数据处理请求发送至所述目标SSD。
740,若所述数据处理请求的类型为后台计算型,则所述JBOF向所述JBOF中的计 算单元发送所述数据处理请求,并将所述计算单元处理后的数据处理请求发送至所述目标SSD。
需要说明的是,上述JBOF中的计算单元可以是JBOF中任何具备计算功能的装置,例如,上文中的第二处理单元。
例如,上述数据处理请求为EC请求时,可以通过上述设置在JBOF中的计算单元CPU,对从存储控制器中获取的n份原始数据进行编码,最终得到n+m份数据,其中,n、m为正整数。并由JBOF中的计算单元将最终得到的n+m份数据通过写请求写入JBOF中的SSD中。
还应理解,上述在将n+m份数据通过写请求写入JBOF中的SSD中时,需要进行的选盘操作也可以由上述JBOF中的计算单元执行,还可以由存储控制器中的CPU执行,还可以由其他可以具有选盘功能的装置进行,本申请实施例对此不做具体限定。也就是说,上述JBOF中的计算单元可以仅为数据层面的计算(例如,EC操作)提供计算资源,还可以为数据管理层面(例如,选盘操作)的计算提供计算资源。
又例如,上述数据处理请求为GC请求时,执行GC操作所需的数据读写操作,计算以及SSD中块擦除的操作可以由上述JBOF中的计算单元执行。
可选地,作为一个实施例,步骤740包括:所述JBOF从所述JBOF中的后台计算型提交队列中提取所述数据处理请求,所述数据处理请求的类型为后台计算型;所述JBOF向所述JBOF中的计算单元发送所述数据处理请求,并将所述计算单元处理后的数据处理请求发送至所述目标SSD。
需要说明的是,上述JBOF中的后台计算型的提交队列与存储控制器中的后台计算型的提交队列共同实现将后台计算型的数据处理请求从存储控制器传输到JBOF中。具体来说,存储控制器可以将后台计算型的数据处理请求存入存储控制器的后台计算型的提交队列中,在通过网络将存储控制器的后台计算型的提交队列中的后台计算型的数据处理请求存到JBOF中的后台计算型的提交队列中,以完成将后台计算型的数据处理请求从存储控制器传输到JBOF中。从逻辑上来看,上述BOF中的后台计算型的提交队列和存储控制器中的后台计算型的提交队列共同组成一个后台计算型提交队列,以完成将后台计算型的数据处理请求从存储控制器传输到JBOF中。
还应理解,上述共同组成一个后台计算型提交队列的BOF中的后台计算型的提交队列和存储控制器中的后台计算型的提交队列是对应的,也就是说,JBOF可以根据接收的后台计算型的数据处理请求所在的存储控制器的后台计算型的提交队列的指示信息,确定存入该后台计算型的数据处理请求的JBOF的后台计算型的提交队列。
图8是本申请实施例的传输数据处理请求的方法的示意性流程图。图8所示的方法包括:
810,存储控制器接收数据处理请求,所述数据处理请求用于访问所述存储控制器控制的固态硬盘簇JBOF中的目标固态硬盘SSD。
可选地,上述数据处理请求可以是基于接口协议封装后的请求,例如,可以是基于在网络上传输的NVMe(NVMe over fabric,NVMeof)协议进行封装的NVMe命令。
可选地,上述获取数据处理请求可以包括从存储控制器与主机共享的提交队列中提取数据处理请求。
需要说明的是,上述存储控制器与主机共享的提交队列可以包括存储控制器上的提交队列和主机的提交队列,也就是说,上述存储控制器与主机共享的提交队列是逻辑层 面上的概念,而存储控制器上的提交队列和主机的提交队列是物理层面的概念。存储控制器与主机共享的提交队列用于将存储控制器需要执行的数据处理请求,从主机端传输到存储控制器端。
820,所述存储控制器确定所述数据处理请求的类型,所述数据处理请求的类型包括直通型和后台计算型。
可选地,存储控制器根据预设规则确定所述数据处理请求的类型,该预设规则用于指示不同的数据处理请求对应的类型。
需要说明的是,上述不同的数据处理请求可以指指示不同数据处理方式的数据处理请求,例如读请求和写请求;上述不同的数据处理请求还可以指示不同Host发送的数据处理请求,例如不同优先级的Host发送的数据处理请求可以属于不同的类型。本申请实施例对此不做具体限定。
830,若所述数据处理请求的类型为直通型,则所述存储控制器对所述数据处理请求进行处理,并将处理后的数据处理请求放入所述存储控制器的直通型的提交队列中。
840,若所述数据处理请求的类型为后台计算型,则所述存储控制器将所述数据处理请求放入所述存储控制器的后台计算型的提交队列中。
具体地,提交队列的类型用于指示所述数据处理请求的类型,可以理解为提交队列的类型与数据处理请求的类型对应,提交队列的类型包括直通型提交队列和后台计算型提交队列,其中直通型提交队列中存储的数据处理请求可以是直通型数据处理请求,上述后台计算型提交队列中存储的数据处理请求可以是后台计算型提交队列。
需要说明的是,上述存储控制器的提交队列可以是存储控制器创建的,存储控制器可以在创建提交队列命令中携带指示信息,该指示信息用于指示提交队列的类型,JBOF在接收到创建提交队列命令后,可以根据创建提交队列命令的指示信息确定创建的提交队列的类型。
在本申请实施例中,通过将数据处理请求分为直通型数据处理请求和后台计算型数据处理请求,其中,后台计算型的数据处理请求指示的数据处理方式所占用的计算资源可以不再由存储控制器中的CPU提供,而由JBOF中的计算单元提供,在一定程度上,释放了存储控制器中的CPU执行后台计算型的数据处理请求的计算资源,使得存储控制器中的CPU可以同时更多的处理直通型数据处理请求,有利于提高直通型存储控制器执行数据处理请求的速度,减少存储控制器执行直通型数据处理请求的时延。
可选地,作为一个实施例,所述方法还包括:若所述数据处理请求的类型为读请求,则所述存储控制器确定所述读请求准备读取的数据在JBOF中的存储地址。
下文基于图4所示的基于AFA的存储系统,以上述数据处理请求被封装为NVMe命令进行传输为例,结合图9和图10详细描述本申请实施例的传输数据处理请求的方法。
图9是本申请实施例的传输NVMe命令的方法的示意性流程图。图9所示的方法包括:
910,存储控制器中的命令队列标识驱动在存储控制器中创建两种类型的提交队列,该提交队列用于将存储在提交队列中的NVMe命令传输至JBOF中。
具体地,可以在创建提交队列命令(Create Submission Queue Command)中的某一字段(例如,字段双字11(Double Word11,Dword11)中添加指示队列类型的指示信息。具体的添加方式可以参见表1,其中,创建提交队列命令中的比特位取值为00b时用于指示该提交队列的队列类型为直通型,并且直通型提交队列中用于存储的直通型的 NVMe命令;创建提交队列命令中的比特位取值为01b时用于指示该提交队列的队列类型为后台计算型的NVMe命令。
表1
  • （表1的原图未能再现；其内容为创建提交队列命令中Dword 11取值与提交队列类型的对应关系：00b表示直通型，01b表示后台计算型。）
需要说明的是,上述两种类型的提交队列中的某一种类型的提交队列可以对应至少一个提交队列。
相应地,JBOF也可以使用与上述创建提交队列命令,在JBOF中创建不同类型的提交队列,JBOF中的提交队列与存储控制器中的提交队列一起实现数据处理请求从存储控制器到JBOF的传输。
920,存储控制器中的命令队列标识驱动分别为不同类型的提交队列创建上下文信息,该上下文信息包含了不同类型的提交队列占用的存储地址,以及不同的提交队列对应的完成队列占用的存储地址。
930,存储控制器中的命令队列标识驱动初始化直通型的提交队列以及直通型的完成队列。
具体地,存储控制器中的命令队列标识驱动将直通型提交队列的上下文信息发送至JBOF中,以便在JBOF中建立与存储控制器的直通型提交队列对应的直通型提交队列。
940,存储控制器中的命令队列标识驱动初始化后台计算型的提交队列以及后台计算型的完成队列。
具体地,存储处理控制器中的命令队列标识驱动将后台计算型提交队列的上下文信息发送至JBOF中,以便在JBOF中建立与存储控制器的后台计算型提交队列对应的后台计算型提交队列。
下文以NVMe命令为IO请求为例,结合图10详细描述本申请实施例的传输NVMe命令的方法。图10是本申请实施例的传输NVMe命令的方法的示意性流程图。图10所示的方法包括:
1010,存储控制器中的应用通过NVMe块设备向存储控制器中的命令队列标识驱动发送NVMe命令。
1020,存储控制器中的命令队列标识驱动判断NVMe命令的类型。
具体地,若NVMe命令是直通型,则执行步骤1030。若NVMe命令是后台计算型,则执行步骤1040。
1030,存储控制器中的命令队列标识驱动将NVMe命令存入直通型提交队列中。
1040,存储控制器中的命令队列标识驱动将NVMe命令存入后台计算型提交队列。
1050,JBOF中的分流引擎从提交队列中提取NVMe命令,并确定NVMe命令所在的提交队列的类型为直通型或后台计算型。
具体地,若NVMe命令所在的提交队列的类型是直通型,则执行步骤1060;若NVMe 命令所在的提交队列的类型是后台计算型,则执行步骤1070。
1060,JBOF中的分流引擎将NVMe命令发送至JBOF中的硬件直通引擎,由硬件直通引擎通过NVMe命令访问NVMe SSD。
1070,JBOF中的分流引擎将NVMe命令发送至JBOF中的后台软件处理器,由后台软件处理器根据NVMe命令指示的方式对NVMe SSD中存储的数据或NVMe命令中携带的数据进行处理。
具体地,由后台软件处理器对NVMe命令进行处理,实现对存储控制器执行的后台任务的卸载,生成新的IO请求,并将新的IO请求通过块设备访问NVMe SSD。
1080,NVMe SSD执行来自硬件直通引擎和后台软件处理器发送的IO请求。
上文结合图1至图10详细地说明了本申请实施例的用于传输数据处理请求的方法,下文结合图11至图14简单介绍本申请实施例的用于传输数据处理请求的装置,应理解,图11至图14中所示的装置可以实现上文中描述的方法,为了简洁,在此不再赘述。
图11是本申请实施例的用于传输数据处理请求的装置的示意性框图。图11所示的用于传输数据处理请求的装置1100包括:获取单元1110、确定单元1120和处理单元1130。
获取单元,用于获取存储控制器发送的数据处理请求,所述数据处理请求用于访问所述JBOF中的目标固态硬盘SSD;
确定单元,用于确定所述获取单元获取的所述数据处理请求的类型,所述数据处理请求的类型包括直通型和后台计算型;
处理单元,用于若所述数据处理请求的类型为直通型,则直接向所述目标SSD转发所述数据处理请求;
所述处理单元,还用于若所述数据处理请求的类型为后台计算型,则向所述JBOF中的计算单元发送所述数据处理请求,并将所述计算单元处理后的数据处理请求发送至所述目标SSD。
需要说明的是,上述确定单元可以是图4中所示的分流引擎。
可选地,作为一个实施例,所述确定单元具体用于:若所述数据处理请求是来自于所述存储控制器的直通型提交队列的,则确定所述数据处理请求的类型为直通型;若所述数据处理请求是来自于所述存储控制器的后台计算型提交队列的,则确定所述数据处理请求的类型为后台计算型。
可选地,作为一个实施例,所述处理单元具体还用于:从所述JBOF中的直通型的提交队列中提取所述数据处理请求,所述数据处理请求的类型为直通型;直接向所述目标SSD转发所述数据处理请求;从所述JBOF中的后台计算型提交队列中提取所述数据处理请求,所述数据处理请求的类型为后台计算型;向所述JBOF中的计算单元发送所述数据处理请求,并将所述计算单元处理后的数据处理请求发送至所述目标SSD。
可选地,作为一个实施例,所述装置还包括:所述确定单元具体用于:若所述数据处理请求为写请求,则确定所述数据处理请求的类型为后台计算型;若所述数据处理请求为读请求,则确定所述数据处理请求的类型为直通型。
在可选的实施例中,所述获取单元1110可以为收发机1240,所述确定单元1120和所述处理单元1130可以为处理器1220,所述数据校验装置还可以包括输入/输出接口1230和存储器1210,具体如图12所示。
图12是本申请另一实施例的用于传输数据处理请求的装置的示意性框图。图12所 示的数据校验的装置1200可以包括:存储器1210、处理器1220、输入/输出接口1230、收发机1240。其中,存储器1210、处理器1220、输入/输出接口1230和收发机1240通过内部连接通路相连,该存储器1210用于存储指令,该处理器1220用于执行该存储器1220存储的指令,以控制输入/输出接口1230接收输入的数据和信息,输出操作结果等数据,并控制收发机1240发送信号。
所述收发机1240,用于获取存储控制器发送的数据处理请求,所述数据处理请求用于访问所述JBOF中的目标固态硬盘SSD;
所述处理器1220,用于确定所述获取单元获取的所述数据处理请求的类型,所述数据处理请求的类型包括直通型和后台计算型;
所述处理器1220,用于若所述数据处理请求的类型为直通型,则直接向所述目标SSD转发所述数据处理请求;
所述处理器1220,还用于若所述数据处理请求的类型为后台计算型,则向所述JBOF中的计算单元发送所述数据处理请求,并将所述计算单元处理后的数据处理请求发送至所述目标SSD。
应理解,在本申请实施例中,该处理器1220可以采用通用的中央处理器(Central Processing Unit,CPU),微处理器,应用专用集成电路(Application Specific Integrated Circuit,ASIC),或者一个或多个集成电路,用于执行相关程序,以实现本申请实施例所提供的技术方案。
还应理解,收发机1240又称通信接口,使用例如但不限于收发器一类的收发装置,来实现终端1200与其它设备或通信网络之间的通信。
该存储器1210可以包括只读存储器和随机存取存储器,并向处理器1220提供指令和数据。处理器1220的一部分还可以包括非易失性随机存取存储器。例如,处理器1220还可以存储设备类型的信息。
在实现过程中,上述方法的各步骤可以通过处理器1220中的硬件的集成逻辑电路或者软件形式的指令完成。结合本申请实施例所公开的用于传输数据处理请求的方法可以直接体现为硬件处理器执行完成,或者用处理器中的硬件及软件模块组合执行完成。软件模块可以位于随机存储器,闪存、只读存储器,可编程只读存储器或者电可擦写可编程存储器、寄存器等本领域成熟的存储介质中。该存储介质位于存储器1210,处理器1220读取存储器1210中的信息,结合其硬件完成上述方法的步骤。为避免重复,这里不再详细描述。
应理解,本申请实施例中,该处理器可以为中央处理单元(central processing unit,CPU),该处理器还可以是其它通用处理器、数字信号处理器(digital signal processor,DSP)、专用集成电路(application specific integrated circuit,ASIC)、现成可编程门阵列(field programmable gate array,FPGA)或者其它可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。
图13是本申请实施例的用于传输数据处理请求的装置的示意性框图,图13所示的用于传输数据处理请求的装置1300包括:接收单元1310、确定单元1320和处理单元1330。
接收单元,用于接收数据处理请求,所述数据处理请求用于访问所述存储控制器控制的固态硬盘簇JBOF中的目标固态硬盘SSD;
确定单元,用于确定所述接收单元接收的所述数据处理请求的类型,所述数据处理请求的类型包括直通型和后台计算型,
处理单元,用于若所述数据处理请求的类型为直通型,则对所述数据处理请求进行处理,并将处理后的数据处理请求放入所述存储控制器的直通型的提交队列中;
所述处理单元,还用于若所述数据处理请求的类型为后台计算型,则将所述数据处理请求放入所述存储控制器的后台计算型的提交队列中。
在可选的实施例中,所述接收单元1310可以为收发机1440,所述处理单元1330和所述确定单元1320可以为处理器1420,所述数据校验装置还可以包括输入/输出接口1430和存储器1410,具体如图14所示。
图14是本申请另一实施例的用于传输数据处理请求的装置的示意性框图。图14所示的用于传输数据处理请求的装置1400可以包括:存储器1410、处理器1420、输入/输出接口1430、收发机1440。其中,存储器1410、处理器1420、输入/输出接口1430和收发机1440通过内部连接通路相连,该存储器1410用于存储指令,该处理器1420用于执行该存储器1420存储的指令,以控制输入/输出接口1430接收输入的数据和信息,输出操作结果等数据,并控制收发机1440发送信号。
收发机1440,用于接收数据处理请求,所述数据处理请求用于访问所述存储控制器控制的固态硬盘簇JBOF中的目标固态硬盘SSD;
处理器1420,用于确定所述接收单元接收的所述数据处理请求的类型,所述数据处理请求的类型包括直通型和后台计算型,
处理器1420,用于若所述数据处理请求的类型为直通型,则对所述数据处理请求进行处理,并将处理后的数据处理请求放入所述存储控制器的直通型的提交队列中;
处理器1420,还用于若所述数据处理请求的类型为后台计算型,则将所述数据处理请求放入所述存储控制器的后台计算型的提交队列中。应理解,在本申请实施例中,该处理器1420可以采用通用的中央处理器(Central Processing Unit,CPU),微处理器,应用专用集成电路(Application Specific Integrated Circuit,ASIC),或者一个或多个集成电路,用于执行相关程序,以实现本申请实施例所提供的技术方案。
还应理解,收发机1440又称通信接口,使用例如但不限于收发器一类的收发装置,来实现终端1400与其它设备或通信网络之间的通信。
该存储器1410可以包括只读存储器和随机存取存储器,并向处理器1420提供指令和数据。处理器1420的一部分还可以包括非易失性随机存取存储器。例如,处理器1420还可以存储设备类型的信息。
在实现过程中,上述方法的各步骤可以通过处理器1420中的硬件的集成逻辑电路或者软件形式的指令完成。结合本申请实施例所公开的用于传输数据处理请求的方法可以直接体现为硬件处理器执行完成,或者用处理器中的硬件及软件模块组合执行完成。软件模块可以位于随机存储器,闪存、只读存储器,可编程只读存储器或者电可擦写可编程存储器、寄存器等本领域成熟的存储介质中。该存储介质位于存储器1410,处理器1420读取存储器1410中的信息,结合其硬件完成上述方法的步骤。为避免重复,这里不再详细描述。
应理解,本申请实施例中,该处理器可以为中央处理单元(central processing unit,CPU),该处理器还可以是其它通用处理器、数字信号处理器(digital signal processor,DSP)、专用集成电路(application specific integrated circuit,ASIC)、现成可编程门阵 列(field programmable gate array,FPGA)或者其它可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。
应理解,在本申请实施例中,“与A相应的B”表示B与A相关联,根据A可以确定B。但还应理解,根据A确定B并不意味着仅仅根据A确定B,还可以根据A和/或其它信息确定B。
应理解,本文中术语“和/或”,仅仅是一种描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。另外,本文中字符“/”,一般表示前后关联对象是一种“或”的关系。
应理解,在本申请的各种实施例中,上述各过程的序号的大小并不意味着执行顺序的先后,各过程的执行顺序应以其功能和内在逻辑确定,而不应对本申请实施例的实施过程构成任何限定。
在本申请所提供的几个实施例中,应该理解到,所揭露的系统、装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。
在上述实施例中,可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。所述计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行所述计算机程序指令时,全部或部分地产生按照本申请实施例所述的流程或功能。所述计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。所述计算机指令可以存储在计算机可读存储介质中,或者从一个计算机可读存储介质向另一个计算机可读存储介质传输,例如,所述计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线(例如同轴电缆、光纤、数字用户线(Digital Subscriber Line,DSL))或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心进行传输。所述计算机可读存储介质可以是计算机能够读取的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。所述可用介质可以是磁性介质,(例如,软盘、硬盘、磁带)、光介质(例如,数字通用光盘(Digital Video Disc,DVD))或者半导体介质(例如,固态硬盘(Solid State Disk,SSD))等。
以上所述,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以所述权利要求的保护范围为准。

Claims (11)

  1. 一种用于传输数据处理请求的方法,其特征在于,包括:
    固态硬盘簇JBOF获取存储控制器发送的数据处理请求,所述数据处理请求用于访问所述JBOF中的目标固态硬盘SSD;
    所述JBOF确定所述数据处理请求的类型,所述数据处理请求的类型包括直通型和后台计算型;
    若所述数据处理请求的类型为直通型,则所述JBOF直接向所述目标SSD转发所述数据处理请求;
    若所述数据处理请求的类型为后台计算型,则所述JBOF向所述JBOF中的计算单元发送所述数据处理请求,并将所述计算单元处理后的数据处理请求发送至所述目标SSD。
  2. 如权利要求1所述的方法,其特征在于,所述JBOF确定所述数据处理请求的类型,包括:
    若所述数据处理请求是来自于所述存储控制器的直通型提交队列的,则所述JBOF确定所述数据处理请求的类型为直通型;
    若所述数据处理请求是来自于所述存储控制器的后台计算型提交队列的,则所述JBOF确定所述数据处理请求的类型为后台计算型。
  3. 如权利要求1所述的方法,其特征在于,所述若所述数据处理请求的类型为直通型,则所述JBOF直接向所述目标SSD转发所述数据处理请求,包括:
    所述JBOF从所述JBOF中的直通型的提交队列中提取所述数据处理请求,所述数据处理请求的类型为直通型;
    所述JBOF直接向所述目标SSD转发所述数据处理请求;
    所述若所述数据处理请求的类型为后台计算型,则所述JBOF向所述JBOF中的计算单元发送所述数据处理请求,并将所述计算单元处理后的数据处理请求发送至所述目标SSD,包括:
    所述JBOF从所述JBOF中的后台计算型提交队列中提取所述数据处理请求,所述数据处理请求的类型为后台计算型;
    所述JBOF向所述JBOF中的计算单元发送所述数据处理请求,并将所述计算单元处理后的数据处理请求发送至所述目标SSD。
  4. 如权利要求1所述的方法,其特征在于,所述JBOF确定所述数据处理请求的类型,包括:
    若所述数据处理请求为写请求,则所述JBOF确定所述数据处理请求的类型为后台计算型;
    若所述数据处理请求为读请求,则所述JBOF确定所述数据处理请求的类型为直通型。
  5. 一种用于传输数据处理请求的方法,其特征在于,包括:
    存储控制器接收数据处理请求,所述数据处理请求用于访问所述存储控制器控制的固态硬盘簇JBOF中的目标固态硬盘SSD;
    所述存储控制器确定所述数据处理请求的类型,所述数据处理请求的类型包括直通 型和后台计算型;
    若所述数据处理请求的类型为直通型,则所述存储控制器对所述数据处理请求进行处理,并将处理后的数据处理请求放入所述存储控制器的直通型的提交队列中;
    若所述数据处理请求的类型为后台计算型,则所述存储控制器将所述数据处理请求放入所述存储控制器的后台计算型的提交队列中。
  6. 一种用于传输数据处理请求的装置,其特征在于,包括:
    获取单元,用于获取存储控制器发送的数据处理请求,所述数据处理请求用于访问所述JBOF中的目标固态硬盘SSD;
    确定单元,用于确定所述获取单元获取的所述数据处理请求的类型,所述数据处理请求的类型包括直通型和后台计算型;
    处理单元,用于若所述数据处理请求的类型为直通型,则直接向所述目标SSD转发所述数据处理请求;
    所述处理单元,还用于若所述数据处理请求的类型为后台计算型,则向所述JBOF中的计算单元发送所述数据处理请求,并将所述计算单元处理后的数据处理请求发送至所述目标SSD。
  7. 如权利要求6所述的装置,其特征在于,所述确定单元具体用于:
    若所述数据处理请求是来自于所述存储控制器的直通型提交队列的,则确定所述数据处理请求的类型为直通型;
    若所述数据处理请求是来自于所述存储控制器的后台计算型提交队列的,则确定所述数据处理请求的类型为后台计算型。
  8. 如权利要求6所述的装置,其特征在于,所述处理单元具体还用于:
    从所述JBOF中的直通型的提交队列中提取所述数据处理请求,所述数据处理请求的类型为直通型;
    直接向所述目标SSD转发所述数据处理请求;
    从所述JBOF中的后台计算型提交队列中提取所述数据处理请求,所述数据处理请求的类型为后台计算型;
    向所述JBOF中的计算单元发送所述数据处理请求,并将所述计算单元处理后的数据处理请求发送至所述目标SSD。
  9. 如权利要求6所述的装置,其特征在于,所述确定单元具体用于:
    若所述数据处理请求为写请求,则确定所述数据处理请求的类型为后台计算型;
    若所述数据处理请求为读请求,则确定所述数据处理请求的类型为直通型。
  10. 一种用于传输数据处理请求的装置,其特征在于,包括:
    接收单元,用于接收数据处理请求,所述数据处理请求用于访问所述存储控制器控制的固态硬盘簇JBOF中的目标固态硬盘SSD;
    确定单元,用于确定所述接收单元接收的所述数据处理请求的类型,所述数据处理请求的类型包括直通型和后台计算型,
    处理单元,用于若所述数据处理请求的类型为直通型,则对所述数据处理请求进行处理,并将处理后的数据处理请求放入所述存储控制器的直通型的提交队列中;
    所述处理单元,还用于若所述数据处理请求的类型为后台计算型,则将所述数据处理请求放入所述存储控制器的后台计算型的提交队列中。
  11. 一种存储系统,其特征在于,所述存储系统包括存储设备和存储控制器,所述存储设备包括如权利要求6-9中任一项所述的装置,所述存储控制器包括如权利要求10所述的装置。
PCT/CN2018/104054 2017-09-05 2018-09-05 用于传输数据处理请求的方法和装置 WO2019047834A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
EP18852841.8A EP3660686B1 (en) 2017-09-05 2018-09-05 Method and device for transmitting data processing request
EP22153995.0A EP4071620A1 (en) 2017-09-05 2018-09-05 Method and apparatus for transmitting data processing request
US16/808,968 US11169743B2 (en) 2017-09-05 2020-03-04 Energy management method and apparatus for processing a request at a solid state drive cluster
US17/508,443 US20220050636A1 (en) 2017-09-05 2021-10-22 Method and Apparatus for Transmitting Data Processing Request

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710792438.1A CN107728936B (zh) 2017-09-05 2017-09-05 用于传输数据处理请求的方法和装置
CN201710792438.1 2017-09-05

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/808,968 Continuation US11169743B2 (en) 2017-09-05 2020-03-04 Energy management method and apparatus for processing a request at a solid state drive cluster

Publications (1)

Publication Number Publication Date
WO2019047834A1 true WO2019047834A1 (zh) 2019-03-14

Family

ID=61205667

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/104054 WO2019047834A1 (zh) 2017-09-05 2018-09-05 用于传输数据处理请求的方法和装置

Country Status (4)

Country Link
US (2) US11169743B2 (zh)
EP (2) EP4071620A1 (zh)
CN (2) CN112214166B (zh)
WO (1) WO2019047834A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2021157588A (ja) * 2020-03-27 2021-10-07 株式会社日立製作所 分散ストレージシステム及び記憶制御方法

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112214166B (zh) 2017-09-05 2022-05-24 华为技术有限公司 用于传输数据处理请求的方法和装置
CN111857943B (zh) * 2019-04-30 2024-05-17 华为技术有限公司 数据处理的方法、装置与设备
CN110113425A (zh) * 2019-05-16 2019-08-09 南京大学 一种基于rdma网卡纠删码卸载的负载均衡系统及均衡方法
CN111064680B (zh) * 2019-11-22 2022-05-17 华为技术有限公司 一种通信装置及数据处理方法
CN111930299B (zh) * 2020-06-22 2024-01-26 中国建设银行股份有限公司 分配存储单元的方法及相关设备
CN116670636A (zh) * 2021-01-30 2023-08-29 华为技术有限公司 数据存取方法、装置和存储介质
CN112883041B (zh) * 2021-02-23 2024-03-08 北京百度网讯科技有限公司 一种数据更新方法、装置、电子设备及存储介质
CN113382281A (zh) * 2021-05-14 2021-09-10 尧云科技(西安)有限公司 一种基于jbof的视频数据处理方法和装置

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102298561A (zh) * 2011-08-10 2011-12-28 北京百度网讯科技有限公司 一种对存储设备进行多通道数据处理的方法、系统和装置
US20120066449A1 (en) * 2010-09-15 2012-03-15 John Colgrove Scheduling of reconstructive i/o read operations in a storage environment
CN102750257A (zh) * 2012-06-21 2012-10-24 西安电子科技大学 基于访问信息调度的片上多核共享存储控制器
CN103370685A (zh) * 2010-09-15 2013-10-23 净睿存储股份有限公司 存储环境中的i/o写入的调度
CN107728936A (zh) * 2017-09-05 2018-02-23 华为技术有限公司 用于传输数据处理请求的方法和装置

Family Cites Families (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7433948B2 (en) * 2002-01-23 2008-10-07 Cisco Technology, Inc. Methods and apparatus for implementing virtualization of storage within a storage area network
US8832142B2 (en) 2010-08-30 2014-09-09 Oracle International Corporation Query and exadata support for hybrid columnar compressed data
US8713268B2 (en) * 2010-08-05 2014-04-29 Ut-Battelle, Llc Coordinated garbage collection for raid array of solid state disks
CN103890724B (zh) * 2011-08-19 2017-04-19 株式会社东芝 信息处理设备、用于控制信息处理设备的方法、主机装置、以及用于外部存储装置的性能评估方法
CN103019622B (zh) * 2012-12-04 2016-06-29 华为技术有限公司 一种数据的存储控制方法、控制器、物理硬盘,及系统
US8601206B1 (en) 2013-03-14 2013-12-03 DSSD, Inc. Method and system for object-based transactions in a storage system
US9009397B1 (en) * 2013-09-27 2015-04-14 Avalanche Technology, Inc. Storage processor managing solid state disk array
US9092321B2 (en) * 2013-07-24 2015-07-28 NXGN Data, Inc. System and method for performing efficient searches and queries in a storage node
US10180948B2 (en) * 2013-11-07 2019-01-15 Datrium, Inc. Data storage with a distributed virtual array
US9887008B2 (en) * 2014-03-10 2018-02-06 Futurewei Technologies, Inc. DDR4-SSD dual-port DIMM device
CN104951239B (zh) * 2014-03-26 2018-04-10 国际商业机器公司 高速缓存驱动器、主机总线适配器及其使用的方法
US9294567B2 (en) * 2014-05-02 2016-03-22 Cavium, Inc. Systems and methods for enabling access to extensible storage devices over a network as local storage via NVME controller
WO2015172391A1 (zh) * 2014-05-16 2015-11-19 华为技术有限公司 快速数据读写方法和装置
CN104298620A (zh) * 2014-10-10 2015-01-21 张维加 一种耐擦写低能耗的外接计算机加速设备
US10114778B2 (en) 2015-05-08 2018-10-30 Samsung Electronics Co., Ltd. Multi-protocol IO infrastructure for a flexible storage platform
CN107844268B (zh) * 2015-06-04 2021-09-14 华为技术有限公司 一种数据分发方法、数据存储方法、相关装置以及系统
CN104991745B (zh) * 2015-07-21 2018-06-01 浪潮(北京)电子信息产业有限公司 一种存储系统数据写入方法和系统
US10425484B2 (en) * 2015-12-16 2019-09-24 Toshiba Memory Corporation Just a bunch of flash (JBOF) appliance with physical access application program interface (API)
US20180024964A1 (en) * 2016-07-19 2018-01-25 Pure Storage, Inc. Disaggregated compute resources and storage resources in a storage system
US10423487B2 (en) * 2016-08-19 2019-09-24 Samsung Electronics Co., Ltd. Data protection offloads using SSD peering
TWI597665B (zh) * 2016-12-27 2017-09-01 緯創資通股份有限公司 在一儲存系統中更新軟體的方法及儲存系統
US10255134B2 (en) * 2017-01-20 2019-04-09 Samsung Electronics Co., Ltd. Control plane method and apparatus for providing erasure code protection across multiple storage devices
CN108733209A (zh) 2018-03-21 2018-11-02 北京猎户星空科技有限公司 人机交互方法、装置、机器人和存储介质

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120066449A1 (en) * 2010-09-15 2012-03-15 John Colgrove Scheduling of reconstructive i/o read operations in a storage environment
CN103370685A (zh) * 2010-09-15 2013-10-23 净睿存储股份有限公司 存储环境中的i/o写入的调度
CN102298561A (zh) * 2011-08-10 2011-12-28 北京百度网讯科技有限公司 一种对存储设备进行多通道数据处理的方法、系统和装置
CN102750257A (zh) * 2012-06-21 2012-10-24 西安电子科技大学 基于访问信息调度的片上多核共享存储控制器
CN107728936A (zh) * 2017-09-05 2018-02-23 华为技术有限公司 用于传输数据处理请求的方法和装置

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3660686A4 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2021157588A (ja) * 2020-03-27 2021-10-07 株式会社日立製作所 分散ストレージシステム及び記憶制御方法
JP7167078B2 (ja) 2020-03-27 2022-11-08 株式会社日立製作所 分散ストレージシステム及び記憶制御方法

Also Published As

Publication number Publication date
CN112214166A (zh) 2021-01-12
CN107728936B (zh) 2020-10-09
CN107728936A (zh) 2018-02-23
EP3660686A1 (en) 2020-06-03
US20220050636A1 (en) 2022-02-17
US20200201578A1 (en) 2020-06-25
US11169743B2 (en) 2021-11-09
CN112214166B (zh) 2022-05-24
EP3660686B1 (en) 2022-02-16
EP4071620A1 (en) 2022-10-12
EP3660686A4 (en) 2020-08-19

Similar Documents

Publication Publication Date Title
WO2019047834A1 (zh) 用于传输数据处理请求的方法和装置
US9727503B2 (en) Storage system and server
CN110647480B (zh) 数据处理方法、远程直接访存网卡和设备
US9696942B2 (en) Accessing remote storage devices using a local bus protocol
WO2018076793A1 (zh) 一种NVMe数据读写方法及NVMe设备
US8868804B2 (en) Unified I/O adapter
US10983920B2 (en) Customizable multi queue DMA interface
WO2019047843A1 (zh) 用于传输数据处理请求的方法和装置
KR102365312B1 (ko) 스토리지 컨트롤러, 연산 스토리지 장치, 및 연산 스토리지 장치의 동작 방법
WO2019057005A1 (zh) 数据校验的方法、装置以及网卡
WO2014202003A1 (zh) 数据存储系统的数据传输方法、装置及系统
WO2017173618A1 (zh) 压缩数据的方法、装置和设备
US8055817B2 (en) Efficient handling of queued-direct I/O requests and completions
US20230325277A1 (en) Memory controller performing selective and parallel error correction, system including the same and operating method of memory device
US11983115B2 (en) System, device and method for accessing device-attached memory
JP7247405B2 (ja) ストレージコントローラ、計算ストレージ装置及び計算ストレージ装置の動作方法
WO2020029619A1 (zh) 数据处理的方法、设备和服务器
WO2022252590A1 (zh) 数据包处理方法及装置
WO2022073399A1 (zh) 存储节点、存储设备及网络芯片
CN114415985A (zh) 一种基于数控分离架构的存储数据处理单元
CN117033272A (zh) 缓存管理方法、装置、设备及可读存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18852841

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2018852841

Country of ref document: EP

Effective date: 20200225

NENP Non-entry into the national phase

Ref country code: DE