CN113468083B - Dual-port NVMe controller and control method

Publication number: CN113468083B
Authority: CN (China)
Prior art keywords: command, host, DMA, NVMe, circuit
Legal status: Active
Application number: CN202110751453.8A
Other languages: Chinese (zh)
Other versions: CN113468083A
Inventors: 张泽, 王祎磊
Assignee: Chengdu Starblaze Technology Co., Ltd.
Events: application filed by Chengdu Starblaze Technology Co., Ltd.; priority to CN202110751453.8A; publication of CN113468083A; application granted; publication of CN113468083B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 13/00: Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F 13/14: Handling requests for interconnection or transfer
    • G06F 13/20: Handling requests for interconnection or transfer for access to input/output bus
    • G06F 13/28: Handling requests for interconnection or transfer for access to input/output bus using burst mode transfer, e.g. direct memory access (DMA), cycle steal
    • G06F 13/36: Handling requests for interconnection or transfer for access to common bus or bus system
    • G06F 13/368: Handling requests for interconnection or transfer for access to common bus or bus system with decentralised access control
    • G06F 13/376: Handling requests for interconnection or transfer for access to common bus or bus system with decentralised access control using a contention resolving method, e.g. collision detection, collision avoidance

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Transfer Systems (AREA)
  • Multi Processors (AREA)

Abstract

The application provides a dual-port NVMe controller and a control method. The dual-port NVMe controller comprises a first host interface and a second host interface, connected to a first host and a second host respectively, for receiving a first NVMe command sent by the first host and a second NVMe command sent by the second host; a host command processing unit comprising a first host command processing branch and a second host command processing branch, where the first host command processing branch processes first NVMe commands received from the first host interface and the second host command processing branch processes second NVMe commands received from the second host interface; and at least one shared memory for storing the first and second NVMe commands. The technical scheme of the present application supports dual-port mode while avoiding conflicts between the two hosts.

Description

Dual-port NVMe controller and control method
Technical Field
The present application relates generally to the field of data processing, and more particularly to a dual-port NVMe controller and a control method.
Background
FIG. 1A illustrates a block diagram of a solid-state storage device. The solid-state storage device 102 is coupled to a host to provide storage capacity for the host. The host and the solid-state storage device 102 may be coupled in a variety of ways, including but not limited to SATA (Serial Advanced Technology Attachment), SCSI (Small Computer System Interface), SAS (Serial Attached SCSI), IDE (Integrated Drive Electronics), USB (Universal Serial Bus), PCIe (Peripheral Component Interconnect Express), NVMe (NVM Express), Ethernet, Fibre Channel, or a wireless communication network. The host may be any information processing device capable of communicating with the storage device in these ways, such as a personal computer, tablet, server, portable computer, network switch, router, cellular telephone, or personal digital assistant. The storage device 102 (hereinafter the solid-state storage device is simply referred to as the storage device) includes an interface 103, a control component 104, one or more NVM chips 105, and a DRAM (Dynamic Random Access Memory) 110.
The NVM chip 105 uses a common storage medium such as NAND flash memory, phase-change memory, FeRAM (Ferroelectric RAM), MRAM (Magnetoresistive Random Access Memory), or RRAM (Resistive Random Access Memory).
The interface 103 may be adapted to exchange data with the host by way of, for example, SATA, IDE, USB, PCIe, NVMe, SAS, Ethernet, or Fibre Channel.
The control component 104 controls data transfer among the interface 103, the NVM chips 105, and the DRAM 110, and also handles storage management, mapping of host logical addresses to flash physical addresses, wear leveling, bad-block management, and the like. The control component 104 can be implemented in a variety of ways, such as software, hardware, firmware, or a combination thereof; for example, it can take the form of an FPGA (Field-Programmable Gate Array), an ASIC (Application-Specific Integrated Circuit), or a combination thereof. The control component 104 may also include a processor or controller that executes software to manipulate the hardware of the control component 104 and process IO (Input/Output) commands. The control component 104 may further be coupled to the DRAM 110 to access its data; FTL tables and/or data cached for IO commands may be stored in the DRAM.
The control component 104 issues commands to the NVM chip 105 in a manner conforming to the NVM chip's interface protocol in order to operate the NVM chip 105, and receives the command execution results output by the NVM chip 105. Known NVM chip interface protocols include "Toggle" and "ONFI".
A storage target (Target) is one or more logical units (LUNs) that share a CE (Chip Enable) signal within a NAND flash package. A NAND flash package may contain one or more dies (Die). Typically, a logical unit corresponds to a single die. A logical unit may include multiple planes (Planes). Multiple planes within a logical unit can be accessed in parallel, while multiple logical units within a NAND flash chip can execute commands and report status independently of each other.
Data is typically stored on and read from the storage medium in pages, while data is erased in blocks. A block (also called a physical block) contains a plurality of pages. Pages on the storage medium (referred to as physical pages) have a fixed size, e.g., 17664 bytes, although physical pages may also have other sizes.
An FTL (Flash Translation Layer) is utilized in the storage device 102 to maintain mapping information from logical addresses (LBAs) to physical addresses. The logical addresses constitute the storage space of the solid-state storage device as perceived by upper-level software such as the operating system, while a physical address is an address used to access a physical storage unit of the solid-state storage device. In the related art, address mapping may also be implemented through an intermediate address form: logical addresses are mapped to intermediate addresses, which are in turn mapped to physical addresses. The table structure storing the mapping from logical addresses to physical addresses is called the FTL table. FTL tables are important metadata in a storage device; each entry of the FTL table records the address mapping for one data unit of the storage device.
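As an illustration only, the FTL mapping just described can be pictured as a flat array indexed by logical data unit. The C sketch below assumes a single-level table and invents all names; it is not the patent's implementation.

```c
#include <stdint.h>

/* Hypothetical flat FTL table: one entry per logical data unit,
 * mapping a logical address to a physical (flash) address. */
#define FTL_UNMAPPED 0xFFFFFFFFu

typedef struct {
    uint32_t *entries; /* physical address per logical data unit */
    uint32_t  n_units; /* number of logical data units           */
} ftl_table_t;

/* Translate a logical data unit number to its physical address. */
static int ftl_lookup(const ftl_table_t *ftl, uint32_t unit, uint32_t *pa)
{
    if (unit >= ftl->n_units || ftl->entries[unit] == FTL_UNMAPPED)
        return -1;  /* unmapped logical address */
    *pa = ftl->entries[unit];
    return 0;
}

/* On a write, record the newly allocated physical address; the old
 * location becomes stale and is later reclaimed by garbage collection. */
static void ftl_update(ftl_table_t *ftl, uint32_t unit, uint32_t new_pa)
{
    ftl->entries[unit] = new_pa;
}
```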
The host accesses the storage device with IO commands that follow a storage protocol. The control component generates one or more media interface commands from each IO command and provides them to the media interface controller. The media interface controller in turn generates storage-medium access commands (e.g., program commands, read commands, erase commands) that follow the interface protocol of the NVM chip. The control component also tracks all media interface commands generated from one IO command until they are executed, and indicates the processing result of the IO command to the host.
Referring to FIG. 1B, the control component includes a host interface 1041, a host command processing unit 1042, a storage command processing unit 1043, a media interface controller 1044, and a storage medium management unit 1045. The host interface 1041 acquires the IO commands provided by the host. The host command processing unit 1042 generates storage commands from each IO command and supplies them to the storage command processing unit 1043. The storage commands each access a memory space of the same size, e.g., 4 KB. The data unit recorded in the NVM chip corresponding to the data accessed by one storage command is referred to as a data frame. A physical page records one or more data frames; for example, with a physical page of 17664 bytes and data frames of 4 KB, one physical page can store 4 data frames.
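Concretely, with the sizes used in this description (a 17664-byte physical page and 4 KB data frames), the frames-per-page relation works out as in this small, self-contained sketch; what the leftover bytes are used for is not specified in the text.

```c
#include <stdio.h>

int main(void)
{
    const unsigned page_bytes  = 17664; /* physical page size from the text        */
    const unsigned frame_bytes = 4096;  /* one 4 KB data frame per storage command */

    unsigned frames_per_page = page_bytes / frame_bytes; /* = 4 */
    unsigned leftover        = page_bytes % frame_bytes; /* = 1280 bytes; the text
                                                            does not say how they
                                                            are used */

    printf("%u frames per page, %u bytes left over\n", frames_per_page, leftover);
    return 0;
}
```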
The storage medium management unit 1045 maintains the logical-to-physical address translation for each storage command. For example, the storage medium management unit 1045 includes the FTL table described above. For a read command, it outputs the physical address corresponding to the logical address (LBA) accessed by the storage command. For a write command, it allocates an available physical address and records the mapping between the accessed logical address (LBA) and the allocated physical address. The storage medium management unit 1045 also maintains the functions required to manage the NVM chips, such as garbage collection and wear leveling.
The storage command processing unit 1043 operates the media interface controller 1044 to issue storage-medium access commands to the NVM chip 105 according to the physical address supplied by the storage medium management unit 1045.
For clarity, a command sent by the host to the storage device 102 is referred to as an IO command, a command sent by the host command processing unit 1042 to the storage command processing unit 1043 as a storage command, a command sent by the storage command processing unit 1043 to the media interface controller 1044 as a media interface command, and a command sent by the media interface controller 1044 to the NVM chip 105 as a storage-medium access command. Storage-medium access commands follow the interface protocol of the NVM chip.
Under the NVMe protocol, after receiving a write command, the solid-state storage device 102 obtains the data from host memory through the host interface 1041 and then writes it into the flash memory. For a read command, after the data has been read from the flash memory, the solid-state storage device 102 moves it into host memory through the host interface 1041.
Data transferred between the host and the storage device is described in one of two ways: PRP (Physical Region Page) or SGL (Scatter/Gather List). A PRP is a number of PRP entries linked together, each PRP entry being a 64-bit host memory physical address describing the space of one physical page (Page). An SGL is a linked list consisting of one or more SGL segments, each segment consisting of one or more SGL descriptors; each SGL descriptor describes the address and length of a data buffer, i.e., each SGL descriptor corresponds to a host memory address space, and each SGL descriptor has a fixed size (e.g., 16 bytes).
Whether PRP or SGL, each essentially describes one or more address spaces in host memory, and the locations of these address spaces in host memory are arbitrary. The host carries the PRP- or SGL-related information in the NVMe command, telling the storage device where in host memory the data source lies, or where in host memory the data read from the flash memory should be placed.
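For orientation, the two descriptor forms can be sketched as C types using only what the text states (64-bit PRP entries; 16-byte SGL descriptors carrying a buffer address and length). The exact NVMe field encodings are simplified here.

```c
#include <stdint.h>

/* A PRP entry is a 64-bit host memory physical address describing
 * one physical page of space. */
typedef uint64_t prp_entry_t;

/* An SGL descriptor: 16 bytes describing one host buffer. The last
 * byte of a real NVMe SGL descriptor encodes its type (data block,
 * segment, last segment, ...); that layout is simplified here. */
typedef struct {
    uint64_t addr;     /* host memory address of the buffer */
    uint32_t length;   /* buffer length in bytes            */
    uint8_t  rsvd[3];
    uint8_t  type;     /* descriptor type                   */
} sgl_descriptor_t;    /* sizeof(sgl_descriptor_t) == 16    */
```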
A basic construction of a prior-art host command processing unit 1042 is shown in FIG. 1C. In the prior art, when processing an IO command, the host command processing unit 1042 needs to obtain the corresponding SGL or PRP from the host according to the IO command and parse it to determine the corresponding host memory addresses. As shown in FIG. 1C, the host command processing unit 1042 mainly includes a shared memory, a DMA module, and a sub-CPU system. The sub-CPU system comprises a plurality of CPUs that run programs to process the SGL or PRP and to configure the DMA module. The DMA module processes DMA commands and implements data transfer between the host and the storage device. The shared memory (share memory) stores data, NVMe commands, and the like.
FIG. 1D illustrates the structure of a storage device from another perspective. The storage device includes an interface (corresponding to the interface 103 in FIG. 1B), a host command processing unit (corresponding to the host command processing unit 1042 in FIG. 1B), a DRAM (corresponding to the DRAM 110 in FIG. 1B), a bus, and a back-end module; the host command processing unit, the back-end module, and the DRAM interact through the bus. The back-end module corresponds to the storage command processing unit 1043 and the media interface controller 1044 in FIG. 1B. FIG. 1D mainly illustrates that, relative to the host command processing unit that processes host commands, the remaining portions may be collectively referred to as the back-end module.
With the development of SSD technology, dual-port technology has begun to appear. As shown in FIG. 2A, in a dual-port application scenario, two host systems (a first host and a second host) access the same storage device through different ports; for example, the first host accesses the storage device via a PCIe interface and port 0, and the second host via a PCIe interface and port 1. In dual-port mode, the storage device is able to interact with two hosts at the same time.
Disclosure of Invention
The purpose of the present application is to achieve data transmission between a host and a storage device in the dual-port mode of FIG. 2A; how the storage device processes the NVMe commands received from two hosts is a key link in that data transmission.
By way of example, FIG. 2B shows a schematic structural diagram of a dual-port NVMe controller. As shown in FIG. 2B, the storage device includes a first host interface, a second host interface, a host command processing unit, a storage command processing unit, and a media interface controller. The host command processing unit processes both the first host commands from the first host interface and the second host commands from the second host interface. Although this scheme implements the dual-port function, because the two host interfaces use the same host command processing unit at the same time, the IO of the two host interfaces preempts resources and conflicts arise.
To realize dual-port mode while avoiding such conflicts, the present application provides additional hardware for processing the second host's NVMe commands: a first host interface and a first host command processing branch are provided for the first host, a second host interface and a second host command processing branch are provided for the second host, and the hardware used by the first host command processing branch is mutually independent of the hardware used by the second host command processing branch, so that the first host command processing branch is dedicated to processing the first NVMe commands of the first host and the second host command processing branch is dedicated to processing the second NVMe commands of the second host. The technical scheme of the present application thus realizes dual-port mode while avoiding conflicts.
Furthermore, the two host command processing branches can share the shared memory, and if only one host is connected to the storage device, the set of hardware not connected to a host can be disabled, saving hardware resources.
Further, for NVMe write commands, the first host command processing branch and the second host command processing branch may share a CPU and a back-end module. After the DMA transfer circuit moves a write command's data from the host into the memory of the storage device, it notifies the CPU, and the CPU operates the back-end module to write the data into the flash memory (NVM). The CPU may, for example, schedule and process at a data granularity of 4 KB, so it does not need to care which host's NVMe command the processed data belongs to; thus even one large NVMe command cannot block other NVMe commands for long, and the preemption effect between the two hosts is reduced.
According to a first aspect of the present application, there is provided a first dual port NVMe controller according to the first aspect of the present application, comprising: the first host interface and the second host interface are respectively connected with the first host and the second host and are respectively used for receiving a first NVMe command sent by the first host and a second NVMe command sent by the second host; the host command processing unit comprises a first host command processing branch and a second host command processing branch; the first host command processing branch is used for processing a first NVMe command received from the first host interface; the second host command processing branch is used for processing a second NVMe command received from the second host interface; and at least one shared memory for storing the first and second NVMe commands.
According to a first dual-port NVMe controller of the first aspect of the present application, there is provided a second dual-port NVMe controller according to the first aspect of the present application, the shared memory comprising a first shared memory: the first shared memory is connected with the first host command processing branch and the second host command processing branch and is used for storing the first NVMe command and the second NVMe command.
According to a first dual-port NVMe controller of a first aspect of the present application, there is provided a third dual-port NVMe controller according to the first aspect of the present application, the shared memory including a first shared memory and a second shared memory: the first shared memory is connected with the first host command processing branch, and the second shared memory is connected with the second host command processing branch; the first shared memory is used for storing the first NVMe command, and the second shared memory is used for storing the second NVMe command.
According to the first dual-port NVMe controller of the first aspect of the present application, there is provided a fourth dual-port NVMe controller according to the first aspect of the present application, further comprising a closing control unit for: closing the first host interface and the first host command processing branch in response to the notification of the closing of the first host interface; and closing the second host interface and the second host command processing branch in response to the notification of the second host interface closing.
According to a first dual port NVMe controller of the first aspect of the present application, there is provided a fifth dual port NVMe controller according to the first aspect of the present application, the closing control unit further being for: releasing the storage space occupied by the first NVMe command in the shared memory in response to the notification of the closing of the first host interface; and responsive to the notification of the second host interface shutdown, releasing storage space occupied by the second NVMe command in the shared memory.
According to any one of the first to fifth dual-port NVMe controllers of the first aspect of the present application, there is provided a sixth dual-port NVMe controller according to the first aspect of the present application, wherein the first host command processing branch comprises a first SGL and/or PRP unit, a first write initiation circuit and a first DMA transfer circuit: the first SGL and/or PRP unit, in response to a received first NVMe command, acquires the SGL and/or PRP corresponding to the first NVMe command, generates one or more first DMA commands according to the SGL and/or PRP, and stores the one or more first DMA commands in a shared memory; the first write initiation circuit, in response to completion of storage of the one or more first DMA commands corresponding to a first NVMe command, sends a first DMA command index to the first DMA transfer circuit; the first DMA transfer circuit acquires the one or more first DMA commands from the shared memory according to the first DMA command index, and moves data from the first host according to the acquired one or more first DMA commands.
According to a sixth dual-port NVMe controller of the first aspect of the present application, there is provided a seventh dual-port NVMe controller according to the first aspect of the present application, wherein the second host command processing branch comprises a second SGL and/or PRP unit, a second write initiation circuit and a second DMA transfer circuit: the second SGL and/or PRP unit, in response to a received second NVMe command, acquires the SGL and/or PRP corresponding to the second NVMe command, generates one or more second DMA commands according to the SGL and/or PRP, and stores the one or more second DMA commands in a shared memory; the second write initiation circuit, in response to completion of storage of the one or more second DMA commands corresponding to a second NVMe command, sends a second DMA command index to the second DMA transfer circuit; the second DMA transfer circuit acquires the one or more second DMA commands from the shared memory according to the second DMA command index, and moves data from the second host according to the acquired one or more second DMA commands.
The seventh dual-port NVMe controller according to the first aspect of the present application provides the eighth dual-port NVMe controller according to the first aspect of the present application, further comprising at least one processor module connecting the first DMA transfer circuit and the second DMA transfer circuit for: after the first DMA transfer circuit moves the data indicated by the one or more first DMA commands to the memory of the storage device and/or after the second DMA transfer circuit moves the data indicated by the one or more second DMA commands to the memory of the storage device, the processor module controls the back-end module to write the corresponding data to the NVM.
According to a seventh dual-port NVMe controller of the first aspect of the present application, there is provided a ninth dual-port NVMe controller according to the first aspect of the present application, wherein the first DMA transfer circuit and the second DMA transfer circuit are coupled with the memory of the storage device through a bus; the closing control unit is further configured to: in response to the notification that the first host interface is closed, determine whether the first DMA transfer circuit is performing a data transfer, and if so, control the bus to restart or abandon the current data transfer; and in response to the notification that the second host interface is closed, determine whether the second DMA transfer circuit is performing a data transfer, and if so, control the bus to restart or abandon the current data transfer.
According to any one of the first to fifth dual-port NVMe controllers of the first aspect of the present application, there is provided a tenth dual-port NVMe controller according to the first aspect of the present application, wherein the first host command processing branch includes a first SGL and/or PRP unit and a first DMA transfer circuit; the second host command processing branch includes a second SGL and/or PRP unit and a second DMA transfer circuit; and the controller further comprises at least one read initiation circuit. The first SGL and/or PRP unit is configured to obtain and parse the first NVMe command to obtain the corresponding SGL and/or PRP, generate one or more first DMA commands according to the SGL and/or PRP, and store the one or more first DMA commands in a shared memory; the second SGL and/or PRP unit is configured to obtain and parse the second NVMe command to obtain the corresponding SGL and/or PRP, generate one or more second DMA commands according to the SGL and/or PRP, and store the one or more second DMA commands in a shared memory. The read initiation circuit requests the back-end module to move the data indicated by the one or more first DMA commands or the one or more second DMA commands from the NVM into the memory of the storage device; and, in response to the data of at least one first DMA command or at least one second DMA command being moved into the memory of the storage device, provides the first DMA command index to the first DMA transfer circuit or the second DMA command index to the second DMA transfer circuit. The first DMA transfer circuit acquires the corresponding at least one first DMA command from the shared memory according to the first DMA command index received from the read initiation circuit, and moves data to the first host according to the acquired at least one first DMA command; the second DMA transfer circuit acquires the corresponding at least one second DMA command from the shared memory according to the second DMA command index received from the read initiation circuit, and moves data to the second host according to the acquired at least one second DMA command.
According to a tenth dual-port NVMe controller of the first aspect of the present application, there is provided an eleventh dual-port NVMe controller according to the first aspect of the present application, wherein the read initiation circuit comprises a first read initiation circuit and a second read initiation circuit: the first read initiation circuit is configured to request the back-end module to move the data indicated by the one or more first DMA commands corresponding to the first NVMe command from the NVM into the memory of the storage device, and, in response to the data of at least one first DMA command being moved into the memory of the storage device, provide the first DMA command index to the first DMA transfer circuit; the second read initiation circuit is configured to request the back-end module to move the data indicated by the one or more second DMA commands corresponding to the second NVMe command from the NVM into the memory of the storage device, and, in response to the data of at least one second DMA command being moved into the memory of the storage device, provide the second DMA command index to the second DMA transfer circuit.
According to an eleventh dual-port NVMe controller of the first aspect of the present application, there is provided a twelfth dual-port NVMe controller according to the first aspect of the present application, when one of the first host interface and the second host interface is connected to the host, the first read initiation circuit and the second read initiation circuit jointly control the first DMA transfer circuit or the second DMA transfer circuit.
According to any one of the tenth to twelfth dual-port NVMe controllers of the first aspect of the present application, there is provided a thirteenth dual-port NVMe controller according to the first aspect of the present application, wherein the first read initiation circuit and the second read initiation circuit each include a CPU.
According to a tenth dual-port NVMe controller of the first aspect of the present application, there is provided a fourteenth dual-port NVMe controller according to the first aspect of the present application, wherein the first DMA transfer circuit and the second DMA transfer circuit are coupled with the memory of the storage device through a bus; the closing control unit is further configured to: in response to the notification that the first host interface is closed, determine whether the first DMA transfer circuit is performing a data transfer, and if so, control the bus to restart or abandon the current data transfer; and in response to the notification that the second host interface is closed, determine whether the second DMA transfer circuit is performing a data transfer, and if so, control the bus to restart or abandon the current data transfer.
According to a second aspect of the present application, there is provided a control method of a first dual-port NVMe controller according to the second aspect of the present application, the dual-port NVMe controller being used for connecting two hosts, comprising: processing a first NVMe command from a first host through a first host interface and a first host command processing branch; processing a second NVMe command from a second host through a second host interface and a second host command processing branch; and storing the first and second NVMe commands through at least one shared memory.
According to the control method of the first dual-port NVMe controller of the second aspect of the present application, there is provided a control method of the second dual-port NVMe controller according to the second aspect of the present application, wherein the first NVMe command and the second NVMe command are stored in one shared memory, or the first NVMe command and the second NVMe command are stored correspondingly in two shared memories.
According to the control method of the first dual-port NVMe controller of the second aspect of the present application, there is provided a control method of the third dual-port NVMe controller according to the second aspect of the present application, wherein, when a notification that the first host interface is closed is received, the first host interface and the first host command processing branch are closed; and when a notification that the second host interface is closed is received, the second host interface and the second host command processing branch are closed.
According to the control method of the third dual-port NVMe controller of the second aspect of the present application, there is provided a control method of the fourth dual-port NVMe controller according to the second aspect of the present application, wherein, when the notification that the first host interface is closed is received, the storage space occupied by the NVMe commands of the first host in the shared memory is released; and when the notification that the second host interface is closed is received, the storage space occupied by the NVMe commands of the second host in the shared memory is released.
According to a control method of any one of the first to fourth dual-port NVMe controllers of the second aspect of the present application, there is provided a control method of the fifth dual-port NVMe controller according to the second aspect of the present application, for processing a first NVMe command from a first host through a first host interface and a first host command processing branch, comprising: in response to a received first NVMe command, acquiring an SGL and/or a PRP corresponding to the first NVMe command, generating one or more first DMA commands according to the SGL and/or the PRP, and storing the one or more first DMA commands in a shared memory; transmitting a first DMA command index in response to completion of storage of one or more first DMA commands corresponding to one first NVMe command; and acquiring the one or more first DMA commands from the shared memory according to the first DMA command index, and moving data from the first host according to the acquired one or more first DMA commands.
According to a control method of a fifth dual-port NVMe controller of the second aspect of the present application, there is provided a control method of a sixth dual-port NVMe controller of the second aspect of the present application, for processing a second NVMe command from a second host through a second host interface and a second host command processing branch, wherein the method includes: in response to the received second NVMe command, acquiring an SGL and/or a PRP corresponding to the second NVMe command, generating one or more second DMA commands according to the SGL and/or the PRP, and storing the one or more second DMA commands in a shared memory; transmitting a second DMA command index in response to completion of storage of one or more second DMA commands corresponding to the one second NVMe command; and acquiring the one or more second DMA commands from the shared memory according to the second DMA command index, and moving data from the second host according to the acquired one or more second DMA commands.
According to the control method of the sixth dual-port NVMe controller of the second aspect of the present application, there is provided the control method of the seventh dual-port NVMe controller according to the second aspect of the present application, further comprising: after the data indicated by the one or more first DMA commands is moved to the memory of the storage device, controlling the back-end module to write the data into the NVM; and after the data indicated by the one or more second DMA commands is moved to the memory of the storage device, controlling the back-end module to write the data to the NVM.
According to the control method of the sixth dual-port NVMe controller of the second aspect of the present application, there is provided a control method of an eighth dual-port NVMe controller according to the second aspect of the present application, wherein, in response to a notification that the first host interface or the second host interface is closed, it is determined whether the corresponding DMA transfer circuit is performing a data transfer, and if so, the bus is controlled to restart or abandon the current data transfer.
According to a control method of any one of the first to fourth dual-port NVMe controllers of the second aspect of the present application, there is provided a control method of the ninth dual-port NVMe controller according to the second aspect of the present application, wherein processing the first NVMe command through the first host command processing branch includes: acquiring and parsing the first NVMe command to obtain the corresponding SGL and/or PRP, generating one or more first DMA commands according to the SGL and/or PRP, and storing the one or more first DMA commands in a shared memory; and acquiring the corresponding at least one first DMA command from the shared memory according to the first DMA command index, and moving data to the first host according to the acquired at least one first DMA command; and processing the second NVMe command through the second host command processing branch includes: acquiring and parsing the second NVMe command to obtain the corresponding SGL and/or PRP, generating one or more second DMA commands according to the SGL and/or PRP, and storing the one or more second DMA commands in a shared memory; and acquiring the corresponding at least one second DMA command from the shared memory according to the second DMA command index, and moving data to the second host according to the acquired at least one second DMA command.
According to the control method of the ninth dual-port NVMe controller of the second aspect of the present application, there is provided the control method of the tenth dual-port NVMe controller according to the second aspect of the present application, further comprising: requesting, by a first read initiation circuit in a first host command processing branch, a back-end module to move data indicated by one or more first DMA commands corresponding to the first NVMe commands from NVM to a storage device memory; and transmitting a first DMA command index in response to the data of the at least one first DMA command being moved into the storage device memory; requesting, by a second read initiation circuit in a second host command processing branch, a back-end module to move data indicated by one or more second DMA commands corresponding to the second NVMe commands from the NVM to the storage device memory; and transmitting the second DMA command index in response to the data of the at least one second DMA command being moved into the storage device memory.
According to a control method of a tenth dual port NVMe controller of the second aspect of the present application, there is provided a control method of an eleventh dual port NVMe controller according to the second aspect of the present application, the first read initiation circuit and the second read initiation circuit are the same circuit or two different circuits.
According to a control method of an eleventh dual-port NVMe controller of the second aspect of the present application, there is provided a control method of a twelfth dual-port NVMe controller of the second aspect of the present application, when one of the first host interface and the second host interface is connected to a host, the first DMA transfer circuit or the second DMA transfer circuit is controlled in common by the first read initiation circuit and the second read initiation circuit.
According to a control method of a tenth dual port NVMe controller of the second aspect of the present application, there is provided a control method of a thirteenth dual port NVMe controller according to the second aspect of the present application, the first read initiation circuit and the second read initiation circuit each employ a CPU.
According to the control method of the ninth dual-port NVMe controller of the second aspect of the present application, there is provided a control method of a fourteenth dual-port NVMe controller according to the second aspect of the present application, wherein, in response to a notification that the first host interface or the second host interface is closed, it is determined whether the corresponding DMA transfer circuit is performing a data transfer, and if so, the bus is controlled to restart or abandon the current data transfer.
According to a third aspect of the present application, there is provided a storage device comprising any one of the first to fourteenth NVMe controllers of the first aspect of the present application.
According to a fourth aspect of the present application, there is provided an electronic device comprising any one of the first to fourteenth NVMe controllers of the first aspect of the present application.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings required by the description of the embodiments or of the prior art are briefly introduced below. The drawings described below are obviously only some embodiments of the present application; a person of ordinary skill in the art may derive other drawings from them.
FIG. 1A is a block diagram of a prior art solid state storage device;
FIG. 1B is a schematic diagram of a control unit of the prior art;
FIG. 1C is a schematic diagram of a prior art host command processing unit;
FIG. 1D is another block diagram of a memory device;
FIG. 2A is a schematic diagram of a dual port mode application scenario;
FIG. 2B is a block diagram of a dual port NVMe controller;
FIG. 3 is a block diagram of a dual port NVMe controller according to an embodiment of the present application;
FIG. 4 is a schematic diagram illustrating the operation of a dual port NVMe controller for processing write commands according to an embodiment of the present application;
FIG. 5 is a schematic diagram illustrating the operation of a dual port NVMe controller for processing read commands according to an embodiment of the present application;
FIG. 6 is a schematic diagram illustrating the operation of another dual port NVMe controller for processing read commands according to an embodiment of the present application;
FIG. 7 is a schematic diagram illustrating the operation of a dual port NVMe controller for processing read commands according to an embodiment of the present application;
FIG. 8 is a schematic diagram illustrating the operation of a dual port NVMe controller for processing read commands according to an embodiment of the present application;
FIG. 9 is a flow chart of a control method of a dual port NVMe controller according to an embodiment of the present application;
FIG. 10 is a decomposition flowchart of step S102 in FIG. 9 (for write commands);
FIG. 11 is another decomposition flowchart of step S102 in FIG. 9 (for read commands);
FIG. 12 is a further decomposition flowchart of step S102 in FIG. 9 (for read commands).
Detailed Description
A dual-port NVMe controller is shown in FIG. 3. When two hosts are connected to the storage device and both send NVMe commands to it, or when only one of the two hosts sends NVMe commands to it, the NVMe controller shown in FIG. 3 may be applied to process the NVMe commands sent by the host(s).
As shown in FIG. 3, the NVMe controller includes a first host interface, a second host interface, a first host command processing branch, a second host command processing branch, and a shared memory; it further includes a storage command processing unit and a media interface controller. The first host interface is connected to the first host and receives the first NVMe commands sent by the first host; the second host interface is connected to the second host and receives the second NVMe commands sent by the second host. The first host command processing branch processes the first NVMe commands received from the first host interface, the second host command processing branch processes the second NVMe commands received from the second host interface, and the shared memory stores the first NVMe commands and the second NVMe commands. For clarity, the first host command processing branch and the second host command processing branch are collectively referred to as the host command processing unit.
In one embodiment, the first host interface and the first host command processing branch are hardware dedicated to the first host, and the second host interface and the second host command processing branch are hardware dedicated to the second host. The shared memory is a device that stores NVMe commands, or information and commands related to NVMe commands. As an example, to save hardware resources, a single shared memory may be provided in the storage device and shared by the first host command processing branch and the second host command processing branch.
In another embodiment, to prevent the first host command processing branch and the second host command processing branch from preempting shared-memory resources, two shared memories may be used in the NVMe controller provided by the present application: a first shared memory is allocated to the first host interface and the first host command processing branch, and a second shared memory is allocated to the second host interface and the second host command processing branch. The first shared memory is dedicated to storing information related to the first NVMe commands sent by the first host, and the second shared memory is dedicated to storing information related to the second NVMe commands sent by the second host.
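The hardware separation just described can be summarized structurally. Below is a minimal C sketch of the two mutually independent branches and of the one- versus two-shared-memory variants; all type and field names are invented for illustration and do not come from the patent.

```c
#include <stdbool.h>

/* Opaque stand-ins for the hardware blocks; all names are invented here. */
typedef struct host_interface host_interface_t;
typedef struct sgl_prp_unit   sgl_prp_unit_t;
typedef struct dma_engine     dma_engine_t;
typedef struct shared_mem     shared_mem_t;

/* One per-host command-processing branch; the two branches use
 * mutually independent hardware. */
typedef struct {
    host_interface_t *host_if;  /* port 0 or port 1                    */
    sgl_prp_unit_t   *sgl_prp;  /* parses SGL/PRP, emits DMA commands  */
    dma_engine_t     *dma;      /* DMA transfer circuit                */
    shared_mem_t     *cmd_mem;  /* where this branch's commands live   */
    bool              enabled;  /* a branch can be closed when its host
                                   is absent, saving hardware resources */
} host_branch_t;

typedef struct {
    host_branch_t branch[2];
    /* One-memory variant: branch[0].cmd_mem == branch[1].cmd_mem.
     * Two-memory variant: each branch points at its own memory, so the
     * branches cannot preempt each other's shared-memory resources. */
} dual_port_nvme_ctrl_t;
```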
In a typical application scenario, the dual-port NVMe controller serves two hosts, the first host and the second host shown in FIG. 3. In other application scenarios, the dual-port NVMe controller may serve a single host; for example, one of the two hosts is disconnected from the storage device, or sends no NVMe commands to it, i.e., there is no information interaction between that host and the storage device. For example, when the dual-port NVMe controller shown in FIG. 3 serves only the first host, the dual-port NVMe controller generates a notification that the second host is closed.
For example, the notification that the second host is closed may be generated by detecting the PCIe interface signals corresponding to the second host and recognizing that there is no information interaction between the second host and the storage device. A closing control unit inside the dual-port NVMe controller (not shown in the figure; it may be implemented by a CPU or by hardware independent of the CPU) responds to the notification by closing the second host's dedicated hardware, which includes the second host interface and the second host command processing branch, or additionally the second shared memory. In one embodiment, the closing control unit may further release the storage space occupied by the first-NVMe-command-related information in the first shared memory in response to a notification that the first host interface is closed, or release the storage space occupied by the second-NVMe-command-related information in the second shared memory in response to a notification that the second host interface is closed.
In yet another embodiment, the closing control unit may further release the storage space occupied by the first NVMe commands in the shared memory in response to the notification that the first host interface is closed, or release the storage space occupied by the second NVMe commands in the shared memory in response to the notification that the second host interface is closed.
When the host sends an IO command to the storage device, it may do so under the NVMe protocol, and an IO command sent under the NVMe protocol is commonly referred to as an NVMe command; for example, an NVMe command is a read command or a write command. A write command instructs data to be moved from the host to the storage device, and this move comprises two steps: the data is moved from host memory to the DRAM through the DMA transfer circuit, and from the DRAM to the NVM through the back-end module (the media interface controller 1044 shown in FIG. 1B).
By way of example, FIG. 4 shows a schematic diagram of the operation of the dual-port NVMe controller when processing write commands.
As shown in FIG. 4, the first host command processing branch includes a first SGL/PRP unit, a first write initiation circuit, and a first DMA transfer circuit, and the second host command processing branch includes a second SGL/PRP unit, a second write initiation circuit, and a second DMA transfer circuit. The first host command processing branch is similar in structure to the second, and processes the first NVMe commands in a way similar to how the second branch processes the second NVMe commands. In view of this, the structure of the first host command processing branch, the functions of its components, and its processing of a first NVMe command (a write command) are described in detail below, while the second host command processing branch is described only briefly. Note that the terms first NVMe command/first write command/first read command and second NVMe command/second write command/second read command used hereinafter identify the host to which an NVMe command/write command/read command corresponds; "first" and "second" do not denote numbers of NVMe commands. Similarly, first DMA command and second DMA command identify the host command processing branch to which a DMA command corresponds, not the number of DMA commands.
As shown in FIG. 4, the first SGL/PRP unit, in response to a received first NVMe command, acquires the SGL or PRP corresponding to the command, generates one or more first DMA commands according to the SGL or PRP, and stores the one or more first DMA commands in the shared memory.
By way of example, the first SGL/PRP unit may be implemented with a CPU or with hardware circuitry independent of the CPU. Extracting the SGL or PRP from NVMe commands and generating DMA commands from it can follow prior-art means, so only a brief description is given here.
An NVMe command includes a PRP or SGL field, which may be the SGL or PRP itself, pointing at the host memory address space to be accessed, or may be a pointer to an SGL or PRP linked list. Accordingly, in one embodiment the NVMe command carries the SGL or PRP, and the first SGL and/or PRP unit acquires it directly upon receiving the NVMe command. In another embodiment the NVMe command carries an SGL or PRP pointer, and the first SGL and/or PRP unit, upon receiving the NVMe command, accesses the host according to the pointer and obtains the SGL or PRP from the host.
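That two-case decision (descriptor carried in the command versus a pointer to be followed into host memory) might look as follows; the command layout and the host-read primitive are assumptions of the sketch, not taken from the patent.

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical view of the command's PRP/SGL field: either the
 * descriptor itself or a pointer to a descriptor list in host memory. */
typedef struct {
    int      field_is_pointer; /* set when the field carries a pointer     */
    uint64_t field;            /* descriptor, or host address of the list  */
} nvme_cmd_view_t;

/* Stand-in for the host-interface read primitive (stubbed here). */
static int host_read(uint64_t host_addr, void *dst, size_t len)
{
    (void)host_addr; memset(dst, 0, len); return 0;
}

/* Obtain the SGL/PRP bytes for a command into buf (>= 8 bytes). */
static int fetch_descriptors(const nvme_cmd_view_t *cmd, void *buf, size_t len)
{
    if (!cmd->field_is_pointer) {
        /* Case 1: the command itself carries the descriptor. */
        memcpy(buf, &cmd->field, sizeof cmd->field);
        return 0;
    }
    /* Case 2: follow the pointer into host memory for the list. */
    return host_read(cmd->field, buf, len);
}
```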
In one application scenario, the first SGL/PRP unit may include both an SGL unit and a PRP unit, the SGL unit processing SGL-related NVMe commands and the PRP unit processing PRP-related NVMe commands; that is, the first SGL/PRP unit can process both SGL-related and PRP-related NVMe commands. In another application scenario, the first SGL/PRP unit may include only an SGL unit or only a PRP unit, processing only SGL-related or only PRP-related NVMe commands. The structure of the first SGL/PRP unit is not limited by this application.
As shown in FIG. 4, the first host transmits a first write command to the storage device through the first host interface, which forwards the first write command to the shared memory for storage; this is denoted process (1). A CPU, or a hardware circuit independent of the CPU, fetches the PRP/SGL field of the first write command from the shared memory and provides it to the first SGL/PRP unit; this is process (2). If the first write command carries an SGL, the first SGL/PRP unit caches the SGL in the first SGL/PRP cache unit; if the first write command carries an SGL pointer, the SGL is fetched from the first host through the first host interface and cached in the first SGL/PRP cache unit; this is process (3). Next, the first SGL/PRP unit generates one or more first DMA commands from the SGL and stores them in the shared memory; this is process (4).
After DMA command generation completes, the first SGL/PRP unit notifies the first write initiation circuit, passing it a first DMA command index (e.g., a DMA command pointer) that indicates the location of the DMA commands in the shared memory; this is process (5). The first write initiation circuit then sends the first DMA command index to the first DMA transfer circuit; this is process (6). The first DMA transfer circuit receives the first DMA command index and fetches one or more first DMA commands from the shared memory according to it; this is process (7-1). The first DMA transfer circuit then performs the data-move operation, transferring data from the first host to the memory of the storage device; this is process (7-2).
When the data movement indicated by one first DMA command completes, a notification of data-move completion is generated; this is process (8). In process (5), the first write initiation circuit obtains, in addition to the first DMA command index, the first write command ID. Thus, after a given first DMA command has been processed, the corresponding information (e.g., the ID of the first write command to which it belongs) is fed back to the first write initiation circuit, so that the circuit can identify which first write command the first DMA command corresponds to. For example, if one first write command comprises 3 first DMA commands, denoted #1, #2 and #3, then when all of #1, #2 and #3 have been processed, the first write initiation circuit is notified accordingly. The first write initiation circuit can determine from the first write command ID that all 3 first DMA commands corresponding to that first write command have been processed, and it generates a notification that the first write command has completed; this is process (9). Under the NVMe protocol, this notification may be delivered by operating the CQ queue. While the first host is being notified, the shared memory may release the space occupied by the first write command and by its first DMA commands (e.g., #1, #2 and #3).
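The per-command bookkeeping implied by processes (8) and (9), counting DMA completions against the owning write command ID and posting a CQ entry when the count reaches the total, might look like this sketch; the structure and function names are assumed.

```c
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint16_t write_cmd_id; /* ID of the first write command being tracked */
    uint16_t dma_total;    /* first DMA commands generated for it         */
    uint16_t dma_done;     /* first DMA commands whose data move finished */
} write_tracker_t;

/* Stand-in for posting an NVMe completion-queue entry (stubbed). */
static void post_cq_entry(uint16_t write_cmd_id)
{
    printf("CQ: write command %u complete\n", (unsigned)write_cmd_id);
}

/* Called once per process (8) notification for a DMA command that
 * belongs to this tracker's write command. */
static void on_dma_done(write_tracker_t *t)
{
    if (++t->dma_done == t->dma_total) {
        post_cq_entry(t->write_cmd_id); /* process (9): notify the host */
        /* the write command and its DMA commands (e.g. #1, #2, #3)
         * may now be released from the shared memory */
    }
}
```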
In the embodiment shown in fig. 4, after a certain first DMA command is stored, the first write initiator circuit can learn that the first DMA command is stored, and at this time, the first SGL/PRP unit informs the first write initiator circuit that a new first DMA command is written into the shared memory. In other embodiments, the first write initiate circuit may also be notified by other circuitry by detecting whether there is a first DMA command pending in the shared memory. In addition, the first SGL/PRP caching unit is configured to cache SGL or PRP, and in some embodiments, the first SGL/PRP caching unit may be omitted according to the processing speed of the SGL/PRP unit. In addition, the first SGL/PRP unit, the first write initiating circuit and the first DMA transmission circuit are all realized by hardware circuits independent of the CPU, so that the cost of the CPU can be reduced.
Regarding the second host and the second host command processing branch corresponding to the second host interface, the second SGL/PRP unit, the second write initiator circuit, and the second DMA transfer circuit are similar to the first host command processing branch in structure and processing procedure, and therefore will not be repeated. It should be noted that the second host command processing branch is independent from the first host command processing branch, that is, the first host command processing branch is dedicated to processing the first write command from the first host, the second host command processing branch is dedicated to processing the second write command sent by the second host, the first host command processing branch does not process the second write command, and the second host command processing branch does not process the first write command.
In addition, the first host command processing branch and the second host command processing branch share a CPU and a back-end module. The CPU is connected with the first DMA transfer circuit and the second DMA transfer circuit and controls the back-end module to write data into the NVM after the first DMA transfer circuit has moved the data indicated by the first DMA commands corresponding to a first write command to the DRAM, or after the second DMA transfer circuit has moved the data indicated by the second DMA commands corresponding to a second write command to the DRAM. Sharing the CPU and the back-end module saves hardware resources. Further, because the CPU schedules and processes the corresponding storage commands at a fixed data granularity (e.g., 4 KB), it does not need to care which host the processed data belongs to. Even if a first write command from the first host is large, the CPU does not finish all first DMA commands of that command before turning to the second DMA commands of a second write command; instead, it acquires DMA commands from the shared memory at the fixed data granularity (for example, in arbitrary order), so a second write command from the second host is not blocked for long. This reduces the latency of processing NVMe commands.
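The fixed-granularity scheduling idea can be sketched as follows; the queue primitive shared_mem_pop_any() and the back-end call are hypothetical, and the point is only that work is consumed in 4 KB units regardless of the originating host.

    /* A sketch of the shared-CPU scheduling idea: storage commands are taken
     * from the shared memory in fixed 4 KB units regardless of which host
     * queued them, so a large first write command cannot starve second write
     * commands. The queue primitive and back-end call are assumptions. */
    #include <stdbool.h>
    #include <stdint.h>

    typedef struct { uint64_t dram_addr; uint32_t len; } storage_cmd;

    extern bool shared_mem_pop_any(storage_cmd *out);    /* next 4 KB unit, any host */
    extern void backend_write_nvm(const storage_cmd *c); /* DRAM -> NVM              */

    void cpu_scheduler_loop(void)
    {
        storage_cmd c;
        /* Each iteration moves one fixed-size unit; units belonging to the
         * two hosts interleave instead of one command running to completion. */
        while (shared_mem_pop_any(&c))
            backend_write_nvm(&c);
    }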
Further, the first and second DMA transfer circuits are coupled with the DRAM through a bus (e.g., the bus shown in fig. 1D). When the first host interface is turned off, the shutdown control unit described above is further configured to determine whether the first DMA transfer circuit is performing a data transfer and, if so, to control the bus to restart or to discard the current data transfer. Similarly, when the second host interface is turned off, the shutdown control unit is further configured to determine whether the second DMA transfer circuit is performing a data transfer and, if so, to control the bus to restart or to discard the current data transfer.
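The shutdown check can be sketched as follows, with the in-flight query and the two possible bus actions modeled as hypothetical functions.

    /* A sketch of the shutdown check, with the in-flight query and the two
     * possible bus actions modeled as hypothetical functions. */
    #include <stdbool.h>

    extern bool dma_in_flight(int port);        /* is that port's DMA transferring? */
    extern void bus_restart(int port);          /* restart the bus                  */
    extern void bus_discard_transfer(int port); /* or discard the current transfer  */

    void on_host_interface_closed(int port, bool prefer_restart)
    {
        if (dma_in_flight(port)) {
            if (prefer_restart)
                bus_restart(port);
            else
                bus_discard_transfer(port);
        }
    }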
Having described the circuitry and principles by which the dual-port NVMe controller processes write commands, the circuitry and principles by which it processes read commands are described below. A read command indicates data movement from the storage device to the host, which also comprises two steps: the data is read from the NVM into the DRAM through the back-end module, and is then moved from the storage device memory to the host memory through the DMA transfer circuit.
FIG. 5 illustrates a schematic diagram of the operation of the dual-port NVMe controller when processing read commands. In fig. 5, the first host command processing branch includes a first host interface, a first SGL/PRP unit, and a first DMA transfer circuit, and the second host command processing branch includes a second host interface, a second SGL/PRP unit, and a second DMA transfer circuit. The structure of these circuits is similar to that shown in fig. 4 and is therefore not described in detail. By way of example, owing to the characteristics of read commands, the first host command processing branch and the second host command processing branch may share a read initiate circuit. The following describes the procedure by which the first host command processing branch processes a read command sent by the first host.
As shown in fig. 5, the first host transmits a first read command to the storage device through the first host interface, and the first host interface transmits the first read command to the shared memory for storage, which is denoted as process (1). The PRP/SGL field in the read command is extracted, and the read command is provided to the first SGL/PRP unit, which is denoted as process (2). If the first read command carries an SGL, the SGL is cached in the first SGL/PRP caching unit; if the first read command carries an SGL pointer, the SGL is acquired from the first host through the host interface and cached in the first SGL/PRP caching unit, which is denoted as process (3). Next, one or more first DMA commands are generated from the SGL and stored in the shared memory, which is denoted as process (4). After the first DMA commands are generated, the first SGL/PRP unit notifies the read initiate circuit and passes to it a first DMA command index (for example, a DMA command pointer) indicating the location of the first DMA commands in the shared memory, which is denoted as process (5).
The read initiate circuit receives the first DMA command index. At the same time, the read initiate circuit accesses the back-end module, requesting the back-end module to read the data indicated by the first DMA command from the NVM into the storage device memory (DRAM), which is denoted as process (6). The read initiate circuit waits for the back-end module to read the data indicated by the first DMA command into the storage device memory (DRAM); when the data indicated by one or more first DMA commands has been read into the DRAM, the read initiate circuit learns this, for example because the back-end module notifies it or because it observes the state of the storage device memory, which is denoted as process (7). In response to the data indicated by one or more first DMA commands having been read into the DRAM, the read initiate circuit provides the first DMA command index to the first DMA transfer circuit, which is denoted as process (8). The first DMA transfer circuit fetches the corresponding one or more first DMA commands from the shared memory according to the first DMA command index, which is denoted as process (9), and performs a data move operation to move the data from the DRAM to the first host memory, which is denoted as process (10).
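Processes (6)-(10) can be summarized in a short C sketch of the read initiate circuit's role: stage data from NVM into DRAM via the back-end module, then hand the DMA command index to the DMA transfer circuit. The function names model the hardware interactions and are assumptions.

    /* A sketch of processes (6)-(10) from the read initiate circuit's point
     * of view: stage the data from NVM into DRAM through the back-end module,
     * then hand the DMA command index to the DMA transfer circuit. */
    #include <stdint.h>

    extern void backend_read_nvm_to_dram(uint32_t dma_idx);     /* process (6)        */
    extern void wait_backend_done(uint32_t dma_idx);            /* process (7)        */
    extern void dma_transfer_start(int port, uint32_t dma_idx); /* processes (8)-(10) */

    void read_initiate(uint32_t first_dma_idx)
    {
        backend_read_nvm_to_dram(first_dma_idx);       /* request NVM -> DRAM */
        wait_backend_done(first_dma_idx);              /* data is now in DRAM */
        dma_transfer_start(/*port=*/1, first_dma_idx); /* DRAM -> host memory */
    }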
When the data movement indicated by one of the DMA commands is completed, a notification of completion of the data movement is generated, which is denoted as process (11). In process (5), the read initiate circuit obtains, in addition to the first DMA command index, a first read command ID used to identify the read command. In one embodiment, after a given first DMA command is processed, corresponding information (e.g., the read command ID to which that first DMA command belongs) is fed back to the read initiate circuit, so that the read initiate circuit can identify which first read command the completed first DMA command corresponds to. When it is determined that all of the first DMA commands corresponding to a first read command have been processed, a notification of completion of execution of the first read command is generated to notify the first host, which is denoted as process (12). While the first host is being notified, the shared memory may release the space occupied by the first read command and by the first DMA commands corresponding to it.
In the embodiment shown in fig. 5, after a DMA command is stored, the read initiate circuit learns that the DMA command has been stored because the first SGL/PRP unit notifies the read initiate circuit that a new DMA command has been written into the shared memory. In other embodiments, the read initiate circuit may instead be notified by other circuitry that detects the data storage state of the shared memory.
Similarly, after generating second DMA commands from a second read command received through the second host interface and storing them in the shared memory, the second SGL/PRP unit also notifies the read initiate circuit and provides it with a second DMA command index, which is denoted as process (5'). The read initiate circuit receives the second DMA command index. At the same time, the read initiate circuit accesses the back-end module, requesting the back-end module to read the data indicated by the second DMA command from the NVM into the storage device memory (DRAM), which is denoted as process (6'). The read initiate circuit waits for the back-end module to read the data indicated by the second DMA commands into the storage device memory (DRAM); when the data indicated by one or more second DMA commands has been read into the DRAM, the read initiate circuit learns this, which is denoted as process (7'). In response to the data indicated by one or more second DMA commands having been read into the DRAM, the read initiate circuit provides the second DMA command index to the second DMA transfer circuit, which is denoted as process (8'). The second DMA transfer circuit fetches the corresponding one or more second DMA commands from the shared memory according to the second DMA command index, which is denoted as process (9'), and performs a data move operation to move the data from the storage device memory to the second host memory, which is denoted as process (10').
When the data transfer indicated by one of the second DMA commands is completed, a notification of completion of the data transfer is generated, which is denoted as process (11'). When the read initiate circuit determines that all DMA commands corresponding to the second read command have been processed, a notification of completion of execution of the second read command is generated to notify the second host, which is denoted as process (12'). While the second host is being notified, the shared memory may release the space occupied by the second read command and by the second DMA commands corresponding to it.
The first host command processing branch and the second host command processing branch of the embodiment shown in fig. 5 share a read initiate circuit. The read initiate circuit may be implemented by hardware independent of the CPU, or it may be implemented by the CPU; preferably, it is implemented by the CPU.
As an example, the first host command processing branch and the second host command processing branch may instead each have their own read initiate circuit: the first host command processing branch includes a first read initiate circuit and the second host command processing branch includes a second read initiate circuit.
FIG. 6 illustrates a schematic circuit diagram in which the first host command processing branch and the second host command processing branch use a first read initiate circuit and a second read initiate circuit, respectively. As shown in fig. 6, the first read initiate circuit interacts with the back-end module: the first read initiate circuit accesses the back-end module, requesting it to process the data indicated by the first DMA commands corresponding to the first read command, which is denoted as process (6), and the back-end module notifies the first read initiate circuit after reading the data into the DRAM, which is denoted as process (7). The second read initiate circuit likewise interacts with the back-end module: the second read initiate circuit accesses the back-end module, requesting it to process the data indicated by the second DMA commands corresponding to the second read command, which is denoted as process (6'), and the back-end module notifies the second read initiate circuit after reading the data into the DRAM, which is denoted as process (7'). It should be noted that the back-end module is selective in notifying the first read initiate circuit and the second read initiate circuit: it notifies the first read initiate circuit when the data indicated by a first DMA command has been read out, and notifies the second read initiate circuit when the data indicated by a second DMA command has been read out; a sketch of this routing follows this paragraph. Further, the first host command processing branch and the second host command processing branch are mutually independent, so when a host interface is closed, the hardware corresponding to that host interface can be closed and the corresponding storage space released. In the embodiment shown in fig. 6, the data processing task is handled well when the first host and the second host are both online. In practical application scenarios, however, it is common for one host to have no information interaction with the storage device, i.e., for one host interface to be closed, in which case the dual-port NVMe controller also works in a single-port mode. In single-port mode, the corresponding hardware may be turned off in the manner shown in fig. 6.
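The selective notification mentioned above can be pictured as a small dispatch routine; how ownership of a DMA command is encoded is an assumption here.

    /* A sketch of the selective notification, assuming each DMA command
     * records which branch (port) owns it. */
    #include <stdint.h>

    extern void notify_first_read_initiate(uint32_t dma_idx);  /* process (7)  */
    extern void notify_second_read_initiate(uint32_t dma_idx); /* process (7') */

    void backend_notify(uint32_t dma_idx, int owner_port)
    {
        if (owner_port == 1)
            notify_first_read_initiate(dma_idx);  /* a first DMA command done  */
        else
            notify_second_read_initiate(dma_idx); /* a second DMA command done */
    }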
In single-port mode, in order to improve the processing efficiency of the NVMe controller and avoid wasting hardware resources, part of the hardware resources of the corresponding host command processing branch may be turned off while another part is combined with the other host command processing branch to process NVMe commands together. For example, when the first host is disconnected, the first host interface in the first host command processing branch corresponding to the first host is closed, and the first read initiate circuit of the first host command processing branch and the second read initiate circuit of the second host command processing branch both serve the processing of the DMA commands corresponding to second NVMe commands.
Fig. 7 illustrates a schematic diagram of the operation of yet another NVMe controller provided in an embodiment of the present application. The NVMe controller shown in fig. 7 differs from the NVMe controller shown in fig. 6 in that the second read initiate circuit in the second host command processing branch is coupled not only within the second host command processing branch but also to the first SGL/PRP unit in the first host command processing branch (denoted as (a)) and to the first DMA transfer circuit (denoted as (b)). When the first host and the second host are both online, the operation of the NVMe controller shown in fig. 7 is the same as that of fig. 6 and is not described again. When only the first host is online, the NVMe controller shown in fig. 7 operates differently from fig. 6, specifically as follows:
When the second host interface is turned off, the second host interface and the second SGL/PRP unit may be turned off, but the second read initiate circuit still operates. Owing to the coupling relationships ((a) and (b)) described above, the second read initiate circuit and the first read initiate circuit can both serve the processing of the first DMA commands corresponding to first NVMe commands: each acquires a first DMA command index in response to a notification from the first SGL/PRP unit that storage of a first DMA command is complete, and, after the back-end module has read the data indicated by that first DMA command into the DRAM, sends the first DMA command index to the first DMA transfer circuit, which performs the data movement from the DRAM to the host. When the dual-port NVMe controller is in single-port mode, using two read initiate circuits in this way improves the response efficiency for first NVMe commands, as sketched below.
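A minimal sketch of this arrangement, assuming a thread-safe queue of pending first DMA command indices: one copy of the loop runs per read initiate circuit, so both circuits drain the same queue while the second host interface is closed. read_initiate() is the hypothetical routine from the earlier read-path sketch.

    /* A sketch of the single-port speedup: both read initiate circuits run
     * this loop and drain the same queue of pending first DMA commands. */
    #include <stdbool.h>
    #include <stdint.h>

    extern bool pending_first_dma_pop(uint32_t *dma_idx); /* shared work queue */
    extern void read_initiate(uint32_t dma_idx);          /* stage + start DMA */

    void read_initiate_worker(void)
    {
        uint32_t idx;
        while (pending_first_dma_pop(&idx))
            read_initiate(idx);
    }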
Fig. 8 illustrates a schematic diagram of the operation of yet another NVMe controller provided in an embodiment of the present application. As shown in fig. 8, the first read initiate circuit in the first host command processing branch is coupled not only to the first SGL/PRP unit and the first DMA transfer circuit in the first host command processing branch, but also to the second SGL/PRP unit in the second host command processing branch (denoted as (c)) and to the second DMA transfer circuit (denoted as (d)). By way of example, in dual-port mode the NVMe controller shown in fig. 8 operates similarly to the NVMe controllers shown in fig. 6 and fig. 7. When the first host is disconnected and the second host is online, the NVMe controller shown in fig. 8 operates differently from those of fig. 6 and fig. 7, specifically as follows:
The first host interface and the first SGL/PRP unit are turned off, and the second read initiate circuit and the first read initiate circuit can both serve the processing of the second DMA commands corresponding to second NVMe commands: each acquires a second DMA command index in response to a notification from the second SGL/PRP unit that storage of a second DMA command is complete, and, after the back-end module has read the data indicated by that second DMA command into the DRAM, sends the second DMA command index to the second DMA transfer circuit, which performs the data movement from the DRAM to the host. When the dual-port NVMe controller is in single-port mode, using two read initiate circuits in this way improves the response efficiency for second NVMe commands.
Furthermore, beyond the embodiments of fig. 7 and fig. 8, in one embodiment the dual-port NVMe controller may have both the coupling relationships ((a), (b)) shown in fig. 7 and the coupling relationships ((c), (d)) shown in fig. 8 at the same time. In this embodiment, the first read initiate circuit of the first host command processing branch may serve the second host command processing branch, and the second read initiate circuit of the second host command processing branch may serve the first host command processing branch, making the first host interface and the second host interface equivalent.
According to one aspect of the application, the application further provides a control method for the dual-port NVMe controller. Fig. 9 shows the flow of the control method, which comprises steps S101, S102 and S103. It should be noted that the numbering of steps S101, S102 and S103 does not dictate the order in which the steps are executed; the steps may be executed in a different order or simultaneously.
The method as shown in fig. 9 includes: step S101, a first host is connected through a first host interface, and a second host is connected through a second host interface. Step S102, the first NVMe command is processed by the first host command processing branch, and the second NVMe command is processed by the second host command processing branch. Step S103, storing the first NVMe command and the second NVMe command through at least one shared memory.
According to the method, the NVMe commands from the first host and the second host can be processed through two independent host command processing branches, which reduces contention for resources. Furthermore, energy consumption can be reduced by closing the hardware corresponding to a host interface when that host interface is closed.
Several different embodiments of step S102 are illustrated in fig. 10, 11 and 12.
The method of fig. 10 includes: step S201 is executed: in response to a received first NVMe command, the SGL and/or PRP corresponding to the first NVMe command is acquired, one or more first DMA commands are generated according to the SGL and/or PRP, and the one or more first DMA commands are stored in the shared memory; in response to a received second NVMe command, the SGL and/or PRP corresponding to the second NVMe command is acquired, one or more second DMA commands are generated according to the SGL and/or PRP, and the one or more second DMA commands are stored in the shared memory.
Step S202 is then executed: a first DMA command index is sent in response to completion of storage of the one or more first DMA commands corresponding to a first NVMe command, and a second DMA command index is sent in response to completion of storage of the one or more second DMA commands corresponding to a second NVMe command.
Finally, step S203 is executed: the one or more first DMA commands are fetched from the shared memory according to the first DMA command index, and data is moved from the first host according to the fetched first DMA commands; the one or more second DMA commands are fetched from the shared memory according to the second DMA command index, and data is moved from the second host according to the fetched second DMA commands.
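A compact end-to-end sketch of steps S201-S203 for one first write command, reusing the hypothetical helpers from the earlier sketches; dma_fetch_and_move() stands for the first DMA transfer circuit fetching the commands by index and moving data from the first host, and is likewise an assumption.

    /* An end-to-end sketch of steps S201-S203 for one first write command. */
    #include <stddef.h>
    #include <stdint.h>

    extern uint32_t build_dma_cmds(const uint64_t *prps, size_t n,
                                   uint64_t dram_base, uint16_t id); /* S201 */
    extern void dma_fetch_and_move(int port, uint32_t dma_idx);      /* S203 */

    void handle_first_write(const uint64_t *prps, size_t n_prps,
                            uint64_t dram_base, uint16_t write_cmd_id)
    {
        /* S201: generate and store the first DMA commands */
        uint32_t idx = build_dma_cmds(prps, n_prps, dram_base, write_cmd_id);
        /* S202: completion of storage triggers sending the index;
         * S203: fetch the commands and move data from the first host. */
        dma_fetch_and_move(/*port=*/1, idx);
    }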
The method shown in fig. 10 is applicable to the case where the NVMe command is a write command, and can move the data indicated by the first NVMe command to the memory (e.g., DRAM) of the storage device.
The method of fig. 11 includes: step S301 is executed, in response to the first NVMe command, the first NVMe command is acquired and parsed to obtain a corresponding SGL and/or PRP, one or more first DMA commands are generated according to the SGL and/or PRP, and the one or more first DMA commands are stored in the shared memory; in response to the second NVMe commands, the second NVMe commands are acquired and parsed to obtain corresponding SGLs and/or PRPs, one or more second DMA commands are generated from the SGLs and/or PRPs, and the one or more second DMA commands are stored in the shared memory.
Then, step S302 is executed: the read initiate circuit requests the back-end module to move the data indicated by the one or more first DMA commands corresponding to the first NVMe command from the NVM to the storage device memory (DRAM), and requests the back-end module to move the data indicated by the one or more second DMA commands corresponding to the second NVMe command from the NVM to the storage device memory (DRAM).
Finally, step S303 is executed, where in response to the data of at least one first DMA command being moved into the memory of the storage device, a first DMA command index is sent, and the first DMA transfer circuit obtains and processes the first DMA command according to the first DMA command index. In response to the data of the at least one second DMA command being moved into the storage device memory, a second DMA command index is sent, and the second DMA transfer circuit retrieves and processes the second DMA command in accordance with the second DMA command index.
The method of fig. 12 includes steps S401, S402 and S403. It differs from the method of fig. 11 in that in step S402 the back-end module is requested by mutually independent read initiate circuits, and in step S403 the mutually independent read initiate circuits respectively instruct the first DMA transfer circuit to process the first DMA command in response to the data indicated by the first DMA command having been moved to the DRAM, and instruct the second DMA transfer circuit to process the second DMA command in response to the data indicated by the second DMA command having been moved to the DRAM.
In summary, the methods of fig. 11 and fig. 12 are applicable when the NVMe command is a read command. In the method of fig. 11, one read initiate circuit interacts with the back-end module, the first DMA transfer circuit, and the second DMA transfer circuit, whereas in fig. 12, for example, a first read initiate circuit interacts with the back-end module and the first DMA transfer circuit, and a second read initiate circuit interacts with the back-end module and the second DMA transfer circuit. The related methods of these embodiments can be understood with reference to fig. 5, 6, 7 and 8.
According to an aspect of the present application, an embodiment of the present application further provides a storage device; referring to the storage device 102 shown in fig. 1A and 1B, the storage device 102 includes an interface 103, a control unit 104, one or more NVM chips 105, and a DRAM 110. The control unit 104 includes the NVMe controller described in the embodiments above; since the NVMe controller has been described in detail, it is not described again here.
According to an aspect of the present application, an embodiment of the present application further provides an electronic device, which includes a processor and a storage device, the storage device being the one mentioned in the embodiment above. Since it has been described in detail above, it is not described again here.
It should be noted that, for the sake of brevity, some methods and their embodiments are described in the present application as a series of actions and combinations thereof, but those skilled in the art will understand that the aspects of the present application are not limited by the order of the described actions. Thus, based on the disclosure or teaching of the present application, one of ordinary skill in the art will appreciate that certain steps may be performed in other orders or concurrently. Further, those skilled in the art will appreciate that the embodiments described herein may be considered alternative embodiments, in which the actions or modules involved are not necessarily required for some or all aspects of the present application. In addition, the descriptions of the various embodiments each have their own emphasis; for portions of one embodiment that are not described in detail, reference may be made to the related descriptions of other embodiments.
In particular implementations, based on the disclosure and teachings of the present application, one of ordinary skill in the art will appreciate that several embodiments disclosed herein may also be implemented in other ways not disclosed herein. For example, in terms of the foregoing embodiments of the electronic device or apparatus, the units are split in consideration of the logic function, and there may be another splitting manner when actually implemented. For another example, multiple units or components may be combined or integrated into another system, or some features or functions in the units or components may be selectively disabled. In terms of the connection relationship between different units or components, the connections discussed above in connection with the figures may be direct or indirect couplings between the units or components. In some scenarios, the foregoing direct or indirect coupling involves a communication connection utilizing an interface, where the communication interface may support electrical, optical, acoustical, magnetic, or other forms of signal transmission.
While preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the application. It will be apparent to those skilled in the art that various modifications and variations can be made in the present application without departing from the spirit or scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims and the equivalents thereof, the present application is intended to cover such modifications and variations.

Claims (10)

1. A dual port NVMe controller, comprising:
the first host interface and the second host interface are respectively connected with the first host and the second host and are respectively used for receiving a first NVMe command sent by the first host and a second NVMe command sent by the second host, wherein the first NVMe command and the second NVMe command are commands conforming to an NVMe protocol;
the host command processing unit comprises a first host command processing branch and a second host command processing branch; the first host command processing branch is used for processing a first NVMe command received from the first host interface; the second host command processing branch is used for processing a second NVMe command received from the second host interface; the first host command processing branch generates a first storage command according to a first NVMe command; and the second host command processing branch generates a second storage command according to the second NVMe command; and
at least one shared memory for storing the first and second NVMe commands.
2. The NVMe controller of claim 1, wherein the shared memory includes a first shared memory:
the first shared memory is connected with the first host command processing branch and the second host command processing branch and is configured to store the first NVMe command and the second NVMe command.
3. The NVMe controller according to claim 1 or 2, further comprising a shutdown control unit configured to:
closing the first host interface and the first host command processing branch in response to the notification of the closing of the first host interface; and
in response to the notification of the second host interface closing, the second host interface and the second host command processing branch are closed.
4. The NVMe controller of claim 3, wherein the shutdown control unit is further configured to:
releasing the storage space occupied by the first NVMe command in the shared memory in response to the notification of the closing of the first host interface; and
and responding to the notification of the closing of the second host interface, and releasing the storage space occupied by the second NVMe command in the shared memory.
5. The NVMe controller of any one of claims 1-4, wherein
the first host command processing branch includes a first SGL and/or PRP unit, a first write initiate circuit and a first DMA transfer circuit:
the first SGL and/or PRP unit, in response to the received first NVMe command, acquires the SGL and/or PRP corresponding to the first NVMe command, generates one or more first DMA commands according to the SGL and/or PRP, and stores the one or more first DMA commands in the shared memory;
the first write initiate circuit, in response to completion of storage of the one or more first DMA commands corresponding to a first NVMe command, sends a first DMA command index to the first DMA transfer circuit;
the first DMA transfer circuit acquires the one or more first DMA commands from the shared memory according to the first DMA command index, and moves data from the first host according to the acquired one or more first DMA commands.
6. The NVMe controller of any one of claims 1-5, wherein
the second host command processing branch includes a second SGL and/or PRP unit, a second write initiate circuit and a second DMA transfer circuit:
the second SGL and/or PRP unit, in response to the received second NVMe command, acquires the SGL and/or PRP corresponding to the second NVMe command, generates one or more second DMA commands according to the SGL and/or PRP, and stores the one or more second DMA commands in the shared memory;
the second write initiate circuit, in response to completion of storage of the one or more second DMA commands corresponding to a second NVMe command, sends a second DMA command index to the second DMA transfer circuit;
the second DMA transfer circuit acquires the one or more second DMA commands from the shared memory according to the second DMA command index, and moves data from the second host according to the acquired one or more second DMA commands.
7. The NVMe controller of any one of claims 1-6, wherein
the first host command processing branch comprises a first SGL and/or PRP unit and a first DMA transfer circuit; the second host command processing branch comprises a second SGL and/or PRP unit and a second DMA transfer circuit; and the NVMe controller further comprises at least one read initiate circuit;
the first SGL and/or PRP unit is configured to obtain and parse the first NVMe command to obtain the corresponding SGL and/or PRP, generate one or more first DMA commands according to the SGL and/or PRP, and store the one or more first DMA commands in the shared memory;
the second SGL and/or PRP unit is configured to obtain and parse the second NVMe command to obtain the corresponding SGL and/or PRP, generate one or more second DMA commands according to the SGL and/or PRP, and store the one or more second DMA commands in the shared memory;
the read initiate circuit requests the back-end module to move the data indicated by the one or more first DMA commands or the one or more second DMA commands from the NVM to the storage device memory, and provides the first DMA command index to the first DMA transfer circuit or the second DMA command index to the second DMA transfer circuit in response to the data of at least one first DMA command or at least one second DMA command being moved into the storage device memory;
the first DMA transfer circuit acquires the corresponding at least one first DMA command from the shared memory according to the first DMA command index received from the read initiate circuit, and moves data to the first host according to the acquired at least one first DMA command;
the second DMA transfer circuit acquires the corresponding at least one second DMA command from the shared memory according to the second DMA command index received from the read initiate circuit, and moves data to the second host according to the acquired at least one second DMA command.
8. The NVMe controller of claim 7, wherein
the read initiate circuit includes a first read initiate circuit and a second read initiate circuit:
the first read initiate circuit is configured to request the back-end module to move the data indicated by the one or more first DMA commands corresponding to the first NVMe command from the NVM to the storage device memory; and to provide the first DMA command index to the first DMA transfer circuit in response to the data of at least one first DMA command being moved into the storage device memory;
the second read initiate circuit is configured to request the back-end module to move the data indicated by the one or more second DMA commands corresponding to the second NVMe command from the NVM to the storage device memory; and to provide the second DMA command index to the second DMA transfer circuit in response to the data of at least one second DMA command being moved into the storage device memory.
9. The NVMe controller of claim 7 or 8, wherein
when only one of the first host interface and the second host interface is connected to a host, the first read initiate circuit and the second read initiate circuit jointly control the first DMA transfer circuit or the second DMA transfer circuit.
10. The control method of the dual-port NVMe controller is used for connecting two hosts, and is characterized by comprising the following steps:
processing a first NVMe command from a first host through a first host interface and a first host command processing branch, wherein the first NVMe command and the second NVMe command are commands conforming to the NVMe protocol;
processing a second NVMe command from a second host through a second host interface and a second host command processing branch, wherein the first host command processing branch generates a first storage command according to the first NVMe command, and the second host command processing branch generates a second storage command according to the second NVMe command; and
storing the first NVMe command and the second NVMe command through at least one shared memory.