CN116382581A - Method, system, equipment and storage medium for accelerating execution of NVMe protocol - Google Patents

Method, system, equipment and storage medium for accelerating execution of NVMe protocol

Info

Publication number
CN116382581A
Authority
CN
China
Prior art keywords
nvme, queue, channel, engine, solid state
Prior art date
Legal status
Pending
Application number
CN202310312326.7A
Other languages
Chinese (zh)
Inventor
郑俊飞
汪勋
Current Assignee
Shandong Yunhai Guochuang Cloud Computing Equipment Industry Innovation Center Co Ltd
Original Assignee
Shandong Yunhai Guochuang Cloud Computing Equipment Industry Innovation Center Co Ltd
Priority date
Filing date
Publication date
Application filed by Shandong Yunhai Guochuang Cloud Computing Equipment Industry Innovation Center Co Ltd
Priority to CN202310312326.7A
Publication of CN116382581A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0602 Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F 3/061 Improving I/O performance
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 13/00 Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F 13/14 Handling requests for interconnection or transfer
    • G06F 13/20 Handling requests for interconnection or transfer for access to input/output bus
    • G06F 13/28 Handling requests for interconnection or transfer for access to input/output bus using burst mode transfer, e.g. direct memory access DMA, cycle steal
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 13/00 Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F 13/38 Information transfer, e.g. on bus
    • G06F 13/42 Bus transfer protocol, e.g. handshake; Synchronisation
    • G06F 13/4282 Bus transfer protocol, e.g. handshake; Synchronisation on a serial bus, e.g. I2C bus, SPI bus
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0628 Interfaces specially adapted for storage systems making use of a particular technique
    • G06F 3/0655 Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F 3/0659 Command handling arrangements, e.g. command buffers, queues, command scheduling
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0668 Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F 3/0671 In-line storage system
    • G06F 3/0673 Single storage device
    • G06F 3/0679 Non-volatile semiconductor memory device, e.g. flash memory, one time programmable memory [OTP]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2213/00 Indexing scheme relating to interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F 2213/0026 PCI express
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Exchange Systems With Centralized Control (AREA)
  • Communication Control (AREA)

Abstract

The invention provides a method, a system, a device and a storage medium for accelerating execution of the NVMe protocol. The method comprises the following steps: initializing PCIe and NVMe; reading a channel state register of an NVMe engine through an NVMe driver, obtaining the number of an unallocated channel, and configuring priority and bandwidth-quota information into a channel configuration register of the NVMe engine; periodically executing an IO queue allocation greedy algorithm in the NVMe engine, computing a channel-to-IO-queue allocation that raises system throughput, and updating the IO queue address set of the channel mapping table; and, after IO transmission on all channels has finished, constructing an IO queue creation command in the NVMe engine and transmitting it to the solid state disk through the management request/response queue so as to modify the number of IO request/response queues on the solid state disk. Because every flow step of the NVMe protocol is genuinely executed in parallel, the invention achieves higher computational concurrency than systems that implement the NVMe protocol in software.

Description

Method, system, equipment and storage medium for accelerating execution of NVMe protocol
Technical Field
The present invention relates to the field of storage systems, and in particular to a method, a system, an apparatus and a storage medium for accelerating the execution of the NVMe protocol.
Background
With the development of information technologies such as artificial intelligence, the Internet of Things and cloud computing, more and more intelligent hardware is emerging, the volume of business data it generates grows exponentially, and the data storage performance requirements of data center servers keep rising. To improve system storage performance, researchers have designed storage protocols such as NVMe (Non-Volatile Memory Express), whose low-latency, multi-queue, high-concurrency mechanisms can raise storage system performance.
The common implementation of the NVMe protocol divides the system into a host and a device, following the NVMe specification. The host implements the host-side NVMe procedure through an NVMe driver, on top of which a block device driver, a file system and storage applications are built; the device implements the device-side NVMe procedure. Host and device are connected through a PCIe interface and exchange data through the multi-queue mechanism of the NVMe protocol and PCIe DMA.
This practice can effectively improve storage system performance, but it still has the following problems:
The transmission performance of the NVMe protocol is low. Because the host-side NVMe protocol is implemented in software, and software algorithms are procedure-oriented, the steps of the NVMe protocol actually execute serially. Even on today's symmetric multiprocessor systems, computational parallelism remains lower, and host processor utilization higher, than in systems that accelerate the software with hardware.
Application development for the system is difficult. Because the NVMe protocol only prescribes the maximum number and depth of queues, a developer must repeatedly tune the number and depth of queues for each application scenario to find the optimum, which makes application development difficult.
The system lacks quality-of-service management. Because the host-side underlying software implements no quality-of-service control algorithm, when multiple processes with different priorities occupy bandwidth simultaneously, a high-priority service process may starve.
System bandwidth lacks load balancing. Because the host-side underlying software implements no load-balancing mechanism, when several service processes of the same priority run, the system cannot dynamically adjust the number of NVMe queues assigned to each process according to its actual bandwidth.
Disclosure of Invention
In view of this, an objective of the embodiments of the present invention is to provide a method, a system, a computer device and a computer-readable storage medium for accelerating the execution of the NVMe protocol, in which each flow step of the NVMe protocol is executed in parallel and computing concurrency is high.
Based on the above objects, an aspect of the embodiments of the present invention provides a method for accelerating the execution of the NVMe protocol, including the following steps: initializing PCIe and NVMe; reading a channel state register of an NVMe engine through an NVMe driver, obtaining the number of an unallocated channel, and configuring priority and bandwidth-quota information into a channel configuration register of the NVMe engine; periodically executing an IO queue allocation greedy algorithm in the NVMe engine, computing a channel-to-IO-queue allocation that raises system throughput, and updating the IO queue address set of the channel mapping table; and, after IO transmission on all channels has finished, constructing an IO queue creation command in the NVMe engine and transmitting it to the solid state disk through the management request/response queue so as to modify the number of IO request/response queues on the solid state disk.
In some embodiments, initializing PCIe and NVMe comprises: identifying an NVMe solid state disk through the host PCIe driver, reading the PCIe link state information of the NVMe solid state disk, and allocating to it a linear address for accessing the disk registers; and configuring the information of the NVMe solid state disk into a channel configuration register of the NVMe engine through the host PCIe driver.
In some embodiments, initializing PCIe and NVMe comprises: creating, in memory through the NVMe driver, as many management request/response queues as there are NVMe solid state disks and more IO request/response queues than there are NVMe solid state disks, and configuring all queue addresses into a queue configuration register of the NVMe engine; and constructing an NVMe IO queue creation command from the IO request/response queue addresses and writing it into the management request/response queue of the solid state disk corresponding to each channel.
In some embodiments, the method further comprises: reading channel configuration information from the channel configuration register and writing it into a channel mapping table.
In some embodiments, the method further comprises: returning the obtained channel number to a management process, which creates a service process; the service process then reads and writes data blocks on the solid state disk based on the channel number.
In some embodiments, the method further comprises: reading an IO command from an IO channel through the NVMe engine, splitting the IO command into a plurality of parallel subcommands, and forwarding the subcommands to the IO queue set that the greedy algorithm has allocated to that IO channel.
In some embodiments, the method further comprises: notifying the solid state disk, through the NVMe engine, to fetch the IO command from the IO queue; according to the data block address, the device-side starting logical block address (SLBA) and the device-side number of logical blocks (NLB) carried in the IO command, data is read from the NAND flash address corresponding to the logical block address into the data block address, or the data at the data block address is written into NAND flash.
In another aspect of the embodiments of the present invention, a system for accelerating the execution of the NVMe protocol is provided, including: an initialization module configured to initialize PCIe and NVMe; an allocation module configured to read a channel state register of the NVMe engine through the NVMe driver, obtain the number of an unallocated channel, and configure priority and bandwidth-quota information into a channel configuration register of the NVMe engine; a computing module configured to periodically execute an IO queue allocation greedy algorithm in the NVMe engine, compute a channel-to-IO-queue allocation that raises system throughput, and update the IO queue address set of the channel mapping table; and an execution module configured to, in response to IO transmission on all channels finishing, construct an IO queue creation command through the NVMe engine and transmit it to the solid state disk through the management request/response queue so as to modify the number of IO request/response queues on the solid state disk.
In yet another aspect of the embodiment of the present invention, there is also provided a computer apparatus, including: at least one processor; and a memory storing computer instructions executable on the processor, which when executed by the processor, perform the steps of the method as above.
In yet another aspect of the embodiments of the present invention, there is also provided a computer-readable storage medium storing a computer program which, when executed by a processor, implements the method steps as described above.
The invention has the following beneficial technical effects:
(1) The NVMe protocol achieves higher transmission performance. Because the host-side NVMe protocol is implemented in hardware, the pipelined execution of hardware digital logic lets all flow steps of the NVMe protocol execute in parallel; compared with a system implementing the NVMe protocol in software, the invention has higher computational concurrency and lower host processor utilization;
(2) Application development for the system is easier. The host-side NVMe driver only provides a data transmission interface and maintains the mapping between service process data blocks and LBAs, while the NVMe acceleration engine implements quality-of-service management, load balancing and the like and transfers the data to the NVMe SSD; application developers need not implement any of this, so application development is easier;
(3) The system of the present invention provides quality-of-service management. Because the host-side NVMe acceleration engine implements a quality-of-service control algorithm for applications, even when many service processes of different priorities occupy bandwidth resources simultaneously, high-priority service processes still obtain effective bandwidth and do not starve;
(4) The system of the invention provides bandwidth load balancing. Because the host-side NVMe acceleration engine implements a bandwidth load-balancing mechanism for applications, when the system runs several service processes of equal priority but different actual transmission bandwidths, it can assign each process a number of NVMe queues matching its actual bandwidth demand, so the bandwidth is fully utilized.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are necessary for the description of the embodiments or the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the invention and that other embodiments may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of an embodiment of a method for accelerating NVMe protocol according to the present invention;
FIG. 2 is a flow chart of a method for accelerating the execution of NVMe protocol according to the present invention;
FIG. 3 is a schematic diagram of an initialization process according to the present invention;
FIG. 4 is a schematic diagram of IO request/response flow provided by the present invention;
FIG. 5 is a schematic diagram of an embodiment of a system for accelerating NVMe protocol according to the present invention;
FIG. 6 is a schematic diagram of a hardware structure of an embodiment of a computer device for accelerating the execution of NVMe protocol according to the present invention;
fig. 7 is a schematic diagram of an embodiment of a computer storage medium for accelerating the execution of NVMe protocol according to the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the following embodiments of the present invention will be described in further detail with reference to the accompanying drawings.
It should be noted that, in the embodiments of the present invention, all the expressions "first" and "second" are used to distinguish two entities with the same name but different entities or different parameters, and it is noted that the "first" and "second" are only used for convenience of expression, and should not be construed as limiting the embodiments of the present invention, and the following embodiments are not described one by one.
In a first aspect of the embodiment of the present invention, an embodiment of a method for accelerating the execution of NVMe protocol is provided. Fig. 1 is a schematic diagram of an embodiment of a method for accelerating the execution of NVMe protocol according to the present invention. As shown in fig. 1, the embodiment of the present invention includes the following steps:
s1, initializing PCIe and NVMe;
s2, reading a channel state register of the NVMe engine through NVMe drive, acquiring a channel number with unassigned state, and configuring priority and bandwidth quota information into a channel configuration register of the NVMe engine;
s3, executing an IO queue allocation greedy algorithm at regular time through the NVMe engine, calculating a channel IO queue allocation mode which enables the throughput rate of the system to be higher, and updating an IO queue address set of a channel mapping table; and
s4, after the IO transmission of all channels is finished, an IO queue creation command is constructed through an NVMe engine, and the IO queue creation command is transmitted to the solid state disk through a management request/response queue so as to modify the number of IO request/response queues of the solid state disk.
A greedy algorithm always takes the choice that looks best at the current step, i.e. it applies a locally optimal strategy at every step. It therefore cannot guarantee a globally optimal solution to the problem, only a locally optimal solution close to it; but because it is simple to implement, it avoids consuming large numbers of processor time slices on exhaustively enumerating every case needed to find the global optimum.
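The patent does not fix the greedy algorithm's exact scoring rule, so the following Python sketch is one plausible instance of the locally optimal strategy described above: each free IO queue is handed to the channel whose measured commit bandwidth per already-assigned queue is currently highest. All names and the bandwidth figures are hypothetical.

```python
def allocate_io_queues(channels: dict, total_queues: int) -> dict:
    """Greedy IO-queue allocation sketch.

    `channels` maps channel id -> measured commit bandwidth (e.g. MB/s).
    Returns channel id -> number of IO queues assigned. Each step is only
    locally optimal, so the result approximates, not guarantees, the
    globally best allocation.
    """
    alloc = {ch: 1 for ch in channels}       # every active channel keeps one queue
    remaining = total_queues - len(channels)
    assert remaining >= 0, "need at least one queue per channel"
    for _ in range(remaining):
        # locally optimal step: pick the channel with the most bandwidth
        # still pressing on each of its queues
        ch = max(channels, key=lambda c: channels[c] / alloc[c])
        alloc[ch] += 1
    return alloc
```

For example, with channels submitting 300 and 100 MB/s and four queues, the busier channel ends up with three queues, illustrating how the address set in the channel mapping table would shift toward high-throughput channels.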
The invention provides a method for accelerating the execution of an NVMe protocol based on a greedy algorithm, which solves the problems in the prior art, such as high utilization rate of a host processor, low concurrency performance of a system, lack of service quality control, lack of load balancing and the like.
The technical scheme adopted by the invention is as follows:
the whole system is divided into a host end and a device end according to functions, and the host end and the device end are connected through a PCIe interface. The host side provides modules such as PCIe drive, NVMe engine, NVMe drive, management process, service process and the like, and is used for realizing a host side NVMe protocol flow specified by the NVMe protocol standard and accelerating the execution of the NVMe protocol through the NVMe engine. The equipment side provides a plurality of NVMe SSDs for realizing equipment side NVMe protocol flow specified by the NVMe protocol standard.
Fig. 2 is an execution flow chart of the method for accelerating the execution of the NVMe protocol, and as shown in fig. 2, the system is divided into a host and an NVMe SSD (solid state disk).
The functions of each module of the host are as follows:
(1) Management process. Used to manage service processes; it comprises the following sub-modules:
a) System initialization. Initializes the NVMe storage system;
b) Service process creation/revocation. Creates/revokes service processes that need to store data.
(2) Service process. An application-specific storage service process containing one or more data block addresses.
(3) NVMe driver. Drives the NVMe acceleration engine; it comprises the following sub-modules:
a) Queue creation/deletion. Allocates the management and IO request/response queue memory specified by the NVMe protocol;
b) Channel allocation/reclamation. Allocates a channel number representing a specific IO channel of the NVMe engine;
c) Data block-LBA mapping. Maps the data block addresses of a service process to disk LBAs;
d) IO command submit/status read. Writes IO commands to the NVMe engine and reads command execution status.
(4) NVMe engine. A hardware circuit that accelerates the execution of the NVMe protocol, comprising a management flow and an IO flow.
The management flow of the NVMe engine comprises the following sub-modules:
a) Queue configuration register. Receives the management and IO request/response queue addresses configured by the NVMe driver;
b) Queue configuration update. Synchronizes the contents of the queue configuration register into the channel mapping table;
c) Channel configuration/status registers. Receive IO channel information configured by the NVMe driver, such as transmission priority and bandwidth quota, and expose the data transmission status of the IO channels;
d) Channel configuration update/status read. Synchronizes channel configuration register information into the channel mapping table and synchronizes IO channel status information into the status register for the NVMe driver to read;
e) Channel mapping table. Holds channel configuration and status information; each entry maps one-to-one: channel register address, whether allocated, whether abnormal, disk BAR, disk capacity, priority, negotiated link maximum bandwidth, bandwidth quota, commit bandwidth, management request/response queue address, and the IO request/response queue address set;
f) Management command request/response forwarding. Writes NVMe management command requests into the management request queue and reads management command execution results from the management response queue;
g) Management request/response queues. Carry management command request/response information between the host and the disk.
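As a rough illustration, a channel mapping table entry as described in e) could be modeled as follows; the field names and types are assumptions paraphrasing the list above, since the patent fixes no concrete layout.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ChannelMapEntry:
    """One row of the hypothetical channel mapping table; names paraphrase
    the fields enumerated in the description, not a mandated layout."""
    register_addr: int            # channel register address
    allocated: bool = False       # whether the channel is assigned
    faulted: bool = False         # whether the channel is abnormal
    disk_bar: int = 0             # BAR linear address of the backing disk
    disk_capacity: int = 0        # disk capacity
    priority: int = 0             # transmission priority
    link_max_bw: int = 0          # negotiated link maximum bandwidth
    bw_quota: int = 0             # bandwidth quota
    commit_bw: int = 0            # measured commit bandwidth
    admin_queue_addr: int = 0     # management request/response queue address
    io_queues: List[int] = field(default_factory=list)  # IO request/response queue address set
```

The `io_queues` list is the piece the greedy algorithm rewrites when it reassigns queues between channels.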
The IO flow of the NVMe engine comprises the following sub-modules:
a) IO command request/response registers. Receive IO commands configured by the NVMe driver and return the execution results of the IO commands to the NVMe driver;
b) IO commit bandwidth statistics. Computes, from the data length carried in IO commands submitted by the NVMe driver, the average IO commit bandwidth of a service process's channel over a period of time;
c) IO command request/response forwarding. Forwards the IO command held in the IO command request register to the IO channel and forwards IO command response information from the IO channel to the IO command response register;
d) IO channel. Splits IO command requests from the NVMe driver and forwards them to several IO request queues, then merges the several pieces of IO response information from the IO response queues into one IO response, so IO commands execute in parallel;
e) IO request/response queue allocation greedy algorithm. Reassigns all IO request/response queues of the system to different IO channels with a greedy algorithm, based on the IO channel status information held in the channel mapping table.
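The splitting and merging performed by the IO channel in d) might be sketched like this; the even contiguous split and the dict-based subcommand shape are assumptions made for illustration only.

```python
def split_io_command(slba: int, nlb: int, queues: list) -> list:
    """Split one IO request into per-queue subcommands so the disk can
    service them in parallel; subranges are contiguous and together
    cover [slba, slba + nlb). Assumes nlb > 0."""
    k = min(len(queues), nlb)            # never create an empty subcommand
    base, extra = divmod(nlb, k)
    subcmds, cursor = [], slba
    for i in range(k):
        count = base + (1 if i < extra else 0)
        subcmds.append({"queue": queues[i], "slba": cursor, "nlb": count})
        cursor += count
    return subcmds

def merge_io_responses(responses: list) -> dict:
    """Merge subcommand completions into a single IO response: the merged
    response succeeds only if every subcommand succeeded."""
    return {"ok": all(r["ok"] for r in responses)}
```

Splitting 10 blocks over three queues, for instance, yields contiguous subranges of 4, 3 and 3 blocks, which is the parallelism the hardware IO channel exploits.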
(5) PCIe driver. Initializes the NVMe SSD disks on the system's external PCIe interface; it comprises the following sub-modules:
a) Disk enumeration. Identifies the NVMe SSD disks on the host PCIe interface through a depth-first search algorithm;
b) BAR mapping. Allocates host linear addresses to the BAR registers contained in each NVMe SSD disk;
c) Disk 1 to n BARs. Each disk is assigned a host linear address by the BAR mapping module.
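The depth-first disk enumeration in a) can be illustrated with a toy topology walk. The tree encoding (lists for bridges, dicts for endpoint functions) is invented for this sketch, though 0x010802 is the real PCI class code for an NVM Express controller.

```python
def enumerate_nvme_disks(bus) -> list:
    """Depth-first walk of a hypothetical PCIe topology tree: bridges are
    lists of children, endpoints are dicts; collect the bus/device/function
    of every NVMe endpoint (class code 0x010802) in discovery order."""
    found = []
    stack = [bus]
    while stack:
        node = stack.pop()
        if isinstance(node, dict):                    # endpoint function
            if node.get("class_code") == 0x010802:    # NVM Express class code
                found.append(node["bdf"])
        else:                                          # bridge: descend into children
            stack.extend(reversed(node))               # preserve left-to-right DFS order
    return found
```

Each BDF found this way would then get a host linear address for its BARs from the BAR mapping module in b).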
The functions of each NVMe SSD module are as follows:
(1) BAR registers. The register space of the disk, comprising the following sub-modules:
a) Management request/response queue address register. Receives the NVMe management command request/response queue addresses;
b) IO request/response queue address registers 1 to m. Receive the NVMe IO command request/response queue addresses.
(2) Control logic. The command processing module of the disk, comprising the following sub-modules:
a) Management queue command processing/response. Processes an NVMe management command and then returns the command execution result;
b) IO queue command processing/response. Processes an NVMe IO command and then returns the command execution result;
c) DMA control. Performs DMA according to the read/write control, SLBA, NLB, data pointer and other fields of the IO command;
d) Interrupt control. Issues an interrupt signal to notify the host when a disk transfer completes or an exception occurs.
(3) NAND flash. Stores the data blocks from the host.
PCIe and NVMe are initialized.
In some embodiments, initializing PCIe and NVMe comprises: identifying an NVMe solid state disk through the host PCIe driver, reading the PCIe link state information of the NVMe solid state disk, and allocating to it a linear address for accessing the disk registers; and configuring the information of the NVMe solid state disk into a channel configuration register of the NVMe engine through the host PCIe driver.
The PCIe initialization flow is as follows:
the host PCIe driver uses a depth-first search algorithm to identify the NVMe SSD disk and reads the PCIe link state information of the disk, such as the maximum link bandwidth supported by the disk and the maximum link bandwidth negotiated by the host; the host PCIe driver distributes linear addresses for accessing the disk registers to the disk through the BAR mapping module; the host PCIe driver configures disk information such as BAR address, negotiated maximum link bandwidth, etc. to the channel configuration register of the NVMe engine, which then synchronously updates the above information to the channel map from the register.
In some embodiments, initializing PCIe and NVMe comprises: creating, in memory through the NVMe driver, as many management request/response queues as there are NVMe solid state disks and more IO request/response queues than there are NVMe solid state disks, and configuring all queue addresses into a queue configuration register of the NVMe engine; and constructing an NVMe IO queue creation command from the IO request/response queue addresses and writing it into the management request/response queue of the solid state disk corresponding to each channel.
The NVMe initialization scheme is as follows:
the host management process executes a system initialization interface, and invokes an NVMe drive queue creation interface therein; the NVMe driver creates management request/response queues with the same number as the number of disks and IO request/response queues with the number larger than the number of disks in a memory, and then configures all queue addresses to a queue configuration register of the NVMe engine; the queue configuration updating module of the NVMe engine acquires all queue addresses from the register; the queue configuration updating module of the NVMe engine writes all queue addresses into a channel mapping table, allocates a management request/response queue for each channel, and equally allocates IO request/response queues for all channels; the management command request/response forwarding module of the NVMe engine constructs an IO queue creation command of the NVMe protocol by utilizing an IO request/response queue address, writes the command into a management request/response queue of an SSD disk corresponding to each channel, and informs a disk reading command by configuring a dellbell register of the SSD disk; the NVMe SSD reads out an IO queue creation command from the management request queue, analyzes and stores the IO request/response queue address from the command, and writes command response information into the management response queue.
Fig. 3 is a schematic diagram of an initialization flow provided in the present invention, and details of PCIe and NVMe initialization can be seen in fig. 3.
And reading the channel state register of the NVMe engine through the NVMe driver, acquiring a channel number whose state is unallocated, and configuring the priority and bandwidth quota information into the channel configuration register of the NVMe engine.
In some embodiments, the method further comprises: and reading channel configuration information from the channel configuration register and writing the channel configuration information into a channel mapping table.
In some embodiments, the method further comprises: and returning the acquired channel numbers to a management process, creating a service process by the management process, and reading and writing data blocks from the solid state disk based on the channel numbers through the service process.
The channel allocation flow is as follows:
a) During the power-on reset stage, the NVMe engine configures all channels in the channel mapping table to the unallocated state;
b) The channel state reading module of the NVMe engine synchronizes the channel states from the channel mapping table to the channel state register; the channel state includes whether the channel is allocated, whether it is abnormal, the disk capacity, and other information;
c) The management process executes the service process creation interface, invokes the channel allocation interface of the NVMe driver, and passes in the priority, bandwidth quota, and other information of the service process;
d) The NVMe driver reads the channel state register of the NVMe engine, acquires a channel number whose state is unallocated, and then configures the priority, bandwidth quota, and other information into the channel configuration register of the NVMe engine;
e) The channel configuration update module of the NVMe engine reads the channel configuration information from the channel configuration register;
f) The channel configuration update module of the NVMe engine writes the channel configuration information into the channel mapping table;
g) The NVMe driver sets the channel acquired in step d) to the allocated state by writing the channel state register;
h) The NVMe driver returns the channel number acquired in step d) to the management process, the management process creates a service process, and the service process subsequently reads and writes data blocks from the SSD based on the channel number.
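A minimal Python model of steps c)-h) of the channel allocation flow follows. The register representations and field names are illustrative assumptions, not the engine's real layout.

```python
# Toy model of channel allocation: the driver scans the channel state
# register for an unallocated channel, writes its configuration, marks it
# allocated, and returns the channel number to the management process.

UNALLOCATED, ALLOCATED = 0, 1

class ChannelRegisters:
    """Stand-in for the engine's channel state and configuration registers."""
    def __init__(self, num_channels):
        self.state = [UNALLOCATED] * num_channels   # channel state register
        self.config = [None] * num_channels         # channel config register

def allocate_channel(regs, priority, bandwidth_quota):
    """Find an unallocated channel, configure it, and mark it allocated."""
    for ch, st in enumerate(regs.state):
        if st == UNALLOCATED:
            regs.config[ch] = {"priority": priority, "quota": bandwidth_quota}
            regs.state[ch] = ALLOCATED
            return ch
    raise RuntimeError("no free channel")
```

A service process created with the returned channel number would then use that channel for all of its data block reads and writes.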
And periodically executing the IO queue allocation greedy algorithm through the NVMe engine, computing a channel IO queue allocation that yields higher system throughput, and updating the IO queue address set of the channel mapping table. After all channels complete their IO transfers, the NVMe engine constructs an IO queue creation command and delivers it to the solid state disk through the management request/response queue, thereby modifying the number of IO request/response queues of the solid state disk.
The greedy algorithm flow is as follows:
the IO submission bandwidth statistics module of the NVMe engine monitors the IO commands submitted by the NVMe driver to IO command request registers 1-n and computes each channel's average IO submission bandwidth over a period of time from the data length information of the IO commands; the IO submission bandwidth statistics module writes the IO submission bandwidth into the channel mapping table; the NVMe engine periodically executes the IO queue allocation greedy algorithm, using the channel's negotiated link maximum bandwidth, bandwidth quota, submitted bandwidth, and other information to compute a channel IO queue allocation that yields higher system throughput, and then triggers the channel update module to update the IO queue address set in the channel mapping table; after all channels complete their IO transfers, the NVMe engine constructs an IO queue creation command and delivers it to the SSD through the management request/response queue, thereby modifying the number of IO request/response queues on the disk.
First, the estimated IO queue count of each channel is calculated: PCIe link total bandwidth = sum of the maximum link bandwidths negotiated between disks 1-n and the host; single IO queue bandwidth = PCIe link total bandwidth / total IO queue count; single channel allocable bandwidth = min(disk-negotiated link maximum bandwidth, bandwidth quota, submitted bandwidth); estimated IO queue count of a single channel = single channel allocable bandwidth / single IO queue bandwidth. Then the greedy algorithm is executed. Following the greedy principle of the fractional knapsack problem, in which items may be partially loaded, the knapsack capacity is first allocated to the items of highest value, and the remaining capacity is then allocated in turn to items of lower value. Here the item value is the channel priority together with the estimated IO queue count: starting from the total IO queue count, IO queues are allocated first to high-priority channels and then to low-priority channels, and when channel priorities are equal, IO queues are allocated first to the channel with the larger estimated IO queue count. Taking as an example a channel count of 3, a maximum allocable IO queue count of 6, channel priorities of 7, 3, and 1, and estimated IO queue counts of 3, 4, and 5 respectively, the following IO queue allocation is obtained: 1) channel 1 is allocated 3 queues, leaving 3 queues; 2) channel 2 is allocated 3 queues, leaving no remaining queues; 3) channel 3 cannot be allocated any queue.
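The estimation formulas and the greedy allocation above can be sketched as follows; the function names and the tuple encoding of channels are assumptions made for illustration, not the engine's actual implementation.

```python
# Sketch of the patent's IO queue allocation: first estimate each channel's
# queue demand from bandwidth figures, then allocate queues greedily by
# priority (ties broken by larger estimate), fractional-knapsack style.

def estimate_queue_count(link_max_bw, quota_bw, submitted_bw,
                         pcie_total_bw, total_queues):
    """Estimated IO queue count = allocable bandwidth / single-queue bandwidth,
    where allocable bandwidth = min(link max, quota, submitted)."""
    single_queue_bw = pcie_total_bw / total_queues
    allocable_bw = min(link_max_bw, quota_bw, submitted_bw)
    return int(allocable_bw / single_queue_bw)

def allocate_io_queues(channels, total_queues):
    """channels: list of (channel_id, priority, estimated_queue_count).
    Highest priority served first; each channel gets at most its estimate,
    until the queue pool is exhausted."""
    order = sorted(channels, key=lambda c: (-c[1], -c[2]))
    remaining = total_queues
    alloc = {}
    for cid, _prio, est in order:
        granted = min(est, remaining)
        alloc[cid] = granted
        remaining -= granted
    return alloc
```

Running the worked example from the text (priorities 7, 3, 1 with estimates 3, 4, 5 over 6 queues) reproduces the stated allocation of 3, 3, and 0 queues.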
In some embodiments, the method further comprises: and reading an IO command from an IO channel through an NVMe engine, splitting the IO command into a plurality of parallel subcommands, and forwarding the subcommands to an IO queue set distributed to the IO channel by a greedy algorithm.
In some embodiments, the method further comprises: notifying, through the NVMe engine, the solid state disk to fetch the IO command from the IO queue and, according to the data block address, the device-side starting logical block address (SLBA), and the device-side logical block count (NLB) of the IO command, reading data from the Nand Flash address corresponding to the logical block address into the data block address, or writing the data at the data block address into the Nand Flash.
Fig. 4 is a schematic diagram of an IO request/response flow provided by the present invention, as shown in fig. 4, the IO request flow is as follows:
the service process calls the data block and LBA mapping interface of the NVMe driver, passing in the transfer direction (read or write), the data block address, and the channel number parameter acquired when the service process was created; the NVMe driver maps the data block address to an LBA according to the disk capacity corresponding to the channel, constructs an NVMe-protocol IO data read/write command from the data block address and the LBA, and then writes the command into the IO command submission module; the IO command submission module writes the IO command into the IO command request register of the NVMe engine corresponding to the channel; the IO command request/response forwarding module of the NVMe engine reads the IO command from the IO command request register; the IO command request/response forwarding module of the NVMe engine forwards the IO command to the corresponding IO channel; the IO splitting/aggregating module of the NVMe engine reads the IO command from the IO channel, splits it into a plurality of parallel subcommands, and then forwards the subcommands to the IO queue set allocated to the channel by the greedy algorithm; the IO splitting/aggregating module updates the BAR-space doorbell register of the disk corresponding to the channel, notifying the disk to fetch the IO commands from the IO queues and, according to the data block address, SLBA, NLB, and other fields of the IO commands, read data from the Nand Flash address corresponding to the LBA into the data block address or write the data at the data block address into the Nand Flash. The IO response follows the same path as the IO request flow in the opposite direction and is not described again here.
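The splitting performed by the IO splitting/aggregating module can be illustrated with a toy sketch. The even split by logical block count and the field names are assumptions; for simplicity, the data address is offset in block units rather than bytes.

```python
# Toy model of IO command splitting: one IO command covering NLB logical
# blocks starting at SLBA is split into parallel subcommands, one per IO
# queue in the channel's allocated queue set, with the blocks divided as
# evenly as possible.

def split_io_command(data_addr, slba, nlb, queue_ids):
    """Return subcommands covering contiguous, non-overlapping block ranges."""
    k = len(queue_ids)
    base, extra = divmod(nlb, k)
    subcommands, offset = [], 0
    for i, qid in enumerate(queue_ids):
        count = base + (1 if i < extra else 0)
        if count == 0:
            continue  # more queues than blocks: skip empty subcommands
        subcommands.append({"queue": qid,
                            "data_addr": data_addr + offset,  # block-unit offset
                            "slba": slba + offset,
                            "nlb": count})
        offset += count
    return subcommands
```

Aggregation on the response path would do the inverse: the IO command completes once every subcommand's response has arrived.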
According to the method for accelerating the NVMe protocol management flow disclosed in the embodiments of the present invention, the NVMe management command execution flow is implemented in hardware, reducing host processor utilization and software development difficulty; according to the NVMe protocol IO flow acceleration method of the embodiments of the present invention, the NVMe IO command execution flow is likewise implemented in hardware, reducing host processor utilization and improving transfer performance; the embodiments of the present invention implement a service process quality-of-service control strategy and a bandwidth load balancing mechanism based on the greedy algorithm, so that the NVMe acceleration engine can dynamically allocate and adjust the number of NVMe IO queues for each service process according to the service process's priority and actual storage bandwidth demand, thereby making full use of the system's storage bandwidth resources.
It should be noted that the steps in the embodiments of the method for accelerating NVMe protocol execution may be interleaved, replaced, added, or deleted; therefore, reasonable permutations and combinations of the method for accelerating NVMe protocol execution also fall within the protection scope of the present invention, and the protection scope of the present invention should not be limited to the embodiments.
Based on the above object, a second aspect of the embodiments of the present invention proposes a system for accelerating NVMe protocol execution. As shown in fig. 5, the system 200 comprises the following modules: an initialization module configured to initialize PCIe and NVMe; an allocation module configured to read the channel state register of the NVMe engine through the NVMe driver, acquire a channel number whose state is unallocated, and configure the priority and bandwidth quota information into the channel configuration register of the NVMe engine; a computing module configured to periodically execute the IO queue allocation greedy algorithm through the NVMe engine, compute a channel IO queue allocation that yields higher system throughput, and update the IO queue address set of the channel mapping table; and an execution module configured to, in response to all channels completing IO transfers, construct an IO queue creation command through the NVMe engine and deliver it to the solid state disk through the management request/response queue so as to modify the number of IO request/response queues of the solid state disk.
In some embodiments, the initialization module is configured to: identify the NVMe solid state disk through the host PCIe driver, read the PCIe link state information of the NVMe solid state disk, and allocate to it a linear address for accessing the disk registers; and configure the information of the NVMe solid state disk into the channel configuration register of the NVMe engine through the host PCIe driver.
In some embodiments, the initialization module is configured to: create in memory, through the NVMe driver, as many management request/response queues as there are NVMe solid state disks, plus a number of IO request/response queues greater than the number of NVMe solid state disks, and configure all queue addresses into the queue configuration register of the NVMe engine; and construct an NVMe-protocol IO queue creation command using the IO request/response queue addresses and write it into the management request/response queue of the solid state disk corresponding to each channel.
In some embodiments, the system further comprises a reading module configured to: and reading channel configuration information from the channel configuration register and writing the channel configuration information into a channel mapping table.
In some embodiments, the system further comprises a process module configured to: and returning the acquired channel numbers to a management process, creating a service process by the management process, and reading and writing data blocks from the solid state disk based on the channel numbers through the service process.
In some embodiments, the system further comprises a splitting module configured to: and reading an IO command from an IO channel through an NVMe engine, splitting the IO command into a plurality of parallel subcommands, and forwarding the subcommands to an IO queue set distributed to the IO channel by a greedy algorithm.
In some embodiments, the system further comprises a command module configured to: notify, through the NVMe engine, the solid state disk to fetch the IO command from the IO queue and, according to the data block address, the device-side starting logical block address (SLBA), and the device-side logical block count (NLB) of the IO command, read data from the Nand Flash address corresponding to the logical block address into the data block address, or write the data at the data block address into the Nand Flash.
In view of the above object, a third aspect of the embodiments of the present invention provides a computer device, comprising: at least one processor; and a memory storing computer instructions executable on the processor, the instructions when executed by the processor performing the following steps: S1, initializing PCIe and NVMe; S2, reading the channel state register of the NVMe engine through the NVMe driver, acquiring a channel number whose state is unallocated, and configuring the priority and bandwidth quota information into the channel configuration register of the NVMe engine; S3, periodically executing the IO queue allocation greedy algorithm through the NVMe engine, computing a channel IO queue allocation that yields higher system throughput, and updating the IO queue address set of the channel mapping table; and S4, after all channels complete IO transfers, constructing an IO queue creation command through the NVMe engine and delivering it to the solid state disk through the management request/response queue so as to modify the number of IO request/response queues of the solid state disk.
In some embodiments, the initializing PCIe and NVMe comprises: identifying the NVMe solid state disk through the host PCIe driver, reading the PCIe link state information of the NVMe solid state disk, and allocating to it a linear address for accessing the disk registers; and configuring the information of the NVMe solid state disk into the channel configuration register of the NVMe engine through the host PCIe driver.
In some embodiments, the initializing PCIe and NVMe comprises: creating in memory, through the NVMe driver, as many management request/response queues as there are NVMe solid state disks, plus a number of IO request/response queues greater than the number of NVMe solid state disks, and configuring all queue addresses into the queue configuration register of the NVMe engine; and constructing an NVMe-protocol IO queue creation command using the IO request/response queue addresses and writing it into the management request/response queue of the solid state disk corresponding to each channel.
In some embodiments, the steps further comprise: and reading channel configuration information from the channel configuration register and writing the channel configuration information into a channel mapping table.
In some embodiments, the steps further comprise: and returning the acquired channel numbers to a management process, creating a service process by the management process, and reading and writing data blocks from the solid state disk based on the channel numbers through the service process.
In some embodiments, the steps further comprise: and reading an IO command from an IO channel through an NVMe engine, splitting the IO command into a plurality of parallel subcommands, and forwarding the subcommands to an IO queue set distributed to the IO channel by a greedy algorithm.
In some embodiments, the steps further comprise: notifying, through the NVMe engine, the solid state disk to fetch the IO command from the IO queue and, according to the data block address, the device-side starting logical block address (SLBA), and the device-side logical block count (NLB) of the IO command, reading data from the Nand Flash address corresponding to the logical block address into the data block address, or writing the data at the data block address into the Nand Flash.
As shown in fig. 6, a hardware structure diagram of an embodiment of the computer device for accelerating the execution of the NVMe protocol according to the present invention is shown.
Taking the example of the apparatus shown in fig. 6, a processor 301 and a memory 302 are included in the apparatus.
The processor 301 and the memory 302 may be connected by a bus or otherwise, for example in fig. 6.
The memory 302 is used as a non-volatile computer readable storage medium, and may be used to store non-volatile software programs, non-volatile computer executable programs, and modules, such as program instructions/modules corresponding to the method for accelerating the execution of the NVMe protocol in the embodiments of the present application. The processor 301 executes various functional applications of the server and data processing, i.e., implements a method of accelerating the execution of the NVMe protocol, by running nonvolatile software programs, instructions, and modules stored in the memory 302.
Memory 302 may include a storage program area that may store an operating system, at least one application program required for functionality, and a storage data area; the storage data area may store data created according to the use of a method of accelerating the execution of the NVMe protocol, and the like. In addition, memory 302 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some embodiments, memory 302 may optionally include memory located remotely from processor 301, which may be connected to the local module via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
One or more computer instructions 303 corresponding to a method for accelerating the execution of the NVMe protocol are stored in the memory 302, which when executed by the processor 301, perform the method for accelerating the execution of the NVMe protocol in any of the method embodiments described above.
Any one embodiment of the computer device executing the method for accelerating the execution of the NVMe protocol can achieve the same or similar effect as any one embodiment of the method corresponding to the embodiment.
The present invention also provides a computer-readable storage medium storing a computer program that when executed by a processor performs a method of accelerating the execution of NVMe protocol.
Fig. 7 is a schematic diagram of an embodiment of the computer storage medium for accelerating the execution of the NVMe protocol according to the present invention. Taking a computer storage medium as shown in fig. 7 as an example, the computer readable storage medium 401 stores a computer program 402 that when executed by a processor performs the above method.
Finally, it should be noted that, as will be understood by those skilled in the art, implementing all or part of the above-mentioned embodiments of the method may be implemented by a computer program to instruct related hardware, and the program for accelerating the method for executing the NVMe protocol may be stored in a computer readable storage medium, where the program may include the steps of the embodiments of the above-mentioned methods when executed. The storage medium of the program may be a magnetic disk, an optical disk, a read-only memory (ROM), a random-access memory (RAM), or the like. The computer program embodiments described above may achieve the same or similar effects as any of the method embodiments described above.
The foregoing is an exemplary embodiment of the present disclosure, but it should be noted that various changes and modifications could be made herein without departing from the scope of the disclosure as defined by the appended claims. The functions, steps and/or actions of the method claims in accordance with the disclosed embodiments described herein need not be performed in any particular order. Furthermore, although elements of the disclosed embodiments may be described or claimed in the singular, the plural is contemplated unless limitation to the singular is explicitly stated.
It should be understood that as used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly supports the exception. It should also be understood that "and/or" as used herein is meant to include any and all possible combinations of one or more of the associated listed items.
The serial numbers of the foregoing embodiments of the present invention are for description only and do not represent the merits of the embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program for instructing relevant hardware, and the program may be stored in a computer readable storage medium, where the storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
Those of ordinary skill in the art will appreciate that: the above discussion of any embodiment is merely exemplary and is not intended to imply that the scope of the disclosure of embodiments of the invention, including the claims, is limited to such examples; combinations of features of the above embodiments or in different embodiments are also possible within the idea of an embodiment of the invention, and many other variations of the different aspects of the embodiments of the invention as described above exist, which are not provided in detail for the sake of brevity. Therefore, any omission, modification, equivalent replacement, improvement, etc. of the embodiments should be included in the protection scope of the embodiments of the present invention.

Claims (10)

1. A method for accelerating the execution of NVMe protocol, comprising the steps of:
initializing PCIe and NVMe;
reading a channel state register of an NVMe engine through an NVMe driver, acquiring a channel number with unassigned state, and configuring priority and bandwidth quota information into a channel configuration register of the NVMe engine;
the method comprises the steps of executing an IO queue allocation greedy algorithm at regular time through an NVMe engine, calculating a channel IO queue allocation mode with higher system throughput rate, and updating an IO queue address set of a channel mapping table; and
and after the IO transmission of all channels is finished, constructing an IO queue creation command through an NVMe engine, and transmitting the IO queue creation command to the solid state disk through a management request/response queue so as to modify the number of IO request/response queues of the solid state disk.
2. The method of claim 1, wherein initializing PCIe and NVMe comprises:
identifying an NVMe solid state disk through a host PCIe driver, reading PCIe link state information of the NVMe solid state disk, and allocating a linear address for accessing disk registers to the NVMe solid state disk; and
configuring the information of the NVMe solid state disk into a channel configuration register of the NVMe engine through the host PCIe driver.
3. The method of claim 1, wherein initializing PCIe and NVMe comprises:
creating management request/response queues with the same number as the NVMe solid state disks and IO request/response queues with the number larger than the number of the NVMe solid state disks in a memory through an NVMe driver, and configuring all queue addresses to a queue configuration register of an NVMe engine; and
and constructing an IO queue creation command of the NVMe protocol by utilizing the IO request/response queue address, and writing the IO queue creation command into a management request/response queue of the solid state disk corresponding to each channel.
4. The method according to claim 1, wherein the method further comprises:
and reading channel configuration information from the channel configuration register and writing the channel configuration information into a channel mapping table.
5. The method according to claim 4, wherein the method further comprises:
and returning the acquired channel numbers to a management process, creating a service process by the management process, and reading and writing data blocks from the solid state disk based on the channel numbers through the service process.
6. The method according to claim 1, wherein the method further comprises:
and reading an IO command from an IO channel through an NVMe engine, splitting the IO command into a plurality of parallel subcommands, and forwarding the subcommands to an IO queue set distributed to the IO channel by a greedy algorithm.
7. The method of claim 6, wherein the method further comprises:
notifying, through the NVMe engine, the solid state disk to fetch the IO command from the IO queue and, according to the data block address, the device-side starting logical block address, and the device-side logical block count of the IO command, reading data from the Nand Flash address corresponding to the logical block address into the data block address or writing the data at the data block address into the Nand Flash.
8. A system for accelerating the execution of NVMe protocol, comprising:
the initialization module is configured to initialize PCIe and NVMe;
the allocation module is configured to read a channel state register of the NVMe engine through the NVMe driver, acquire a channel number whose state is unallocated, and configure the priority and bandwidth quota information into a channel configuration register of the NVMe engine;
the computing module is configured to execute an IO queue allocation greedy algorithm at regular time through the NVMe engine, compute a channel IO queue allocation mode which enables the system throughput rate to be higher, and update an IO queue address set of the channel mapping table; and
and the execution module is configured to respond to the completion of IO transmission of all channels, construct an IO queue creation command through the NVMe engine, and transmit the IO queue creation command to the solid state disk through the management request/response queue so as to modify the number of the IO request/response queues of the solid state disk.
9. A computer device, comprising:
at least one processor; and
a memory storing computer instructions executable on the processor, which when executed by the processor, perform the steps of the method of any one of claims 1-7.
10. A computer readable storage medium storing a computer program, characterized in that the computer program when executed by a processor implements the steps of the method of any one of claims 1-7.
CN202310312326.7A 2023-03-24 2023-03-24 Method, system, equipment and storage medium for accelerating execution of NVMe protocol Pending CN116382581A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310312326.7A CN116382581A (en) 2023-03-24 2023-03-24 Method, system, equipment and storage medium for accelerating execution of NVMe protocol

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310312326.7A CN116382581A (en) 2023-03-24 2023-03-24 Method, system, equipment and storage medium for accelerating execution of NVMe protocol

Publications (1)

Publication Number Publication Date
CN116382581A true CN116382581A (en) 2023-07-04

Family

ID=86964024

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310312326.7A Pending CN116382581A (en) 2023-03-24 2023-03-24 Method, system, equipment and storage medium for accelerating execution of NVMe protocol

Country Status (1)

Country Link
CN (1) CN116382581A (en)


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116755639A (en) * 2023-08-18 2023-09-15 深圳大普微电子科技有限公司 Performance evaluation method and related device of flash memory interface
CN116755639B (en) * 2023-08-18 2024-03-08 深圳大普微电子科技有限公司 Performance evaluation method and related device of flash memory interface
CN116795735A (en) * 2023-08-23 2023-09-22 四川云海芯科微电子科技有限公司 Solid state disk space allocation method, device, medium and system
CN116795735B (en) * 2023-08-23 2023-11-03 四川云海芯科微电子科技有限公司 Solid state disk space allocation method, device, medium and system

Similar Documents

Publication Publication Date Title
KR102624607B1 (en) Rack-level scheduling for reducing the long tail latency using high performance ssds
US11700300B2 (en) Cluster resource management in distributed computing systems
US20220382460A1 (en) Distributed storage system and data processing method
US9092266B2 (en) Scalable scheduling for distributed data processing
CN116382581A (en) Method, system, equipment and storage medium for accelerating execution of NVMe protocol
CN108431796B (en) Distributed resource management system and method
US9977618B2 (en) Pooling of memory resources across multiple nodes
US10241836B2 (en) Resource management in a virtualized computing environment
US20080162735A1 (en) Methods and systems for prioritizing input/outputs to storage devices
KR20160087706A (en) Apparatus and method for resource allocation of a distributed data processing system considering virtualization platform
US20170010919A1 (en) Dynamic weight accumulation for fair allocation of resources in a scheduler hierarchy
US9262351B2 (en) Inter-adapter cooperation for multipath input/output systems
US11734172B2 (en) Data transmission method and apparatus using resources in a resource pool of a same NUMA node
US10359945B2 (en) System and method for managing a non-volatile storage resource as a shared resource in a distributed system
CN110389825B (en) Method, apparatus and computer program product for managing dedicated processing resources
EP3358795B1 (en) Method and apparatus for allocating a virtual resource in network functions virtualization (nfv) network
CN114860387B (en) I/O virtualization method of HBA controller for virtualization storage application
US9507637B1 (en) Computer platform where tasks can optionally share per task resources
JP2014186411A (en) Management device, information processing system, information processing method and program
CN113986137A (en) Storage device and storage system
US20220222010A1 (en) Advanced interleaving techniques for fabric based pooling architectures
US11928517B2 (en) Feature resource self-tuning and rebalancing
CN115469979A (en) Scheduling device and method for quantum control system and quantum computer
US11704056B2 (en) Independent set data lanes for IOD SSD
WO2018173300A1 (en) I/o control method and i/o control system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination