CN112463064A - I/O instruction management method and device based on double linked list structure - Google Patents
- Publication number: CN112463064A (application CN202011414847.6A)
- Authority
- CN
- China
- Prior art keywords
- instruction
- linked list
- disk device
- current
- target disk
- Prior art date
- Legal status: Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0602—Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
- G06F3/0604—Improving or facilitating administration, e.g. storage management
- G06F3/061—Improving I/O performance
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0655—Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
- G06F3/0659—Command handling arrangements, e.g. command buffers, queues, command scheduling
- G06F3/0668—Interfaces specially adapted for storage systems adopting a particular infrastructure
- G06F3/0671—In-line storage system
- G06F3/0683—Plurality of storage devices
- G06F3/0689—Disk arrays, e.g. RAID, JBOD
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Signal Processing For Digital Recording And Reproducing (AREA)
Abstract
The invention provides an I/O instruction management method and device based on a double linked list structure. The method comprises: reading an I/O instruction from an I/O instruction queue in memory; generating a device linked list according to the target disk device of the I/O instruction, and generating an instruction linked list corresponding to that target disk device according to the I/O instruction; selecting a first physical channel by polling the physical channels; selecting a first disk device from the device linked list; selecting a first I/O instruction from the instruction linked list corresponding to the first disk device; and issuing the first I/O instruction and the first disk device to the first physical channel. The invention determines I/O concurrency according to the actual state of the system, avoids channels entering a wait-for-dispatch state because a FIFO is full, and improves the processing performance of the system.
Description
Technical Field
The invention belongs to the field of disk array reading and writing, and particularly relates to an I/O instruction management method and device based on a double-linked list structure.
Background
A storage system is the infrastructure that meets the read/write requirements of large numbers of data applications, and its input/output (I/O) performance directly determines data read/write efficiency, making it a key factor in the overall performance of those applications. The I/O issuing strategy of the application layer software, the management and scheduling of I/O instructions by RAID card software and hardware, the bandwidth and data exchange performance of the Expander expansion card, the bandwidth of the underlying physical channels, and the access rate of the disks themselves may each affect the I/O performance of the storage system.
As shown in fig. 1, in a typical disk array system, a host accesses up to thousands of physical disks, such as SAS (Serial Attached SCSI) solid state disks, SAS mechanical disks, and SATA (Serial AT Attachment) mechanical disks, through RAID (Redundant Array of Independent Disks) cards and expansion cards (Expanders). Host application layer software issues a large number of I/O (input/output) instructions to read and write the disks and waits for the disks' responses. Before being issued to a disk, the instructions are generally buffered in FIFO manner in an I/O queue in host memory (DDR). Management of the I/O instructions is completed by driver software or RAID card hardware, which selects an appropriate physical channel according to the target disk address of each I/O and the network topology of the storage system to complete transmission and reception of the I/O instructions; each physical channel shown in fig. 1 corresponds to one PHY. Commands in the I/O queue support out-of-order dispatch and out-of-order return. Typically, a disk array system needs to support concurrent execution of up to 4K-8K I/O instructions. Each instruction has a certain life cycle during execution, and each I/O instruction is executed in several stages.
For a SAS disk, for example, a write command is usually executed in four or more stages, and the link may be occupied by other I/Os during the intervals between stages. These I/Os therefore require very frequent scheduling and management, whereas the prior art manages I/O instructions mainly with a FIFO, i.e. a first-come-first-served mechanism.
Referring to fig. 2, to facilitate fast hardware access and pipelined operation, a region of memory is usually set aside in SRAM for storing the I/O instructions to be executed, the I/O instructions whose execution is suspended, the context information (command control words) related to the I/O instructions, and the execution completion status of the I/O instructions, all of which are likewise managed by a FIFO mechanism. The specific execution flow is as follows:
1. The host application layer software opens up a contiguous storage space in memory (DDR) for storing I/O instructions and issues the I/O instructions to an I/O queue FIFO located in memory (DDR).
2. When the underlying driver software determines that the Pending command queue FIFO has a free slot, it reads an I/O instruction from memory (DDR) and adds it to the Pending command queue FIFO.
3. The command execution state machine polls the states of all channels under the current port and checks whether an idle channel exists. If an idle channel is available, an instruction is fetched from the Pending command queue FIFO in SRAM and dispatched to that idle channel.
4. The state machine polls the next channel state and waits to send the next instruction. If a response frame signalling completion of some instruction is received, the completion information is written into an instruction completion FIFO to await software processing.
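The prior-art flow above can be sketched as follows. This is an illustrative model only, not the patent's implementation; the class and method names, and the FIFO depth, are assumptions chosen for the example. It shows why a fixed-depth pending FIFO blocks later I/Os once full and can only dispatch in arrival order.

```python
from collections import deque

class PendingFifo:
    """Minimal model of the fixed-depth Pending command queue FIFO (step 2)."""
    def __init__(self, depth):
        self.depth = depth      # fixed by hardware design; cannot grow at runtime
        self.q = deque()

    def push(self, io_cmd):
        if len(self.q) >= self.depth:
            return False        # queue full: the caller must stall (the congestion problem)
        self.q.append(io_cmd)
        return True

    def dispatch(self):
        # strict first come, first served: no priorities, no reordering
        return self.q.popleft() if self.q else None

fifo = PendingFifo(depth=2)
assert fifo.push("io-0") and fifo.push("io-1")
assert not fifo.push("io-2")        # full: even an urgent I/O is rejected
assert fifo.dispatch() == "io-0"    # always the oldest entry
```

The final assertions illustrate disadvantages 1-3 below: concurrency is capped by the depth, write-back of a rescheduled I/O can fail when the queue is full, and a high-priority command cannot jump ahead.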
It can be seen that the existing FIFO-based I/O instruction management mechanism has the following disadvantages:
1. The depth of the Pending FIFO is fixed, so the concurrency of I/O is constrained by the designed FIFO depth; the depth cannot be made too large, since a larger depth requires more hardware resources. This design limits the performance of the overall system and is not flexible enough.
2. Because the depth of the Pending FIFO is fixed and the number of I/Os issued by the host is far greater than that depth, the Pending FIFO is generally close to full for long periods. During I/O execution, an I/O that needs rescheduling may find the queue full when being written back into the Pending FIFO, causing channel congestion and degrading system performance.
3. Owing to its first-in-first-out nature, the FIFO mechanism cannot give priority treatment to I/O instructions and hinders flexible I/O scheduling. For example, I/Os awaiting rescheduling and Task instructions both have higher priority, but a FIFO cannot schedule these high-priority instructions first, which also limits the flexibility of the whole system.
4. For an I/O instruction that has entered the Pending FIFO but needs to be aborted (Abort) by application layer software, the FIFO mechanism cannot delete the instruction from the queue in advance; the context of the I/O must be examined at scheduling time to decide whether it should be discarded. Such operations waste system overhead and degrade system performance.
Disclosure of Invention
The invention aims to provide a hard-disk-oriented I/O instruction scheduling management method and device that optimize the FIFO-based I/O management scheme and solve the problem of flexibly scheduling and managing many concurrent I/Os in a disk array system.
Referring to fig. 4, in a first aspect, the present invention provides an I/O instruction management method based on a double-linked list structure, including:
reading an I/O instruction from an I/O instruction queue in a memory;
generating a device linked list according to the target disk device of the I/O instruction, and generating an instruction linked list corresponding to the target disk device according to the I/O instruction;
selecting a first physical channel in an idle state by polling the physical channels;
selecting a first disk device from the device linked list;
selecting a first I/O instruction from the instruction linked list corresponding to the first disk device;
and issuing the first I/O instruction and the first disk device to the first physical channel.
Preferably, generating the device linked list according to the target disk information of the I/O instruction and generating the instruction linked list corresponding to the target disk according to the I/O instruction further include:
if the target disk device of the current I/O instruction is not in the device linked list, inserting the current target disk device into the device linked list, and simultaneously inserting the current I/O instruction into the instruction linked list corresponding to the current target disk device;
and if the current target disk device is already in the device linked list, only inserting the current I/O instruction into the instruction linked list corresponding to the current target disk device.
Preferably, the instruction linked list includes an instruction sending sub-list and a data sending sub-list, where the instruction sending sub-list indicates an instruction sending request, that is, the next frame is an instruction frame, and the data sending sub-list indicates a data sending request, that is, the next frame is a write data frame;
the selecting a first I/O instruction from the instruction linked list corresponding to the first disk device further includes: according to a preconfigured priority, selecting a first instruction frame from the instruction sending sub-list corresponding to the first disk device, or selecting a first write data frame from the data sending sub-list corresponding to the first disk device.
Preferably, the inserting the current I/O instruction into the instruction linked list corresponding to the current target disk device further includes:
determining, according to the priority of the current I/O instruction, the position at which the current I/O instruction is mounted in the instruction linked list, where the position is the head, the tail, or the middle of the instruction linked list.
Preferably, the first I/O instruction is the head node of the instruction linked list corresponding to the first disk device.
Referring to fig. 5, the present invention provides, in a second aspect, an I/O instruction management apparatus based on a double-linked list structure, including:
the linked list generating module is configured to read an I/O instruction from an I/O instruction queue in memory, generate a device linked list according to the target disk device of the I/O instruction, and generate an instruction linked list corresponding to the target disk device according to the I/O instruction;
a channel selection module configured to select a first physical channel in an idle state by polling the physical channels;
a device selection module configured to select a first disk device from the device linked list;
the instruction selection module is configured to select a first I/O instruction from the instruction linked list corresponding to the first disk device;
and the instruction issuing selection module is configured to issue the first I/O instruction and the first disk device to the first physical channel.
Preferably, the linked list generating module is further configured to insert the current target disk device into the device linked list and insert the current I/O instruction into the instruction linked list corresponding to the current target disk device if the target disk device of the current I/O instruction is not in the device linked list; and if the current target disk device is already in the device linked list, to only insert the current I/O instruction into the instruction linked list corresponding to the current target disk device.
Preferably, the instruction linked list includes an instruction sending sub-list and a data sending sub-list, where the instruction sending sub-list indicates an instruction sending request, that is, the next frame is an instruction frame, and the data sending sub-list indicates a data sending request, that is, the next frame is a write data frame;
the instruction selection module is further configured to select, according to a preconfigured priority, a first instruction frame from the instruction sending sub-list corresponding to the first disk device or a first write data frame from the data sending sub-list corresponding to the first disk device.
Preferably, the linked list generating module is further configured to determine, according to the priority of the current I/O instruction, the position at which the current I/O instruction is mounted in the instruction linked list, where the position is the head, the tail, or the middle of the instruction linked list.
Preferably, the first I/O instruction is the head node of the instruction linked list corresponding to the first disk device.
The invention adopts a double linked list mechanism in place of the FIFO mechanism to manage I/O instructions. Compared with the prior art, the system handles I/O concurrency flexibly according to its actual running state, avoids channels entering a wait-for-dispatch state because a FIFO is full, dispatches I/O instructions of different priorities, reduces invalid system overhead, flexibly realizes dispatch of instruction frames and data frames at different priorities, and improves the overall flexibility and processing performance of the system.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 shows a schematic diagram of a typical disk array architecture according to the prior art.
FIG. 2 shows a schematic diagram of an I/O instruction scheduling and management scheme according to the prior art.
FIG. 3 is a diagram illustrating a double-linked list structure based on disk and I/O instructions according to an embodiment of the invention.
FIG. 4 is a flowchart illustrating a method for managing I/O instruction scheduling based on a double-linked list structure according to an embodiment of the present invention.
FIG. 5 is a block diagram of an I/O instruction schedule management apparatus based on a double-linked list structure according to an embodiment of the present invention.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are some, but not all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art from these embodiments without creative effort fall within the protection scope of the present invention.
As mentioned above, the FIFO-based I/O management mechanism handles high I/O concurrency poorly and, owing to hardware resource limitations, easily enters a state of waiting for a scheduling response, losing system I/O performance. In view of this, the invention optimizes the FIFO-based I/O management scheme to solve the problem of flexibly scheduling and managing many concurrent I/Os in a disk array system: an I/O instruction management method and apparatus based on a double linked list are provided, so that the concurrency of I/O instructions is decoupled from hardware resources, channel utilization loss is avoided, and the I/O performance of the system is improved.
Aiming at the problem of concurrently scheduling a large number of I/O instructions in a disk array system, the invention provides a double linked list management strategy that establishes a linked list of disks (ITCT_List) and a linked list of I/O instructions (IOST_List). A schematic diagram of a disk I/O instruction scheduling architecture including the double linked list is shown in FIG. 3. The description is as follows:
in a preferred embodiment, the specific process of the above-mentioned double-chain table management policy is described as follows:
Step 101: the host application layer software establishes an I/O instruction storage space in memory.
Specifically, the host application layer software opens up a first contiguous storage space in memory (DDR) as an I/O queue for storing I/O instructions. Preferably, the first storage space can store 4K-8K instructions, depending on the application scenario.
In addition, the host application layer software can also open up a second contiguous storage space for storing I/O instruction completion states. Both the first and second storage spaces are managed with a FIFO mechanism.
Step 102: the host application layer software issues the I/O instructions to the I/O queue in memory.
After the first storage space is ready, the host application layer software issues I/O commands to the first storage space in memory (DDR), i.e., the I/O queue FIFO. Since the host application layer software generates I/O instructions in order, the I/O instructions generated by each software thread are stored in the I/O queue in their original order. Preferably, in a system supporting multithreading, each software thread may establish a separate I/O queue.
Step 103: reading an I/O instruction from the I/O queue FIFO, generating a device linked list according to the target disk information of the I/O instruction, and generating an instruction linked list according to the I/O instruction.
In a particular implementation, the command execution state machine may initialize an empty device linked list (ITCT_List) for maintaining the information corresponding to each disk device, with each node of the device linked list associated with a corresponding instruction linked list. Each time an I/O instruction is read from the I/O queue FIFO in host memory (DDR), the command execution state machine parses the target disk information of the instruction and inserts the current target disk device identifier into the device linked list. The nodes in the list correspond one-to-one to target disk device identifiers; that is, only one node is established in the device linked list ITCT_List for each target disk device identifier (hereinafter also called the device number ICT) associated with an I/O instruction. The creation of and insertion into linked lists are well known in the art and are not described further here. Owing to the FIFO first-in-first-out mechanism, multiple I/O instructions are inserted into the linked list in queue order, where they wait to be dispatched. For the multiple I/O queue FIFOs supporting multithreading described above, the double linked list can be populated by polling the queues.
Specifically, as shown in fig. 3, one implementation of associating the device linked list with the instruction linked list is to associate each device linked list node with the head of its instruction linked list. If, when reading an I/O instruction, the current device is not in the device linked list, the current target disk device is inserted into the device linked list and at the same time the current I/O instruction is inserted into the instruction linked list corresponding to that device; if the current target disk device is found to be already in the device linked list, the I/O instruction that was read is mounted into the instruction linked list under the current disk device, for example appended to the tail of the instruction linked list through a linked list insertion operation.
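The construction of the two-level structure in step 103 can be sketched as follows. This is a minimal illustration, not the hardware implementation: ordinary Python dicts and lists stand in for the ITCT_List and IOST_List nodes, and the tuple layout of the I/O queue entries is an assumption for the example.

```python
def build_linked_lists(io_queue):
    """Model of step 103: one device node per target disk (ICT), each owning
    an instruction list of the I/Os (by IPTT) aimed at that disk."""
    itct_list = {}                   # device id (ICT) -> instruction list; dicts keep insertion order
    for iptt, ict in io_queue:       # FIFO order of the I/O queue is preserved
        if ict not in itct_list:     # first I/O for this disk: insert a device node
            itct_list[ict] = []
        itct_list[ict].append(iptt)  # mount the I/O at the tail of the device's list
    return itct_list

io_queue = [(0, "disk-A"), (1, "disk-B"), (2, "disk-A")]
lists = build_linked_lists(io_queue)
assert list(lists) == ["disk-A", "disk-B"]   # exactly one node per target disk
assert lists["disk-A"] == [0, 2]             # instructions kept in queue order
```

Note how the second I/O for disk-A reuses the existing device node and only extends its instruction list, matching the two branches described above.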
Unlike a fixed-size FIFO queue, the invention stores I/O instructions in a variable-length linked list structure that in theory supports inserting any number of I/O instructions, and the list can be empty when there is no instruction awaiting dispatch. The storage space is therefore used reasonably and efficiently; more importantly, the situation in which a full FIFO frequently forces channels into a state of waiting for a scheduling response is avoided, and the concurrent processing performance of the disk array system is improved.
As a further example, the instruction linked list is divided into two sending sub-lists, an instruction sending sub-list and a data sending sub-list. When an instruction sending sub-list node is accessed, the next frame to be sent is an instruction frame; when a data sending sub-list node is accessed, the next frame to be sent is a write data frame. The priorities of the instruction sub-list and the data sub-list may be preconfigured. Generally the data sub-list has higher priority than the instruction sub-list, i.e., data is sent before instructions, since data must depend on some instruction that was sent previously. Further, high-priority instructions may be preconfigured to be sent in preference to data. In one specific priority determination scheme, the heads of the instruction sub-list and the data sub-list may be read at the same time, the priorities of the two head nodes compared, and the unlink operation executed on the sub-list with the higher priority.
By separating the data frame chain from the instruction frame chain, dispatch of instruction frames and data frames can be realized flexibly according to different priority requirements, while the addressing time of the linked list is reduced.
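The head-to-head comparison between the two sub-lists can be sketched as follows. This is a hedged illustration with assumed names; the default rule that write data beats ordinary commands (because data always depends on a previously sent command), with an override for a high-priority command at the head, follows the description above.

```python
def pick_next_frame(cmd_sublist, data_sublist, cmd_head_high_prio=False):
    """Compare the heads of the instruction and data sending sub-lists and
    unlink the winner. Returns (frame_type, entry) or None if both empty."""
    if cmd_sublist and (cmd_head_high_prio or not data_sublist):
        return ("cmd_frame", cmd_sublist.pop(0))        # command wins
    if data_sublist:
        return ("write_data_frame", data_sublist.pop(0))  # default: data wins
    return None

assert pick_next_frame(["c0"], ["d0"]) == ("write_data_frame", "d0")
assert pick_next_frame(["c0"], ["d0"], cmd_head_high_prio=True) == ("cmd_frame", "c0")
assert pick_next_frame([], []) is None
```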
With the hierarchical linked list structure of the above embodiment, when traversing the double linked list the command execution state machine first locates a target disk device in the device linked list and then addresses the head of the instruction linked list belonging to that device. Because the multiple I/O instructions stored in the instruction linked list all target the current disk device, every node in the instruction linked list, including the data sending sub-list nodes and the instruction sending sub-list nodes, is located indirectly; that is, all I/O instructions awaiting dispatch that are associated with the current disk device are traversed.
Step 104: selecting a first physical channel in an idle state by polling the physical channels.
Because multiple physical channels (PHYs) exist in a wide-port scenario, the command execution state machine polls all physical channels, judges the busy/idle state of each, and follows the policy of keeping sending channels from idling as far as possible. That is, if the currently polled channel is in an idle state, it is determined to be the first physical channel for sending.
Step 105: selecting a first disk device from the device linked list and executing the sending task of the first disk device.
Selecting a device is essentially selecting a device linked list (ITCT_List) node: the command execution state machine selects one device from ITCT_List and then performs that device's sending task. Preferably, the device at the head node is selected as the first disk device; that is, when performing disk scheduling, the search starts from the head of the list and the first disk found is selected as the first disk device. Compared with the traditional FIFO structure, the device linked list avoids invalid instruction scheduling caused by busy disks.
In an alternative embodiment, to balance the concurrent I/O of the devices, the selection policy between ITCT_List nodes is RR, a round-robin algorithm; that is, along the direction of the linked list, the devices' sending requests are responded to in turn.
In another alternative embodiment, on top of the RR policy, the disk device is selected according to the busy/idle state of the current device, i.e., whether the disk is performing a data transceiving operation on a link established by another channel. The search starts from the head of the list: if a disk's state is idle, that device is selected as the first disk device; if it is busy, the next disk device is examined, until an idle disk is found, which becomes the first disk device for the current time slice.
More specifically, the busy/idle states of all disks may be maintained as a bitmap, i.e., a 1-bit busy/idle flag is established for each disk: if the disk is in a successfully established link state and performing a data transceiving operation, the flag indicates busy; otherwise it indicates idle.
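The round-robin device pick with a busy/idle bitmap can be sketched as follows. This is an illustrative model under assumed names; a Python set stands in for the 1-bit-per-disk bitmap, and the cursor models the round-robin position between time slices.

```python
def select_device(devices, busy_bitmap, start):
    """Walk the device ring at most once from the round-robin cursor.

    devices: device ids in linked-list order; busy_bitmap: ids whose busy
    flag is set; start: round-robin cursor. Returns (device, next_cursor),
    or (None, start) if every disk is busy this time slice."""
    n = len(devices)
    for k in range(n):
        i = (start + k) % n
        if devices[i] not in busy_bitmap:       # idle disk found
            return devices[i], (i + 1) % n      # advance cursor past it
    return None, start                          # all busy: skip this slice

devs = ["d0", "d1", "d2"]
dev, cur = select_device(devs, busy_bitmap={"d0"}, start=0)
assert dev == "d1" and cur == 2   # d0 is busy, so the next idle disk is picked
```

Skipping busy disks up front is what avoids the invalid scheduling a FIFO would perform: a command for a busy disk is never dispatched only to bounce back.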
Step 106: selecting a first I/O instruction from the instruction linked list.
Similarly, selecting an I/O instruction is essentially selecting an I/O instruction linked list (IOST_List) node. Since the heads of a device's data sending sub-list and instruction sending sub-list are kept within the device linked list ITCT_List, the head of the I/O instruction linked list IOST_List can be addressed once the ITCT_List node has been selected, thereby determining the I/O instruction to be processed. Preferably, to achieve in-order scheduling, the first I/O instruction is always the I/O instruction at the head node of the I/O instruction linked list.
In addition, each pending I/O instruction may be mapped to one of the two sub-lists to enable selection between data and instructions. When the pending-request I/O instruction linked list of a disk device (Device) contains both an instruction sending request (instruction sending sub-list) and a data sending request (data sending sub-list), step 106 may further include selecting one of the two sub-lists to respond to.
In one embodiment, the priority may be set by software configuration; for example, instruction sending requests may take precedence under certain conditions and data sending requests under others. Those skilled in the art can choose a specific priority configuration and the corresponding dispatch strategy according to the actual state of the system.
Step 107: issuing the selected first I/O instruction and the first disk device to the selected first physical channel.
In the I/O instruction queue FIFO, different I/O instructions are uniquely identified by IPTT (Initiator Port Transfer Tag) numbers, and different disk devices are uniquely identified by the disk device number ICT. The command execution state machine therefore issues the I/O instruction number IPTT, the disk device number ICT, and the related information to the selected physical channel.
As a preferred embodiment, step 107 further includes reading the IOST and ITCT context information of the selected I/O and sending it to the selected channel, so that the channel controls the sending attributes of the related command (Command) according to that context.
Furthermore, after a send operation for a given I/O instruction has been performed on the selected channel, the command execution state machine must decide the subsequent operation according to the channel's feedback. The method may therefore further comprise:
Step 108: upon receiving an instruction frame returned from a physical channel, if the I/O instruction needs to be re-dispatched, re-adding it to the I/O instruction linked list under the corresponding disk device.
Specifically, if a successfully dispatched instruction requires no retransmission by the state machine, an unlink operation is performed and the I/O instruction is deleted from the linked list. If, however, an instruction frame is returned from a channel, the command execution state machine retrieves the context information of the corresponding I/O instruction via the IPTT number carried in the frame and acts on the frame's content: for example, depending on the type of data returned, it decides whether to add the I/O instruction back to the I/O instruction linked list (e.g., by inserting it at the chain tail) to await the next dispatch.
If an I/O instruction fails to issue, or a response is received in which the disk requests the host to perform a write-data operation, the I/O must be automatically re-dispatched or re-enqueued by the command execution state machine. For example, upon receipt of the XFER_RDY frame of an SSP write command, it is determined that the write data frames still need to be dispatched. At this point, the IPTT number of the current I/O instruction is written back into the I/O instruction linked list of the corresponding disk device.
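The re-enqueue decision can be sketched as follows. This is an illustrative Python model, not the hardware; the dict-based frame and the field names `type`, `ict`, and `iptt` are assumptions for the sketch.

```python
def on_frame_returned(frame, io_lists):
    """Decide the follow-up for an instruction frame returned by a channel.
    `io_lists` maps a disk device number (ICT) to that device's pending
    I/O instruction list; the frame field names are illustrative only."""
    if frame["type"] == "XFER_RDY":
        # The target asks for write data: put the IPTT back at the tail of
        # its device's instruction linked list so the write data frames
        # are dispatched on a later pass.
        io_lists[frame["ict"]].append(frame["iptt"])
        return "requeued"
    if frame["type"] == "RESPONSE":
        # Completed successfully: the I/O is unlinked, nothing to resend.
        return "completed"
    return "ignored"
```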
As an alternative embodiment, in step 103, mounting the current I/O instruction into the instruction linked list under the current disk device may further involve determining, according to the priority of the I/O instruction, the specific position at which it is mounted. As is well known, linked-list insertion can occur at the head, at the tail, or in the middle of the list. For example, a high-priority I/O instruction can be inserted directly at the head of the I/O instruction linked list, guaranteeing that it is dispatched first, in the preset order, at the next dispatch. Thus, for an I/O instruction of any priority, the command execution state machine can select the insertion position according to how urgently the instruction must be dispatched next. Scheduling I/O instructions by priority in this way improves the flexibility of the system's instruction management.
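Priority-based mounting can be sketched as follows, again as an illustrative Python model rather than the hardware implementation; the three-level `priority` labels and the middle-insert policy are assumptions.

```python
def insert_by_priority(chain, iptt, priority):
    """Mount an I/O instruction (identified by its IPTT) into a device's
    instruction list at a position chosen by priority; list order models
    dispatch order, so a head insert is dispatched next."""
    if priority == "high":
        chain.insert(0, iptt)                # head insert
    elif priority == "low":
        chain.append(iptt)                   # tail insert
    else:
        chain.insert(len(chain) // 2, iptt)  # middle insert (illustrative policy)
```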
As an optional embodiment, after step 103, if the host application-layer software needs to abort a certain I/O instruction (Abort), the command execution state machine obtains the IPTT number and target disk number of the target I/O instruction, searches the device linked list to find the corresponding disk, then searches the I/O instruction linked list under that disk device to find the instruction matching the IPTT number, and performs an unlink operation. The Abort of the I/O instruction can thus be completed early, without affecting the issuing and receiving of the link's other traffic. Node deletion in a linked list is well known in the art and is not described further here. Clearly, a linked list supports deletion of any node and is not limited to first-in-first-out in-order deletion; compared with the traditional FIFO mechanism, this reduces the system's wasted overhead and improves the flexibility of instruction management.
Optionally, after step 103, if the host application-layer software needs to abort all I/O instructions of a certain disk device (Abort), the corresponding disk device node is deleted directly from the device linked list. For example, the system sometimes encounters a hardware fault on a device, and the command execution state machine then needs to abort all of that device's I/O instructions. Because all of the device's I/O instructions are stored in the instruction linked list under that device, and that instruction linked list can only be addressed through the device's node in the device linked list, the instruction-list nodes under the device need not be deleted one by one; compared with the traditional FIFO mechanism, this improves instruction-management efficiency.
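Both abort paths can be sketched together. This is an illustrative Python model, assuming `device_lists` maps device numbers (ICT) to per-device instruction lists of IPTTs; the real design uses hardware linked-list nodes.

```python
def abort_io(device_lists, ict, iptt):
    """Abort a single I/O: address the device node first, then unlink the
    matching instruction; other pending entries are untouched."""
    chain = device_lists.get(ict)
    if chain is None or iptt not in chain:
        return False
    chain.remove(iptt)   # arbitrary-position deletion, impossible in a plain FIFO
    return True

def abort_device(device_lists, ict):
    """Abort every I/O of a device by deleting its device node; the whole
    instruction list under it becomes unreachable in one step."""
    device_lists.pop(ict, None)
```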
It should be noted that the overall hardware architecture for I/O instruction scheduling shown in fig. 3 only illustrates, and does not limit, the technical solution of the present invention. Those skilled in the art will understand that the structural relationship of the software and hardware modules, the format of the instruction frames, the number of instruction FIFOs, the number of disk devices, the number of physical channels, and so on may all be adjusted as needed on the basis of the present invention, which should not be limited to the specific structures and parameters of the above examples.
The above method optimizes the concurrent scheduling of large numbers of I/O instructions in a disk array system and improves the I/O efficiency of the whole system. Its advantages mainly include the following aspects:
1. Because a double-linked-list mechanism is used to manage I/O instructions, the system can flexibly handle I/O concurrency according to its actual running state, avoiding the situation in which a full FIFO causes channels to frequently enter a state of waiting for a dispatch response, and improving the system's overall I/O performance.
2. Because a double-linked-list mechanism is used to manage I/O instructions, dispatch of I/O instructions at different priorities can be realized, which a FIFO mechanism cannot provide, improving the system's flexibility.
3. The double-linked-list mechanism eases software management of I/O instructions: for operations such as Abort, target instructions can be deleted quickly, reducing the system's wasted overhead, improving system performance, and saving power.
4. The I/O instruction linked list of the present invention separates the data-frame chain from the instruction-frame chain, so that instruction frames and data frames can be flexibly dispatched at different priorities while the list addressing time is reduced.
According to another aspect of the present invention, there is correspondingly provided an I/O instruction management apparatus based on a double linked list structure. The apparatus may be embodied as a command execution state machine. It includes:
a linked list generating module, used for reading an I/O instruction from the memory I/O queue FIFO, generating a device linked list according to the target disk information of the I/O instruction, and generating an instruction linked list according to the I/O instruction;
a channel selection module, used for polling the physical channels and selecting a first physical channel in an idle state;
a device selection module, used for selecting a first disk device in the device linked list and executing the sending task of the first disk device;
an instruction selection module, used for selecting a first I/O instruction in the I/O instruction linked list;
and an instruction issuing module, used for issuing the selected first I/O instruction and the first disk device to the selected first physical channel.
In addition, before the linked list generating module reads an I/O instruction from the memory I/O queue FIFO, the host application layer establishes an I/O instruction queue in the memory in advance and issues I/O instructions into it;
wherein the host application layer opens up a first contiguous storage space in memory (DDR) as the I/O queue for storing I/O instructions. Preferably, the first storage space can hold 4-8K instructions, depending on the application scenario. In addition, the host application-layer software may also open up a second contiguous storage space for storing I/O instruction completion states. Both the first and second storage spaces are managed with a FIFO mechanism. Once the first storage space is ready, the host application-layer software issues I/O instructions into it, i.e., into the I/O queue FIFO located in memory (DDR).
In a specific implementation, the linked list generating module of the command execution state machine may initialize an empty device linked list (ITCT_List) for maintaining the information of each disk device. The device linked list is associated with an instruction linked list. Each time an I/O instruction is read from the I/O queue FIFO in host memory (DDR), the target disk information of the instruction is parsed and the current target disk device identifier is inserted into the device linked list. Nodes in the list correspond one-to-one with target disk device identifiers; that is, only one node is created in the device linked list ITCT_List for each target disk device identifier (hereinafter also called the device number ICT) associated with an I/O instruction. The creation of, and insertion into, linked lists is well known in the art and is not described further here.
As shown in fig. 3, one specific way to associate the device linked list with the instruction linked list is to associate each device list node with the head of its instruction linked list. If, when an I/O instruction is read, its target device is not yet in the device linked list, the current target disk device is inserted into the device linked list and the current I/O instruction is simultaneously inserted into the instruction linked list corresponding to it; if the current target disk device is already found in the device linked list, the newly read I/O instruction is simply mounted into the instruction linked list under that device, for example appended to the chain tail via a list insertion operation.
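The insert-if-absent association can be sketched as follows. This is an illustrative Python model, not the hardware: `device_lists` stands in for ITCT_List, and a plain Python list stands in for each IOST_List.

```python
def enqueue_io(device_lists, ict, iptt):
    """File one newly read I/O instruction under its target disk device.
    `device_lists` models ITCT_List as an insertion-ordered dict keyed by
    device number (ICT); each value models that device's IOST_List."""
    if ict not in device_lists:
        device_lists[ict] = []       # device absent: create its node first
    device_lists[ict].append(iptt)   # then mount the instruction at the chain tail
```

Python dicts preserve insertion order, which here mimics the order in which device nodes are added to the device linked list.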
Unlike a fixed-size FIFO queue, the present invention stores I/O instructions in a variable-length linked-list structure: in theory any number of I/O instructions can be inserted into the list, and the list may be empty when there are no instructions awaiting dispatch. Storage space is thus used reasonably and efficiently; more importantly, the situation in which a full FIFO causes channels to frequently enter a state of waiting for a dispatch response is avoided, improving the concurrent processing performance of the disk array system.
As a further example, the linked list generating module of the command execution state machine is further configured to divide the instruction linked list into two sending sublists: an instruction-sending sublist and a data-sending sublist. When an instruction-sending sublist node is accessed, the next frame to be sent is an instruction frame; when a data-sending sublist node is accessed, the next frame to be sent is a write data frame. By separating the data-frame chain from the instruction-frame chain, instruction frames and data frames can be flexibly dispatched according to different priority requirements while the list addressing time is reduced.
With the hierarchical linked-list structure of the above embodiment, traversing the double linked list means first locating a target disk device in the device linked list and then addressing the head of the instruction linked list belonging to that device. Because all I/O instructions stored in that instruction linked list target the current disk device, each of its nodes, including data-sending sublist nodes and instruction-sending sublist nodes, is thereby indirectly located; that is, all pending I/O instructions associated with the current disk device are traversed.
As for polling of the physical channels: because multiple physical channels (PHYs) exist in a wide-port scenario, the channel selection module of the command execution state machine is configured to poll all the physical channels, determine the busy/idle state of each, and apply a policy that keeps the sending channels from idling as far as possible. That is, if polling finds the current channel in an idle state, the channel selection module takes it as the first physical channel for sending.
The selection of the first disk device is essentially the selection of a device linked list (ITCT_List) node: one device is selected from ITCT_List, and that device's sending task is then executed. Preferably, to ensure some balance among the devices' concurrent I/O, the selection policy over ITCT_List is a round-robin (RR) algorithm; that is, the device selection module of the command execution state machine is configured to respond to the devices' sending requests in sequence along the direction of the linked list.
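The channel polling and round-robin device stepping described above can be sketched as two small helpers. This is an illustrative Python model, assuming a boolean busy flag per PHY and an index-based device list; the hardware walks actual linked-list nodes instead.

```python
def pick_idle_channel(channel_busy):
    """Poll every PHY in turn and return the index of the first idle one,
    or None when all channels are busy."""
    for phy, busy in enumerate(channel_busy):
        if not busy:
            return phy
    return None

def next_device(num_devices, last):
    """Round-robin step along the device linked list so each device's
    sending requests are answered in turn."""
    return (last + 1) % num_devices
```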
Similarly, the selection of the first I/O instruction is essentially the selection of an I/O instruction linked list (IOST_List) node. Because the heads of a given device's data-sending sublist and instruction-sending sublist are maintained within the device linked list ITCT_List, the head of the I/O instruction linked list IOST_List can be addressed once the device selection module has selected the ITCT_List node, enabling the instruction selection module to determine the I/O instruction to be processed. Preferably, to achieve in-order scheduling, the first I/O instruction is always the I/O instruction at the head node of the list.
In addition, each pending I/O instruction is mapped to one of two sublists, enabling a choice between data and instructions. When the pending-request I/O instruction linked list of a given disk device (Device) contains both an instruction-sending request (on the instruction-sending sublist) and a data-sending request (on the data-sending sublist), the instruction selection module of the command execution state machine is further configured to select one of the two sublists to respond to. In a specific embodiment, the priority may be set through software configuration; for example, the instruction-sending request may be answered first under certain conditions and the data-sending request first under others. Those skilled in the art can formulate a corresponding dispatch strategy, and hence the specific priority setting, according to the actual state of the system.
In the pending command queue FIFO, different I/O instructions are uniquely identified by IPTT (Initiator Port Transfer Tag) numbers, and different disk devices are uniquely identified by the disk device number ICT. In the actual dispatch stage, therefore, the instruction issuing module of the command execution state machine is configured to issue the I/O instruction number IPTT, the disk device number ICT, and the related information to the selected physical channel.
As a preferred embodiment, the instruction issuing module of the command execution state machine is further configured to read the IOST and ITCT context information of the selected I/O and issue it to the selected channel, so that the channel can control the sending attributes of the related command (Command) according to that context.
In addition, after a send operation for a given I/O instruction has been performed on the selected channel, the subsequent operation must be decided according to the channel's feedback. The command execution state machine therefore further comprises an instruction return module configured to:
upon receiving an instruction frame returned from a physical channel, if the I/O instruction needs to be re-dispatched, add it back to the I/O instruction linked list under the corresponding disk device.
Specifically, when an instruction frame is returned from a channel, the context information of the corresponding I/O instruction is retrieved via the IPTT number carried in the frame, and the appropriate operation is performed according to the frame's content: for example, depending on the type of data returned, it is decided whether to add the I/O instruction back to the I/O instruction linked list (e.g., by inserting it at the chain tail) to await the next dispatch.
For example, upon receipt of the XFER_RDY frame of an SSP write command, it is determined that the write data frames still need to be dispatched. At this point, the IPTT number of the current I/O instruction is written back into the I/O instruction linked list of the corresponding disk device.
As an optional embodiment, the linked list generating module of the command execution state machine is further configured to determine, when mounting the current I/O instruction into the instruction linked list of the current disk device, the specific position at which it is mounted, according to the priority of the I/O instruction. For example, a high-priority I/O instruction can be inserted directly at the head of the I/O instruction linked list, guaranteeing that it is dispatched first, in the preset order, at the next dispatch. Thus, for an I/O instruction of any priority, the linked list generating module can select the insertion position according to how urgently the instruction must be dispatched next. Scheduling I/O instructions by priority in this way improves the flexibility of the system's instruction management.
As an optional embodiment, the command execution state machine further includes an instruction abort module. After the device linked list has been generated according to the target disk information of the I/O instructions and the instruction linked lists have been generated according to the I/O instructions, if the host application-layer software needs to abort a certain I/O instruction (Abort), the instruction abort module is configured to obtain the IPTT number and target disk number of the target I/O instruction, search the device linked list to find the corresponding disk device, then search the I/O instruction linked list under that device to find the instruction matching the IPTT number, and perform an unlink operation. The Abort of the I/O instruction can thus be completed early, without affecting the issuing and receiving of the link's other traffic. Clearly, a linked list supports deletion of any node; compared with the traditional FIFO mechanism, this reduces the system's wasted overhead and improves the flexibility of instruction management.
Optionally, after the linked list generating module of the command execution state machine has generated the device linked list according to the target disk information of the I/O instructions and the instruction linked lists according to the I/O instructions, if the host application-layer software needs to abort all I/O instructions of a certain disk device (Abort), the instruction abort module is further configured to delete the corresponding disk device node directly from the device linked list. For example, the system sometimes encounters a hardware fault on a device, and all of that device's I/O instructions must then be aborted. Because all of the device's I/O instructions are stored in the instruction linked list under that device, and that instruction linked list can only be addressed through the device's node in the device linked list, the instruction-list nodes under the device need not be deleted one by one; compared with the traditional FIFO mechanism, this improves instruction-management efficiency.
Further, those skilled in the art will appreciate that the architectural diagram shown in FIG. 3 does not limit the disk array architecture of the present invention, which may include more or fewer components, or combinations of components, as known in the art.
Compared with the prior art, the I/O instruction management apparatus based on a double linked list structure can flexibly handle I/O concurrency according to the actual running state, avoids the situation in which a full FIFO forces a channel to wait for a dispatch response, realizes scheduling of I/O instructions at different priorities, reduces the system's wasted overhead, flexibly implements a dispatch mechanism for instruction frames and data frames at different priorities, and improves the overall flexibility and processing performance of the system.
Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that the technical solutions recorded in the foregoing embodiments may still be modified, or some of their technical features equivalently replaced, without such modifications or replacements causing the essence of the corresponding technical solutions to depart from the spirit and scope of the embodiments of the present invention.
Claims (10)
1. An I/O instruction management method based on a double linked list structure is characterized by comprising the following steps:
reading an I/O instruction from an I/O instruction queue in a memory;
generating a device linked list according to the target disk device of the I/O instruction, and generating an instruction linked list corresponding to the target disk device according to the I/O instruction;
selecting a first physical channel in an idle state by polling the physical channels;
selecting a first disk device from the device linked list;
selecting a first I/O instruction from an instruction chain table corresponding to the first disk device;
and issuing the first I/O instruction and the first disk device to the first physical channel.
2. The I/O instruction management method based on the double linked list structure according to claim 1, wherein the generating a device linked list according to the target disk device of the I/O instruction and generating an instruction linked list corresponding to the target disk device further comprises:
if the target disk device of the current I/O instruction is not in the device linked list, inserting the current target disk device into the device linked list and simultaneously inserting the current I/O instruction into the instruction linked list corresponding to the current target disk device;
and if the current target disk device is already in the device linked list, only inserting the current I/O instruction into the instruction linked list corresponding to the current target disk device.
3. The I/O instruction management method based on the double linked list structure according to claim 1, wherein the instruction linked list comprises an instruction-sending sublist and a data-sending sublist, the instruction-sending sublist indicating an instruction-sending request, i.e., that the next frame is an instruction frame, and the data-sending sublist indicating a data-sending request, i.e., that the next frame is a write data frame;
the selecting a first I/O instruction from the instruction linked list corresponding to the first disk device further comprises: according to a preconfigured priority, selecting a first instruction frame from the instruction-sending sublist corresponding to the first disk device, or selecting a first write data frame from the data-sending sublist corresponding to the first disk device.
4. The I/O instruction management method based on the double linked list structure according to claim 1, wherein the inserting the current I/O instruction into the instruction linked list corresponding to the current target disk device further comprises:
determining, according to the priority of the current I/O instruction, the position at which the current I/O instruction is mounted in the instruction linked list, the position comprising the head, the tail, or the middle of the instruction linked list.
5. The method according to claim 4, wherein the first I/O instruction is the head node of the instruction linked list corresponding to the first disk device.
6. An I/O instruction management apparatus based on a double linked list structure, comprising:
a linked list generating module, configured to read an I/O instruction from an I/O instruction queue in the memory, generate a device linked list according to the target disk device of the I/O instruction, and generate an instruction linked list corresponding to the target disk device according to the I/O instruction;
a channel selection module, configured to select a first physical channel in an idle state by polling the physical channels;
a device selection module, configured to select a first disk device from the device linked list;
an instruction selection module, configured to select a first I/O instruction from the instruction linked list corresponding to the first disk device;
and an instruction issuing module, configured to issue the first I/O instruction and the first disk device to the first physical channel.
7. The I/O instruction management apparatus based on the double linked list structure according to claim 6, wherein
the linked list generating module is further configured to: if the target disk device of the current I/O instruction is not in the device linked list, insert the current target disk device into the device linked list and simultaneously insert the current I/O instruction into the instruction linked list corresponding to the current target disk device; and if the current target disk device is already in the device linked list, only insert the current I/O instruction into the instruction linked list corresponding to the current target disk device.
8. The I/O instruction management apparatus based on the double linked list structure according to claim 6, wherein the instruction linked list comprises an instruction-sending sublist and a data-sending sublist, the instruction-sending sublist indicating an instruction-sending request, i.e., that the next frame is an instruction frame, and the data-sending sublist indicating a data-sending request, i.e., that the next frame is a write data frame;
the instruction selection module is further configured to select, according to a preconfigured priority, a first instruction frame from the instruction-sending sublist corresponding to the first disk device, or a first write data frame from the data-sending sublist corresponding to the first disk device.
9. The I/O instruction management apparatus based on the double linked list structure according to claim 6, wherein
the linked list generating module is further configured to determine, according to the priority of the current I/O instruction, the position at which the current I/O instruction is mounted in the instruction linked list, the position comprising the head, the tail, or the middle of the instruction linked list.
10. The I/O instruction management apparatus based on the double linked list structure according to claim 9, wherein the first I/O instruction is the head node of the instruction linked list corresponding to the first disk device.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011414847.6A CN112463064B (en) | 2020-12-07 | 2020-12-07 | I/O instruction management method and device based on double linked list structure |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112463064A true CN112463064A (en) | 2021-03-09 |
CN112463064B CN112463064B (en) | 2022-02-08 |
Family
ID=74801159
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011414847.6A Active CN112463064B (en) | 2020-12-07 | 2020-12-07 | I/O instruction management method and device based on double linked list structure |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112463064B (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116360675A (en) * | 2022-11-29 | 2023-06-30 | 无锡众星微系统技术有限公司 | SAS frame routing method and device in wide port scene |
CN117389733A (en) * | 2023-10-25 | 2024-01-12 | 无锡众星微系统技术有限公司 | SAS I/O scheduling method and device for reducing switch chain overhead |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN86103678A (en) * | 1985-06-28 | 1986-12-31 | 惠普公司 | Be used for providing the device of I/O notice to processor |
CN101241492A (en) * | 2007-02-06 | 2008-08-13 | 中兴通讯股份有限公司 | EMS memory data storage apparatus possessing capacity dynamic control function and its accomplishing method |
US20130007300A1 (en) * | 2011-06-30 | 2013-01-03 | International Business Machines Corporation | Facilitating transport mode input/output operations between a channel subsystem and input/output devices |
CN103823636A (en) * | 2012-11-19 | 2014-05-28 | 苏州捷泰科信息技术有限公司 | IO scheduling method and device |
CN105388861A (en) * | 2015-09-25 | 2016-03-09 | 深圳一电航空技术有限公司 | Method and system for controlling devices in internet of things |
CN108595282A (en) * | 2018-05-02 | 2018-09-28 | 广州市巨硅信息科技有限公司 | A kind of implementation method of high concurrent message queue |
CN111158601A (en) * | 2019-12-30 | 2020-05-15 | 北京浪潮数据技术有限公司 | IO data flushing method, system and related device in cache |
US20200228625A1 (en) * | 2019-01-11 | 2020-07-16 | EMC IP Holding Company LLC | Slo i/o delay prediction |
US10740028B1 (en) * | 2017-08-30 | 2020-08-11 | Datacore Software Corporation | Methods and apparatus for LRU buffer management in performing parallel IO operations |
Legal events: 2020-12-07 — application CN202011414847.6A granted as CN112463064B (status: Active)
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116360675A (en) * | 2022-11-29 | 2023-06-30 | Wuxi Zhongxing Microsystem Technology Co., Ltd. | SAS frame routing method and device in wide port scene |
CN116360675B (en) * | 2022-11-29 | 2023-10-24 | Wuxi Zhongxing Microsystem Technology Co., Ltd. | SAS frame routing method and device in wide port scene |
CN117389733A (en) * | 2023-10-25 | 2024-01-12 | Wuxi Zhongxing Microsystem Technology Co., Ltd. | SAS I/O scheduling method and device for reducing switch chain overhead |
CN117389733B (en) * | 2023-10-25 | 2024-04-26 | Wuxi Zhongxing Microsystem Technology Co., Ltd. | SAS I/O scheduling method and device for reducing switch chain overhead |
Also Published As
Publication number | Publication date |
---|---|
CN112463064B (en) | 2022-02-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7808999B2 (en) | Method and apparatus for out-of-order processing of packets using linked lists | |
US8307170B2 (en) | Information processing method and system | |
US5687372A (en) | Customer information control system and method in a loosely coupled parallel processing environment | |
US9841913B2 (en) | System and method for enabling high read rates to data element lists | |
CN112463064B (en) | I/O instruction management method and device based on double linked list structure | |
US6633954B1 (en) | Method for enhancing host application performance with a DASD using task priorities | |
US7234004B2 (en) | Method, apparatus and program product for low latency I/O adapter queuing in a computer system | |
US20090319634A1 (en) | Mechanism for enabling memory transactions to be conducted across a lossy network | |
JPH08241263A (en) | I/O request processing computer system and processing method | |
CN111221759B (en) | Data processing system and method based on DMA | |
US6636951B1 (en) | Data storage system, data relocation method and recording medium | |
US20090070560A1 (en) | Method and Apparatus for Accelerating the Access of a Multi-Core System to Critical Resources | |
EP1554644A2 (en) | Method and system for tcp/ip using generic buffers for non-posting tcp applications | |
EP0747813A2 (en) | Customer information control system and method with temporary storage queuing functions in a loosely coupled parallel processing environment | |
CN112506431B (en) | I/O instruction scheduling method and device based on disk device attributes | |
EP0747814A1 (en) | Customer information control system and method with transaction serialization control functions in a loosely coupled parallel processing environment | |
US10318362B2 (en) | Information processing apparatus, information processing method, and non-transitory computer-readable storage medium | |
US20050076177A1 (en) | Storage device control unit and method of controlling the same | |
US5630133A (en) | Customer information control system and method with API start and cancel transaction functions in a loosely coupled parallel processing environment | |
US20220222013A1 (en) | Scheduling storage system tasks to promote low latency and sustainability | |
US20180309687A1 (en) | Information processing apparatus, information processing method, and non-transitory computer-readable storage medium | |
US6108694A (en) | Memory disk sharing method and its implementing apparatus | |
US9665519B2 (en) | Using a credits available value in determining whether to issue a PPI allocation request to a packet engine | |
CN116483258A (en) | System comprising a storage device | |
CN111858418B (en) | Memory communication method and device based on remote direct memory access RDMA |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||