CN112463028A - I/O processing method, system, equipment and computer readable storage medium - Google Patents

I/O processing method, system, equipment and computer readable storage medium

Info

Publication number
CN112463028A
CN112463028A (application CN202011184858.XA)
Authority
CN
China
Prior art keywords
cpu
processed
resource amount
storage system
target
Prior art date
Legal status
Granted
Application number
CN202011184858.XA
Other languages
Chinese (zh)
Other versions
CN112463028B (en
Inventor
张雪庆
Current Assignee
Suzhou Inspur Intelligent Technology Co Ltd
Original Assignee
Suzhou Inspur Intelligent Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Suzhou Inspur Intelligent Technology Co Ltd filed Critical Suzhou Inspur Intelligent Technology Co Ltd
Priority to CN202011184858.XA priority Critical patent/CN112463028B/en
Publication of CN112463028A publication Critical patent/CN112463028A/en
Application granted granted Critical
Publication of CN112463028B publication Critical patent/CN112463028B/en
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0602 Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F 3/061 Improving I/O performance
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0628 Interfaces specially adapted for storage systems making use of a particular technique
    • G06F 3/0629 Configuration or reconfiguration of storage systems
    • G06F 3/0631 Configuration or reconfiguration of storage systems by allocating resources to storage systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0668 Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F 3/0671 In-line storage system
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5027 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals

Abstract

The application discloses an I/O processing method, system, device, and computer-readable storage medium, applied at the block layer of a storage system. The method determines the amount of available resources of the CPU (Central Processing Unit) already in use in the storage system; acquires the I/O to be processed; determines the amount of CPU resources required to process that I/O; and judges whether the required CPU resource amount exceeds the available resource amount. If it does, the frequency of the used CPU is increased, a target software context distribution queue is generated based on the frequency increase value of the used CPU, the target queue is mapped to an existing hardware context distribution queue, and target I/Os selected from the pending I/O are allocated to the target queue for processing. By raising the frequency of the used CPU and generating a target software context distribution queue to handle the pending I/O, the method improves the processing efficiency of the pending I/O and avoids I/O blockage in the storage system.

Description

I/O processing method, system, equipment and computer readable storage medium
Technical Field
The present application relates to the field of storage technologies, and in particular, to an I/O processing method, system, device, and computer-readable storage medium.
Background
In a storage system, when a user initiates a read or write operation, the user does not operate the storage device directly; the data must instead traverse a long I/O (Input/Output) stack. A read or write request generally passes in sequence through the virtual file system (VFS), the on-disk file system, the block layer, and the device driver layer before reaching the storage device, which raises an interrupt to notify the driver once processing completes. In the current architecture, each CPU (Central Processing Unit) is allocated a software queue (software context dispatch queue), an equal number of hardware context dispatch queues is allocated according to the hardware queues of the storage device, and one or more software queues are mapped to one hardware queue through a fixed mapping relation, which in turn corresponds to a hardware queue of the device. When the I/O load of some process increases suddenly, I/O accumulates in the corresponding software queue; because no additional CPU resources are available to process the extra I/O, part of the I/O times out, which severely degrades the performance of the whole system and forms a performance bottleneck.
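The fixed many-to-one mapping described above can be pictured with a small sketch (Python is used purely for illustration; the real Linux block layer implements this in C, and the round-robin rule below is an assumption, not the kernel's actual mapping policy):

```python
def build_queue_map(nr_cpus: int, nr_hw_queues: int) -> dict[int, int]:
    """Fixed mapping from each CPU's software queue to a hardware queue.

    Hypothetical round-robin rule: CPU i's software queue is bound to
    hardware queue i % nr_hw_queues for the lifetime of the device.
    """
    return {cpu: cpu % nr_hw_queues for cpu in range(nr_cpus)}


# 8 CPUs sharing 2 hardware queues: 4 software queues map onto each.
queue_map = build_queue_map(nr_cpus=8, nr_hw_queues=2)
```

Because the mapping is fixed at initialization, a burst of I/O on one CPU cannot borrow capacity from another queue, which is exactly the bottleneck the application addresses.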
In summary, the problem that those skilled in the art need to solve is how to improve the I/O processing efficiency of a storage system.
Disclosure of Invention
The purpose of the application is to provide an I/O processing method that can, to a certain extent, solve the technical problem of improving the I/O processing efficiency of a storage system. The application also provides a corresponding I/O processing system, device, and computer-readable storage medium.
In order to achieve the above purpose, the present application provides the following technical solutions:
an I/O processing method applied to a block layer of a storage system comprises the following steps:
determining the available resource amount of the used CPU in the storage system;
acquiring I/O to be processed;
determining the amount of CPU resources required for processing the I/O to be processed;
judging whether the CPU resource amount exceeds the available resource amount;
if the CPU resource amount exceeds the available resource amount, increasing the frequency of the used CPU, generating a target software context distribution queue based on the frequency increase value of the used CPU, mapping the target software context distribution queue to the existing hardware context distribution queue, selecting a target I/O from the I/O to be processed, and distributing the target I/O to the target software context distribution queue for processing.
Preferably, after determining whether the amount of the CPU resource exceeds the amount of the available resource, the method further includes:
if the CPU resource amount does not exceed the available resource amount, reducing the frequency of the used CPU, deleting the existing software context distribution queue corresponding to the frequency reduction value of the used CPU, and canceling the mapping relation between the existing software context distribution queue and the hardware context distribution queue.
Preferably, the determining the available resource amount of the used CPU in the storage system includes:
acquiring performance configuration information of the used CPU;
determining the amount of available resources of the used CPU based on the performance configuration information.
Preferably, the acquiring the performance configuration information of the used CPU includes:
acquiring the performance configuration information of the used CPU through a driver layer of the storage system.
Preferably, the increasing the frequency of the used CPU includes:
sending a frequency increase instruction to the used CPU through a driver layer of the storage system to increase the frequency of the used CPU.
Preferably, after increasing the frequency of the used CPU, the method further includes:
updating the performance parameters of the used CPU.
Preferably, the determining the amount of CPU resources required for processing the I/O to be processed includes:
acquiring a target factor of the I/O to be processed, wherein the type of the target factor comprises a bandwidth value, a time delay value and a congestion value;
determining the amount of CPU resources based on the target factor.
An I/O processing system applied to a block layer of a storage system, comprising:
the first determining module is used for determining the available resource amount of the used CPU in the storage system;
the first acquisition module is used for acquiring I/O to be processed;
a second determining module, configured to determine an amount of CPU resources required for processing the to-be-processed I/O;
the first judging module is used for judging whether the CPU resource amount exceeds the available resource amount; if the CPU resource amount exceeds the available resource amount, increasing the frequency of the used CPU, generating a target software context distribution queue based on the frequency increase value of the used CPU, mapping the target software context distribution queue to the existing hardware context distribution queue, selecting a target I/O from the I/O to be processed, and distributing the target I/O to the target software context distribution queue for processing.
An I/O processing device comprising:
a memory for storing a computer program;
a processor for implementing the steps of the I/O processing method as described above when executing the computer program.
A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the I/O processing method as set forth in any one of the preceding claims.
The I/O processing method provided by the application is applied to a block layer of a storage system: it determines the amount of available resources of the CPU (Central Processing Unit) already in use in the storage system; acquires the I/O to be processed; determines the amount of CPU resources required to process that I/O; and judges whether the required CPU resource amount exceeds the available resource amount. If it does, the frequency of the used CPU is increased, a target software context distribution queue is generated based on the frequency increase value of the used CPU, the target queue is mapped to an existing hardware context distribution queue, and target I/Os selected from the pending I/O are allocated to the target queue for processing. In the application, the block layer of the storage system can determine the amount of CPU resources the pending I/O requires and, when that amount exceeds what the used CPU has available, increase the frequency of the used CPU and generate a target software context distribution queue based on the frequency increase value to process the pending I/O; this improves the processing efficiency of the pending I/O and avoids I/O blockage in the storage system. The I/O processing system, device, and computer-readable storage medium provided by the application likewise solve the corresponding technical problem.
Drawings
To illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. The drawings in the following description show only embodiments of the present application; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart of an I/O processing method according to an embodiment of the present application;
FIG. 2 is a second flowchart of an I/O processing method according to an embodiment of the present application;
FIG. 3 is a block diagram of an I/O processing system according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of an I/O processing system coupled to a storage system as provided herein;
fig. 5 is a schematic structural diagram of an I/O processing device according to an embodiment of the present application;
fig. 6 is another schematic structural diagram of an I/O processing device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the drawings. The described embodiments are only a part of the embodiments of the present application, not all of them; all other embodiments obtained by those skilled in the art without creative effort fall within the protection scope of the present application.
In a storage system, when a user initiates a read or write operation, the user does not operate the storage device directly; the data must instead traverse a long I/O (Input/Output) stack. A read or write request generally passes in sequence through the virtual file system (VFS), the on-disk file system, the block layer, and the device driver layer before reaching the storage device, which raises an interrupt to notify the driver once processing completes. The block layer connects the file system layer and the device driver layer: starting from submit_bio, each bio (the I/O descriptor of a block device) enters the block layer, where bios are abstracted into requests for management; at an appropriate time these requests leave the block layer and enter the device driver layer. After an I/O request completes, the block layer's soft interrupt is responsible for the post-completion work.
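As a toy illustration of the bio-to-request abstraction just described (hypothetical: real request merging in the kernel also considers I/O direction, size limits, and scheduler policy), adjacent bios addressing contiguous sectors can be coalesced into a single request:

```python
def merge_bios(bios: list[tuple[int, int]]) -> list[tuple[int, int]]:
    """Merge bios, given as (start_sector, nr_sectors) pairs, into requests.

    Bios whose sector ranges are back-to-back are coalesced; everything
    else becomes its own request. Purely illustrative.
    """
    requests: list[tuple[int, int]] = []
    for start, nr in sorted(bios):
        if requests and requests[-1][0] + requests[-1][1] == start:
            # Contiguous with the previous request: extend it.
            requests[-1] = (requests[-1][0], requests[-1][1] + nr)
        else:
            requests.append((start, nr))
    return requests
```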
An early block-layer framework was the single-queue architecture, suited to storage devices with a single hardware queue (such as mechanical disks). With the continued development of storage device technology, more and more devices support multiple hardware queues, such as NVMe (Non-Volatile Memory Express) SSDs (Solid State Drives), so a multi-queue architecture was added to the block layer. The kernel added a multi-queue mechanism as early as version 3.13; after years of development multi-queue has become increasingly stable, and Linux 5.0 and later use it by default. Enterprise-level users deploy NVMe SSDs in more and more scenarios, and with the evolution of PCIe (Peripheral Component Interconnect Express), the bandwidth of PCIe 4.0 and PCIe 5.0 keeps growing while I/O processing latency keeps falling, so under the current multi-queue architecture the performance bottleneck increasingly lies in software. In this architecture, each CPU (Central Processing Unit) is allocated a software queue (software context dispatch queue), an equal number of hardware context dispatch queues is allocated according to the hardware queues of the storage device, and one or more software queues are mapped to one hardware queue through a fixed mapping relation, which in turn corresponds to a hardware queue of the device. When the I/O load of some process increases suddenly, I/O accumulates in the corresponding software queue; because no additional CPU resources are available to process the extra I/O, part of the I/O times out, which severely degrades the performance of the whole system and forms a performance bottleneck.
The I/O processing method provided by the application can improve the I/O processing efficiency of the storage system.
Referring to fig. 1, fig. 1 is a flowchart illustrating an I/O processing method according to an embodiment of the present disclosure.
The I/O processing method provided in the embodiment of the present application is applied to a block layer of a storage system, and may include the following steps:
step S101: the amount of available resources of the CPU already in use in the storage system is determined.
In practical applications, the block layer of the storage system may first determine the amount of available resources of the CPU already in use in the storage system, so as to judge according to this amount whether I/O will be blocked; the types of available resources may include the number of cores, the frequency, and the like.
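Step S101 can be pictured with a minimal data structure (a sketch under the assumption that the "available resource amount" is summarized as frequency headroom across cores; the application names cores and frequency as resource types but does not fix a formula):

```python
from dataclasses import dataclass


@dataclass
class UsedCpu:
    cores: int            # number of cores already serving I/O
    freq_mhz: float       # current operating frequency
    max_freq_mhz: float   # highest frequency the cores may be raised to

    def available_resources(self) -> float:
        """Remaining capacity, expressed as total frequency headroom (MHz)."""
        return self.cores * (self.max_freq_mhz - self.freq_mhz)


cpu = UsedCpu(cores=4, freq_mhz=2000.0, max_freq_mhz=3000.0)
# 4 cores with 1000 MHz headroom each: 4000 MHz of available resources.
```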
Step S102: and acquiring the I/O to be processed.
In practical application, after determining the available resource amount of the CPU used in the storage system, the block layer of the storage system may obtain the I/O to be processed; the type and source of that I/O can be determined according to actual needs, for example it may come from the disk file system, the driver layer, and so on.
Step S103: the amount of CPU resources required to process the I/O to be processed is determined.
In practical application, after the block layer of the storage system obtains the I/O to be processed, it may determine the amount of CPU resources required to process it, that is, the amount of resources that would be consumed if the used CPU processed the pending I/O. This amount is then compared with the available resource amount to determine whether the used CPU cores can process the pending I/O without blocking.
Step S104: judging whether the CPU resource amount exceeds the available resource amount; if the CPU resource amount exceeds the available resource amount, step S105 is executed.
Step S105: increasing the frequency of the used CPU, generating a target software context distribution queue based on the frequency increase value of the used CPU, mapping the target software context distribution queue to the existing hardware context distribution queue, selecting a target I/O from the I/O to be processed, and distributing the target I/O to the target software context distribution queue for processing.
In practical application, after determining the amount of CPU resources required to process the pending I/O, the block layer of the storage system judges whether that amount exceeds the amount of available resources. If it does, the used CPU would suffer I/O blockage while processing the pending I/O; to avoid this, the frequency of the used CPU is increased, a target software context distribution queue is generated based on the frequency increase value of the used CPU, the target queue is mapped to an existing hardware context distribution queue, and target I/Os selected from the pending I/O are allocated to the target queue for processing, so that pending I/O does not accumulate at the used CPU.
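Steps S104 and S105 together can be sketched as follows (all names and the one-I/O-per-unit-of-headroom sizing rule are assumptions made for illustration; the application only specifies that the target queue is generated "based on the frequency increase value"):

```python
def handle_overload(required: float, available: float,
                    cur_freq: float, max_freq: float,
                    sw_queues: list, queue_map: dict, hw_queue_id: int,
                    pending: list) -> float:
    """If demand exceeds headroom: raise the frequency, create a target
    software context distribution queue, map it to an existing hardware
    queue, and move selected target I/Os onto it. Returns the frequency."""
    if required <= available:
        return cur_freq                              # S104: no overload
    boost = min(max_freq - cur_freq, required - available)  # frequency increase value
    target_queue: list = []                          # S105: target software queue
    sw_queues.append(target_queue)
    queue_map[len(sw_queues) - 1] = hw_queue_id      # map to existing hardware queue
    n_target = max(1, int(boost))                    # sizing rule (assumed)
    target_queue.extend(pending[:n_target])          # select target I/O
    del pending[:n_target]
    return cur_freq + boost
```

A queue created this way drains the burst in parallel with the original per-CPU queues instead of letting it pile up behind them.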
The I/O processing method provided by the application is applied to a block layer of a storage system: it determines the amount of available resources of the CPU (Central Processing Unit) already in use in the storage system; acquires the I/O to be processed; determines the amount of CPU resources required to process that I/O; and judges whether the required CPU resource amount exceeds the available resource amount. If it does, the frequency of the used CPU is increased, a target software context distribution queue is generated based on the frequency increase value of the used CPU, the target queue is mapped to an existing hardware context distribution queue, and target I/Os selected from the pending I/O are allocated to the target queue for processing. In the application, the block layer of the storage system can determine the amount of CPU resources the pending I/O requires and, when that amount exceeds what the used CPU has available, increase the frequency of the used CPU and generate a target software context distribution queue based on the frequency increase value to process the pending I/O; this improves the processing efficiency of the pending I/O and avoids I/O blockage in the storage system.
Referring to fig. 2, fig. 2 is a second flowchart of an I/O processing method according to an embodiment of the present disclosure.
The I/O processing method provided in the embodiment of the present application is applied to a block layer of a storage system, and may include the following steps:
step S201: the amount of available resources of the CPU already in use in the storage system is determined.
Step S202: and acquiring the I/O to be processed.
Step S203: the amount of CPU resources required to process the I/O to be processed is determined.
Step S204: judging whether the CPU resource amount exceeds the available resource amount; if the CPU resource amount exceeds the available resource amount, executing step S205; if the CPU resource amount does not exceed the available resource amount, step S206 is executed.
Step S205: increasing the frequency of the used CPU, generating a target software context distribution queue based on the frequency increase value of the used CPU, mapping the target software context distribution queue to the existing hardware context distribution queue, selecting a target I/O from the I/O to be processed, and distributing the target I/O to the target software context distribution queue for processing.
Step S206: and reducing the frequency of the used CPU, deleting the existing software context distribution queue corresponding to the reduced frequency value of the used CPU, and canceling the mapping relation between the existing software context distribution queue and the hardware context distribution queue.
In practical application, to ensure that an adjustable CPU frequency remains available next time, when it is determined that the CPU resource amount does not exceed the available resource amount, the frequency of the used CPU may be reduced, the existing software context distribution queue corresponding to the frequency reduction value may be deleted, and the mapping relation between that software queue and the hardware context distribution queue cancelled, so that the freed frequency can be applied again. In addition, because the frequency of the used CPU is reduced, excessive occupation of CPU resources by pending I/O is avoided, lowering the I/O occupancy of CPU resources.
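The reverse path of step S206 can be sketched symmetrically (again a sketch: deleting the queue that corresponds to the frequency reduction and cancelling its mapping is the application's rule, while the remaining bookkeeping and the drained-queue precondition are assumptions):

```python
def handle_underload(cur_freq: float, base_freq: float,
                     sw_queues: list, queue_map: dict) -> float:
    """Lower the frequency back toward its base value, delete the software
    queue created during the earlier boost, and cancel its mapping to the
    hardware queue. Returns the new frequency."""
    reduction = cur_freq - base_freq
    if reduction <= 0 or len(sw_queues) <= 1:
        return cur_freq                   # nothing was boosted; keep state
    victim = len(sw_queues) - 1           # queue corresponding to the reduction
    assert not sw_queues[victim], "only an empty (drained) queue may be deleted"
    del sw_queues[victim]
    queue_map.pop(victim, None)           # cancel software->hardware mapping
    return cur_freq - reduction
```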
In the I/O processing method provided by the embodiment of the present application, in the process of determining the available resource amount of the used CPU in the storage system, the block layer of the storage system may, in order to determine that amount quickly, obtain the performance configuration information of the used CPU and determine the amount of available resources of the used CPU based on that information.
In a specific application scenario, in the process of acquiring the performance configuration information of the used CPU, the block layer of the storage system may acquire it through the driver layer of the storage system. For example, the block layer may control the driver layer to send a corresponding Mailbox command that queries the CPU pCode for the performance configuration information.
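The application obtains this information through driver-layer Mailbox commands to the CPU's pCode; on a stock Linux system a comparable (though different) source of frequency information is the cpufreq sysfs interface, read here as a hedged illustration (these paths exist only on Linux with cpufreq enabled, so the reader handles their absence):

```python
from pathlib import Path


def read_cpufreq_khz(path: str):
    """Read one cpufreq sysfs attribute (an integer in kHz); None if absent."""
    try:
        return int(Path(path).read_text().strip())
    except (OSError, ValueError):
        return None


base = "/sys/devices/system/cpu/cpu0/cpufreq"
cur_khz = read_cpufreq_khz(f"{base}/scaling_cur_freq")
max_khz = read_cpufreq_khz(f"{base}/cpuinfo_max_freq")
```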
In an I/O processing method provided by an embodiment of the present application, in the process of increasing the frequency of the used CPU, the block layer of the storage system may send a frequency increase instruction to the used CPU through the driver layer of the storage system. For example, the block layer may increase the frequency of the used CPU by controlling the driver layer to issue a Control Mailbox command to the CPU pCode.
In an I/O processing method provided in an embodiment of the present application, after increasing the frequency of the used CPU, the block layer of the storage system may further update the performance parameters of the used CPU. The performance parameters of the CPU may be determined according to actual needs and may include parameters such as Tj Max (Junction Temperature Max, the maximum temperature allowed for normal operation of the CPU), HWP (Hardware-Controlled Performance states) MaxPL1, CLM (Scalable Life Management) P0/P1, and the like.
In an I/O processing method provided in an embodiment of the present application, in the process of determining the CPU resource amount required to process the pending I/O, the block layer of the storage system may, in order to determine that amount quickly, obtain a target factor of the pending I/O, where the types of target factor include a bandwidth value, a delay value, a congestion value, and the like, and determine the amount of CPU resources based on the target factor.
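A minimal sketch of mapping the three target factors to a required CPU resource amount (the linear form and the weights are purely hypothetical — the application names the factors but gives no formula):

```python
def required_cpu_resources(bandwidth_mbps: float, latency_ms: float,
                           congestion: float,
                           w_bw: float = 0.5, w_lat: float = 2.0,
                           w_cong: float = 10.0) -> float:
    """Weighted combination of the target factors of the pending I/O.

    Weights w_bw, w_lat, w_cong are illustrative tuning knobs, not values
    from the application.
    """
    return w_bw * bandwidth_mbps + w_lat * latency_ms + w_cong * congestion
```

The result would then be compared against the available resource amount in step S104 to decide whether a frequency boost is needed.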
Referring to fig. 3, fig. 3 is a schematic structural diagram of an I/O processing system according to an embodiment of the present disclosure.
An I/O processing system provided in an embodiment of the present application, applied to a block layer of a storage system, may include:
a first determining module 101, configured to determine an available resource amount of a used CPU in a storage system;
a first obtaining module 102, configured to obtain an I/O to be processed;
a second determining module 103, configured to determine an amount of CPU resources required for processing the to-be-processed I/O;
a first judging module 104, configured to judge whether the amount of the CPU resource exceeds the amount of the available resource; if the CPU resource amount exceeds the available resource amount, increasing the frequency of the used CPU, generating a target software context distribution queue based on the frequency increase value of the used CPU, mapping the target software context distribution queue to the existing hardware context distribution queue, selecting a target I/O from the I/O to be processed, and allocating the target I/O to the target software context distribution queue for processing.
The I/O processing system provided in the embodiment of the present application is applied to a block layer of a storage system, and the first determining module may be further configured to: after judging whether the CPU resource amount exceeds the available resource amount, if the CPU resource amount does not exceed the available resource amount, reducing the frequency of the used CPU, deleting the existing software context distribution queue corresponding to the frequency reduction value of the used CPU, and canceling the mapping relation between the existing software context distribution queue and the hardware context distribution queue.
An I/O processing system provided in an embodiment of the present application is applied to a block layer of a storage system, where the first determining module may include:
the first acquisition submodule is used for acquiring the performance configuration information of the used CPU;
and the first determining submodule is used for determining the available resource amount of the used CPU based on the performance configuration information.
An I/O processing system provided in an embodiment of the present application is applied to a block layer of a storage system, where the first obtaining submodule may include:
the first acquisition unit is used for acquiring the performance configuration information of the used CPU through a driving layer of the storage system.
An I/O processing system provided in an embodiment of the present application is applied to a block layer of a storage system, where the first determining module may include:
a first increasing unit for sending a frequency increasing instruction to the unused CPU through a driving layer of the storage system to increase the frequency of the used CPU.
The I/O processing system provided in the embodiment of the present application is applied to a block layer of a storage system, and may further include:
and the first updating module is used for updating the performance parameters of the used CPU.
An I/O processing system provided in an embodiment of the present application is applied to a block layer of a storage system, where the second determining module may include:
the second acquisition unit is used for acquiring a target factor of the I/O to be processed, wherein the type of the target factor comprises a bandwidth value, a time delay value and a congestion value;
a first determination unit for determining the amount of CPU resources based on the target factor.
In practical applications, the connection mode of the I/O processing system provided by the present application in the storage system may be determined according to actual needs, for example, the I/O processing system provided by the present application may be connected with the storage system in the manner shown in fig. 4.
The present application further provides an I/O processing device and a computer-readable storage medium, which have effects corresponding to those of the I/O processing method provided in the embodiments of the present application. Referring to fig. 5, fig. 5 is a schematic structural diagram of an I/O processing device according to an embodiment of the present application.
An I/O processing device provided in an embodiment of the present application is applied to a block layer of a storage system, and includes a memory 201 and a processor 202, where the memory 201 stores a computer program, and the processor 202 implements the following steps when executing the computer program:
determining the available resource amount of the used CPU in the storage system;
acquiring I/O to be processed;
determining the amount of CPU resources required for processing I/O to be processed;
judging whether the CPU resource amount exceeds the available resource amount;
if the CPU resource amount exceeds the available resource amount, increasing the frequency of the used CPU, generating a target software context distribution queue based on the frequency increase value of the used CPU, mapping the target software context distribution queue to the existing hardware context distribution queue, selecting a target I/O from the I/O to be processed, and allocating the target I/O to the target software context distribution queue for processing.
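Expressed outside the patent's own language, the overload branch above can be sketched as follows. Every identifier here (the dict-based queues, the resource units, the choice of hardware queue 0 as the "existing" hardware context distribution queue) is an illustrative assumption, not part of the disclosure:

```python
def dispatch_on_overload(pending_io, required, available, sw_queues, hw_queue_map):
    """Sketch of the overload branch: when the required CPU resource amount
    exceeds the available amount, record a frequency increase for the used
    CPU, create a target software context distribution queue sized by that
    increase, map it onto an existing hardware context distribution queue,
    and move a selected target I/O onto the new queue."""
    if required <= available:
        return None                           # handled by the other branch
    freq_increase = required - available      # frequency increase value (assumed metric)
    target_queue = {"freq_increase": freq_increase, "ios": []}
    sw_queues.append(target_queue)            # new software context distribution queue
    hw_queue_map[len(sw_queues) - 1] = 0      # map it to existing hardware queue 0
    target_io = pending_io.pop(0)             # select a target I/O from the pending set
    target_queue["ios"].append(target_io)
    return target_queue
```

A caller would invoke this once per scheduling decision; the returned queue (or `None`) tells it which branch was taken.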
An I/O processing device provided in an embodiment of the present application is applied to a block layer of a storage system, and includes a memory 201 and a processor 202, where the memory 201 stores a computer program, and the processor 202 implements the following steps when executing the computer program: after judging whether the CPU resource amount exceeds the available resource amount, if the CPU resource amount does not exceed the available resource amount, reducing the frequency of the used CPU, deleting the existing software context distribution queue corresponding to the frequency reduction value of the used CPU, and canceling the mapping relation between the existing software context distribution queue and the hardware context distribution queue.
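The complementary underload branch described in this embodiment can be sketched in the same illustrative style; the one-queue-per-frequency-step correspondence and the dict-based structures are assumptions for illustration only:

```python
def scale_down(sw_queues, hw_queue_map, queues_to_drop):
    """Sketch of the underload branch: when the required CPU resource amount
    does not exceed what is available, lower the used CPU's frequency,
    delete the software context distribution queues corresponding to the
    frequency decrease, and cancel each deleted queue's mapping to its
    hardware context distribution queue."""
    removed = []
    for _ in range(min(queues_to_drop, len(sw_queues))):
        q = sw_queues.pop()               # drop the most recently added queue
        hw_queue_map.pop(q["id"], None)   # cancel its hardware-queue mapping
        removed.append(q)
    return removed
```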
An I/O processing device provided in an embodiment of the present application is applied to a block layer of a storage system, and includes a memory 201 and a processor 202, where the memory 201 stores a computer program, and the processor 202 implements the following steps when executing the computer program: acquiring performance configuration information of a used CPU; an amount of available resources of the used CPU is determined based on the performance configuration information.
An I/O processing device provided in an embodiment of the present application is applied to a block layer of a storage system, and includes a memory 201 and a processor 202, where the memory 201 stores a computer program, and the processor 202 implements the following steps when executing the computer program: acquiring the performance configuration information of the used CPU through a driver layer of the storage system.
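As an illustration of the two steps repeated in these embodiments (acquiring performance configuration information through the driver layer, then deriving the available resource amount from it), a minimal sketch follows. The key names and the frequency-headroom metric are assumptions modelled loosely on Linux cpufreq attributes, not taken from the patent:

```python
def parse_perf_config(text):
    """Parse 'key: value' performance-configuration lines such as a driver
    layer might report; the key names and format are illustrative."""
    cfg = {}
    for line in text.splitlines():
        if ":" in line:
            key, value = line.split(":", 1)
            cfg[key.strip()] = int(value.strip())
    return cfg

def available_resource_amount(cfg):
    # Assumed metric: headroom = maximum frequency minus current frequency.
    return cfg["max_freq_khz"] - cfg["cur_freq_khz"]
```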
An I/O processing device provided in an embodiment of the present application is applied to a block layer of a storage system, and includes a memory 201 and a processor 202, where the memory 201 stores a computer program, and the processor 202 implements the following steps when executing the computer program: sending a frequency increase instruction to the used CPU through a driver layer of the storage system to increase the frequency of the used CPU.
An I/O processing device provided in an embodiment of the present application is applied to a block layer of a storage system, and includes a memory 201 and a processor 202, where the memory 201 stores a computer program, and the processor 202 implements the following steps when executing the computer program: after increasing the frequency of the used CPU, the performance parameters of the used CPU are updated.
An I/O processing device provided in an embodiment of the present application is applied to a block layer of a storage system, and includes a memory 201 and a processor 202, where the memory 201 stores a computer program, and the processor 202 implements the following steps when executing the computer program: acquiring a target factor of I/O to be processed, wherein the type of the target factor comprises a bandwidth value, a time delay value and a congestion value; an amount of CPU resources is determined based on the target factor.
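The target-factor step above can be sketched as a simple combining function. The weighted sum and the weights themselves are assumptions, since the text names the three factors (bandwidth value, time delay value, congestion value) but does not disclose how they are combined:

```python
def cpu_resource_amount(bandwidth, time_delay, congestion, weights=(0.5, 0.3, 0.2)):
    """Illustrative estimate of the CPU resource amount required for an I/O
    from its three target factors; the weights are hypothetical."""
    wb, wd, wc = weights
    return wb * bandwidth + wd * time_delay + wc * congestion
```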
Referring to fig. 5, another I/O processing device provided in the embodiment of the present application may further include: an input port 203 connected to the processor 202, for transmitting externally input commands to the processor 202; a display unit 204 connected to the processor 202, for displaying the processing results of the processor 202 to the outside; and a communication module 205 connected to the processor 202, for realizing communication between the I/O processing device and the outside world. The display unit 204 may be a display panel, a laser scanning display, or the like; the communication methods adopted by the communication module 205 include, but are not limited to, Mobile High-Definition Link (MHL), Universal Serial Bus (USB), High-Definition Multimedia Interface (HDMI), and wireless connections: wireless fidelity (WiFi), Bluetooth, Bluetooth Low Energy, and IEEE 802.11s-based communication technologies.
The computer-readable storage medium provided in the embodiments of the present application is applied to a block layer of a storage system, in which a computer program is stored, and when the computer program is executed by a processor, the following steps are implemented:
determining the available resource amount of the used CPU in the storage system;
acquiring I/O to be processed;
determining the amount of CPU resources required for processing I/O to be processed;
judging whether the CPU resource amount exceeds the available resource amount;
if the CPU resource amount exceeds the available resource amount, increasing the frequency of the used CPU, generating a target software context distribution queue based on the frequency increase value of the used CPU, mapping the target software context distribution queue to the existing hardware context distribution queue, selecting a target I/O from the I/O to be processed, and allocating the target I/O to the target software context distribution queue for processing.
The computer-readable storage medium provided in the embodiments of the present application is applied to a block layer of a storage system, in which a computer program is stored, and when the computer program is executed by a processor, the following steps are implemented: after judging whether the CPU resource amount exceeds the available resource amount, if the CPU resource amount does not exceed the available resource amount, reducing the frequency of the used CPU, deleting the existing software context distribution queue corresponding to the frequency reduction value of the used CPU, and canceling the mapping relation between the existing software context distribution queue and the hardware context distribution queue.
The computer-readable storage medium provided in the embodiments of the present application is applied to a block layer of a storage system, in which a computer program is stored, and when the computer program is executed by a processor, the following steps are implemented: acquiring performance configuration information of a used CPU; an amount of available resources of the used CPU is determined based on the performance configuration information.
The computer-readable storage medium provided in the embodiments of the present application is applied to a block layer of a storage system, in which a computer program is stored, and when the computer program is executed by a processor, the following steps are implemented: acquiring the performance configuration information of the used CPU through a driver layer of the storage system.
The computer-readable storage medium provided in the embodiments of the present application is applied to a block layer of a storage system, in which a computer program is stored, and when the computer program is executed by a processor, the following steps are implemented: sending a frequency increase instruction to the used CPU through a driver layer of the storage system to increase the frequency of the used CPU.
The computer-readable storage medium provided in the embodiments of the present application is applied to a block layer of a storage system, in which a computer program is stored, and when the computer program is executed by a processor, the following steps are implemented: after increasing the frequency of the used CPU, the performance parameters of the used CPU are updated.
The computer-readable storage medium provided in the embodiments of the present application is applied to a block layer of a storage system, in which a computer program is stored, and when the computer program is executed by a processor, the following steps are implemented: acquiring a target factor of I/O to be processed, wherein the type of the target factor comprises a bandwidth value, a time delay value and a congestion value; an amount of CPU resources is determined based on the target factor.
The computer-readable storage media to which this application relates include random access memory (RAM), read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disks, removable disks, CD-ROMs, or any other form of storage medium known in the art.
For descriptions of the relevant parts of the I/O processing system, device, and computer-readable storage medium provided in the embodiments of the present application, refer to the detailed description of the corresponding parts of the I/O processing method provided in the embodiments of the present application; details are not repeated here. In addition, the parts of the above technical solutions that are consistent with the implementation principles of the corresponding technical solutions in the prior art are not described in detail, so as to avoid redundant description.
It is further noted that, herein, relational terms such as first and second are used only to distinguish one entity or action from another, and do not necessarily require or imply any actual relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. An I/O processing method applied to a block layer of a storage system, comprising:
determining the available resource amount of the used CPU in the storage system;
acquiring I/O to be processed;
determining the amount of CPU resources required for processing the I/O to be processed;
judging whether the CPU resource amount exceeds the available resource amount;
if the CPU resource amount exceeds the available resource amount, increasing the frequency of the used CPU, generating a target software context distribution queue based on the frequency increase value of the used CPU, mapping the target software context distribution queue to the existing hardware context distribution queue, selecting a target I/O from the I/O to be processed, and distributing the target I/O to the target software context distribution queue for processing.
2. The method of claim 1, wherein after determining whether the amount of CPU resources exceeds the amount of available resources, further comprising:
if the CPU resource amount does not exceed the available resource amount, reducing the frequency of the used CPU, deleting the existing software context distribution queue corresponding to the frequency reduction value of the used CPU, and canceling the mapping relation between the existing software context distribution queue and the hardware context distribution queue.
3. The method of claim 1, wherein the determining the available resource amount of the used CPU in the storage system comprises:
acquiring performance configuration information of the used CPU;
determining the amount of available resources of the used CPU based on the performance configuration information.
4. The method of claim 3, wherein the obtaining performance configuration information of the used CPU comprises:
and acquiring the performance configuration information of the used CPU through a driver layer of the storage system.
5. The method of claim 1, wherein increasing the frequency of the used CPUs comprises:
sending a frequency increase instruction to the used CPU through a driver layer of the storage system to increase the frequency of the used CPU.
6. The method of claim 1, wherein after increasing the frequency of the used CPU, further comprising:
and updating the performance parameters of the used CPU.
7. The method of any of claims 1 to 6, wherein said determining an amount of CPU resources required to process said I/O to be processed comprises:
acquiring a target factor of the I/O to be processed, wherein the type of the target factor comprises a bandwidth value, a time delay value and a congestion value;
determining the amount of CPU resources based on the target factor.
8. An I/O processing system applied to a block layer of a storage system, comprising:
the first determining module is used for determining the available resource amount of the used CPU in the storage system;
the first acquisition module is used for acquiring I/O to be processed;
a second determining module, configured to determine an amount of CPU resources required for processing the to-be-processed I/O;
the first judging module is used for judging whether the CPU resource amount exceeds the available resource amount; if the CPU resource amount exceeds the available resource amount, increasing the frequency of the used CPU, generating a target software context distribution queue based on the frequency increase value of the used CPU, mapping the target software context distribution queue to the existing hardware context distribution queue, selecting a target I/O from the I/O to be processed, and distributing the target I/O to the target software context distribution queue for processing.
9. An I/O processing device, comprising:
a memory for storing a computer program;
processor for implementing the steps of the I/O processing method according to any one of claims 1 to 7 when executing said computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the I/O processing method according to any one of claims 1 to 7.
CN202011184858.XA 2020-10-29 2020-10-29 I/O processing method, system, equipment and computer readable storage medium Active CN112463028B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011184858.XA CN112463028B (en) 2020-10-29 2020-10-29 I/O processing method, system, equipment and computer readable storage medium


Publications (2)

Publication Number Publication Date
CN112463028A true CN112463028A (en) 2021-03-09
CN112463028B CN112463028B (en) 2023-01-10

Family

ID=74835652

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011184858.XA Active CN112463028B (en) 2020-10-29 2020-10-29 I/O processing method, system, equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN112463028B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11914864B2 (en) 2021-07-01 2024-02-27 Samsung Electronics Co., Ltd. Storage device and method of data management on a storage device

Citations (3)

Publication number Priority date Publication date Assignee Title
CN107818056A (en) * 2016-09-14 2018-03-20 杭州华为数字技术有限公司 A kind of queue management method and device
CN109683823A (en) * 2018-12-20 2019-04-26 湖南国科微电子股份有限公司 A kind of method and device managing the more concurrent requests of memory
CN110309001A (en) * 2018-03-27 2019-10-08 天津麒麟信息技术有限公司 A kind of optimization system and method based on the more queues of Linux generic block layer



Also Published As

Publication number Publication date
CN112463028B (en) 2023-01-10

Similar Documents

Publication Publication Date Title
US10628216B2 (en) I/O request scheduling method and apparatus by adjusting queue depth associated with storage device based on hige or low priority status
CN112099941B (en) Method, equipment and system for realizing hardware acceleration processing
WO2018049899A1 (en) Queue management method and apparatus
EP3255553A1 (en) Transmission control method and device for direct memory access
CN104102693A (en) Object processing method and device
CA2987807C (en) Computer device and method for reading/writing data by computer device
JP2014515526A (en) Resource allocation for multiple resources for dual operating systems
US11487478B2 (en) Memory system and method of controlling nonvolatile memory
KR20190047035A (en) Nonvolatile memory persistence method and computing device
US11010094B2 (en) Task management method and host for electronic storage device
CN113342256A (en) Storage device configured to support multiple hosts and method of operating the same
CN112463028B (en) I/O processing method, system, equipment and computer readable storage medium
CN112463027B (en) I/O processing method, system, equipment and computer readable storage medium
CN109995595B (en) RGW quota determining method, system, equipment and computer medium
US20230393782A1 (en) Io request pipeline processing device, method and system, and storage medium
CN111857996B (en) Interrupt processing method, system, equipment and computer readable storage medium
CN113157611B (en) Data transmission control method, device, equipment and readable storage medium
US11481341B2 (en) System and method for dynamically adjusting priority-based allocation of storage system resources
TW202303378A (en) Fairshare between multiple ssd submission queues
CN113672176A (en) Data reading method, system, equipment and computer readable storage medium
CN109947572B (en) Communication control method, device, electronic equipment and storage medium
WO2022142515A1 (en) Instance management method and apparatus, and cloud application engine
US20220164140A1 (en) Command slot management for memory devices
US8209449B2 (en) Method for enabling several virtual processing units to directly and concurrently access a peripheral unit
KR20230013828A (en) A system on chip and a operating method of the semiconductor package

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant