CN114138178B - IO processing method and system - Google Patents
- Publication number: CN114138178B (application CN202111205383.2A)
- Authority: CN (China)
- Prior art keywords: management module, processed, processor, data, processors
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06F3/0607 — Improving or facilitating administration, e.g. storage management, by facilitating the process of upgrading existing storage systems, e.g. for improving compatibility between host and storage device
- G06F3/061 — Improving I/O performance
- G06F3/0659 — Command handling arrangements, e.g. command buffers, queues, command scheduling
- G06F3/0679 — Non-volatile semiconductor memory device, e.g. flash memory, one time programmable memory [OTP]
- Y02D10/00 — Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The invention provides an IO processing method and system. The method comprises: deploying a data management module on each of a plurality of processors of a solid state disk, dividing the plurality of processors into a plurality of groups, and selecting a designated processor in each group according to a preset rule; taking the data management module deployed on the designated processor as the main management module, and taking the data management modules deployed on the other processors in that group as standby management modules; when the main management module receives an IO command, calculating the data traffic currently being processed and judging whether it exceeds the maximum processing traffic; and if it does, the main management module checks the pending data traffic of each standby management module, selects one standby management module accordingly as the selected management module, and forwards the IO command to the selected management module so that the selected management module processes it. The invention thereby manages data traffic.
Description
Technical Field
The invention relates to the technical field of computers, in particular to an IO processing method and system.
Background
For SSD (solid state drive) products, latency jitter is a very important QoS (Quality of Service) performance measure. Latency jitter reflects the stability of the SSD: the smaller the jitter, the more stable the SSD.
Most conventional SSDs have multiple processors. Because the DM (data management module) is one of the key modules affecting SSD performance — in theory, the more DMs, the higher the SSD's maximum processing capacity — the prior art deploys one DM on each processor, with all DMs equal in status; after receiving IO commands from the host, the controller distributes them evenly across the DMs. To guarantee maximum bandwidth, the number of DMs cannot be reduced.
This approach has a significant drawback: the SSD must also deploy processing modules other than the DM, and since every processor already hosts a DM, some processors inevitably host multiple modules. Such a processor must switch among the tasks of several modules, which increases latency jitter and degrades the QoS performance of the SSD.
Disclosure of Invention
Therefore, the present invention is directed to an IO processing method and system that solve the prior-art problem of large latency jitter caused by each processor of a solid state disk having to handle both a data management module and several other modules.
Based on the above object, the present invention provides an IO processing method, comprising the steps of:
deploying a data management module on each of a plurality of processors of the solid state disk, dividing the plurality of processors into a plurality of groups, and selecting a designated processor in each group according to a preset rule;
taking the data management module deployed on the designated processor as the main management module, and taking the data management modules deployed on the other processors in the group where the designated processor is located as standby management modules;
in response to the main management module receiving an IO command, calculating the data traffic currently being processed and judging whether it exceeds the maximum processing traffic;
and in response to the currently processed data traffic exceeding the maximum processing traffic, the main management module checking the pending data traffic of each standby management module, selecting one standby management module as the selected management module based on that pending traffic, and forwarding the IO command to the selected management module so that the selected management module processes the IO command.
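A minimal sketch of this decision flow (a hypothetical Python simulation, not the patent's firmware; the traffic units, class names, and the exact comparison against the maximum processing traffic are assumptions):

```python
from dataclasses import dataclass

@dataclass
class DM:
    """One data management module, deployed on one processor (simplified model)."""
    name: str
    pending: int = 0  # data traffic currently being processed, arbitrary units

    def handle(self, size: int) -> str:
        self.pending += size
        return self.name

def dispatch(main: DM, standbys: list[DM], size: int, max_traffic: int) -> str:
    """The main DM processes the IO itself while under the maximum processing
    traffic; once over it, it forwards to the least-loaded standby DM."""
    if main.pending <= max_traffic:
        return main.handle(size)
    chosen = min(standbys, key=lambda dm: dm.pending)  # least pending traffic
    return chosen.handle(size)

main = DM("DM0", pending=2)
standbys = [DM("DM1", pending=3), DM("DM2", pending=1), DM("DM3", pending=7)]
assert dispatch(main, standbys, size=4, max_traffic=10) == "DM0"  # under the limit
main.pending = 15                                                 # high task pressure
assert dispatch(main, standbys, size=4, max_traffic=10) == "DM2"  # forwarded
```

Forwarding only when the main DM is overloaded keeps the standby DMs lightly loaded under low task pressure, which is what lets their processors avoid task switching and its latency jitter.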
In some embodiments, selecting the designated processor in each group according to the preset rule includes:
selecting the processor with the fewest pending tasks in each group as the designated processor.
In some embodiments, selecting the processor with the fewest pending tasks in each group as the designated processor includes:
selecting, in each group, the processor on which only the data management module is deployed as the designated processor.
In some embodiments, selecting one of the standby management modules as the selected management module based on each pending data traffic includes:
selecting the standby management module with the least pending data traffic as the selected management module.
In some embodiments, the method further includes:
in response to the currently processed data traffic not exceeding the maximum processing traffic, processing the IO command by the main management module.
In some embodiments, dividing the plurality of processors into groups includes:
dividing the plurality of processors evenly into groups.
In some embodiments, the main management module receiving the IO command includes:
the main management module receiving the IO command from the command submission module.
In some embodiments, the method further includes:
in response to the solid state disk powering on, allocating the same context descriptors to each data management module.
In some embodiments, the method further includes:
in response to the main management module forwarding the IO command to the selected management module, releasing the context descriptors occupied by the main management module.
In another aspect of the present invention, there is also provided an IO processing system, including:
a designated processor selection module configured to deploy a data management module on each of a plurality of processors of the solid state disk, divide the plurality of processors into a plurality of groups, and select a designated processor in each group according to a preset rule;
a classification module configured to take the data management module deployed on the designated processor as the main management module, and take the data management modules deployed on the other processors in the group where the designated processor is located as standby management modules;
a judging module configured to, in response to the main management module receiving an IO command, calculate the currently processed data traffic and judge whether it exceeds the maximum processing traffic; and
an IO processing module configured to, in response to the currently processed data traffic exceeding the maximum processing traffic, have the main management module check the pending data traffic of each standby management module, select one standby management module as the selected management module based on that pending traffic, and forward the IO command to the selected management module so that the selected management module processes the IO command.
In yet another aspect of the present invention, there is also provided a computer readable storage medium storing computer program instructions which, when executed by a processor, implement the above-described method.
In yet another aspect of the present invention, there is also provided a computer device comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, performs the above method.
The invention has at least the following beneficial technical effects:
according to the IO processing method, the main management module and the standby management module are divided for the plurality of data management modules of the solid state disk, only the main management module is used under low task pressure, and the main management module forwards IO commands to the standby management module for processing under high task pressure, so that the effect of data flow management is achieved, delay jitter can be effectively reduced under the condition that the maximum bandwidth is unchanged, and further, the bandwidth performance requirement is met, and the QoS performance requirement of the solid state disk is improved.
Drawings
To more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the invention; a person skilled in the art may obtain other embodiments from these drawings without inventive effort.
FIG. 1 is a schematic diagram of an IO processing method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of an IO processing system according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a computer readable storage medium implementing an IO processing method according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of the hardware structure of a computer device for performing an IO processing method according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the following embodiments of the present invention will be described in further detail with reference to the accompanying drawings.
It should be noted that, in the embodiments of the present invention, the expressions "first" and "second" are used to distinguish two non-identical entities or parameters with the same name; "first" and "second" are used only for convenience of expression and should not be construed as limiting the embodiments of the present invention. Furthermore, the terms "comprise" and "have", and any variations thereof, are intended to cover a non-exclusive inclusion, such as a process, method, system, or article that comprises a list of steps or units.
Based on the above object, in a first aspect of the embodiments of the present invention, an embodiment of an IO processing method is provided. Fig. 1 is a schematic diagram of an embodiment of an IO processing method provided by the present invention. As shown in fig. 1, the embodiment of the present invention includes the following steps:
step S10, deploying a data management module on each of a plurality of processors of the solid state disk, dividing the plurality of processors into a plurality of groups, and selecting a designated processor in each group according to a preset rule;
step S20, taking the data management module deployed on the designated processor as the main management module, and taking the data management modules deployed on the other processors in the group where the designated processor is located as standby management modules;
step S30, in response to the main management module receiving an IO command, calculating the data traffic currently being processed and judging whether it exceeds the maximum processing traffic;
step S40, in response to the currently processed data traffic exceeding the maximum processing traffic, the main management module checking the pending data traffic of each standby management module, selecting one standby management module as the selected management module based on that pending traffic, and forwarding the IO command to the selected management module so that the selected management module processes the IO command.
In the embodiments of the present invention, IO stands for data Input/Output.
The processor of a Solid State Disk (SSD) — the main controller — is essentially a small processor, similar to a mobile phone processor and typically based on an ARM architecture (some high-end SSDs use other RISC architectures). It is the third major computing core in a host besides the CPU (central processing unit) and the GPU (graphics processor); it controls the order in which information is stored and keeps each flash memory unit operating normally.
According to the IO processing method provided by the embodiment of the invention, the plurality of data management modules of the solid state disk are divided into main and standby management modules. Under low task pressure only the main management module is used; under high task pressure the main management module forwards IO commands to a standby management module for processing. This manages data traffic and effectively reduces latency jitter while keeping the maximum bandwidth unchanged, thereby meeting the bandwidth performance requirement and improving the QoS performance of the solid state disk.
In some embodiments, selecting the designated processor in each group according to the preset rule includes: selecting the processor with the fewest pending tasks in each group as the designated processor.
In some embodiments, selecting the processor with the fewest pending tasks in each group as the designated processor includes: selecting, in each group, the processor on which only the data management module is deployed as the designated processor.
In the above embodiment, if only the data management module is deployed on the designated processor and no other modules are deployed, the designated processor can concentrate on the tasks of the data management module without switching between different tasks, which effectively reduces latency jitter and guarantees the QoS (quality of service) performance of the solid state disk.
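As a sketch, the preset rule of these two embodiments — prefer a processor that hosts only the DM, falling back to the one with the fewest deployed modules — might look like the following (a hypothetical Python model; module names such as "FTL" and "ECC" are illustrative placeholders, not names from the patent):

```python
def pick_designated(group: list[dict]) -> dict:
    """Prefer a processor on which only the data management module (DM) is
    deployed; otherwise pick the one hosting the fewest modules, as a proxy
    for the fewest pending tasks."""
    only_dm = [p for p in group if p["modules"] == ["DM"]]
    if only_dm:
        return only_dm[0]
    return min(group, key=lambda p: len(p["modules"]))

group = [
    {"id": 0, "modules": ["DM", "FTL"]},  # shares its processor with another module
    {"id": 1, "modules": ["DM"]},         # only the DM -> no task switching
    {"id": 2, "modules": ["DM", "ECC"]},
]
assert pick_designated(group)["id"] == 1
```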
In some embodiments, selecting one of the standby management modules as the selected management module based on each pending data traffic includes: selecting the standby management module with the least pending data traffic as the selected management module.
In this embodiment, to make the data processing corresponding to the IO command more efficient, the standby management module with the least pending data traffic is selected as the selected management module.
In some embodiments, the method further includes: in response to the currently processed data traffic not exceeding the maximum processing traffic, processing the IO command by the main management module.
Specifically, a data management module is deployed on each of the plurality of processors of the solid state disk, the processors are divided into groups, and a designated processor is selected in each group according to a preset rule; the data management module deployed on the designated processor serves as the main management module, and the data management modules deployed on the other processors in that group serve as standby management modules; when the main management module receives an IO command, it calculates the currently processed data traffic and judges whether it exceeds the maximum processing traffic; if not, the main management module processes the IO command itself.
In some embodiments, dividing the plurality of processors into groups includes: dividing the plurality of processors evenly into groups.
In this embodiment, the grouping of the plurality of processors includes, but is not limited to, even grouping; an appropriate grouping may be chosen according to the actual situation. If even grouping is chosen, adjacent processors may be placed in the same group according to the positions of the processors on the solid state disk.
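An even, adjacency-preserving grouping as described here can be sketched as follows (a simplified model that assumes the processor count divides evenly into the group count):

```python
def group_evenly(processor_ids: list[int], n_groups: int) -> list[list[int]]:
    """Split processors into n_groups contiguous slices, so that processors
    that are adjacent on the solid state disk end up in the same group."""
    size = len(processor_ids) // n_groups
    return [processor_ids[i * size:(i + 1) * size] for i in range(n_groups)]

# 8 processors, two groups of adjacent processors
assert group_evenly(list(range(8)), 2) == [[0, 1, 2, 3], [4, 5, 6, 7]]
```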
In some embodiments, the main management module receiving the IO command includes: the main management module receiving the IO command from the command submission module.
In this embodiment, the command submission module is the SubQ Manager, which is responsible for submitting SubQ commands. In this embodiment, the configuration of the SubQ Manager is modified so that it can send IO commands only to the main management modules, not to the standby management modules.
In some embodiments, the method further includes: in response to the solid state disk powering on, allocating the same context descriptors to each data management module.
In some embodiments, the method further includes: in response to the main management module forwarding the IO command to the selected management module, releasing the context descriptors occupied by the main management module.
In the above embodiments, the context descriptors include Command, Dataframe, and the like. A descriptor here is analogous to a file descriptor: the kernel accesses files through file descriptors. When an existing file is opened or a new file is created, the kernel returns a file descriptor, and reading or writing a file likewise requires a file descriptor to specify the target file. A file descriptor is a handle, represented by an unsigned integer, that a process uses to identify an open file. Each file descriptor corresponds to one open file, while different file descriptors may point to the same file — the same file may be opened by different processes, or opened multiple times within the same process. A file descriptor is associated with a file object that contains related information (e.g. the file's open mode and position); this information is called the context of the file.
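The file-descriptor analogy drawn above can be seen directly in a short standard-library example (this illustrates OS file descriptors only, not the SSD's own Command/Dataframe descriptors):

```python
import os
import tempfile

# The kernel returns a small unsigned integer as the handle for an open file.
fd, path = tempfile.mkstemp()
assert isinstance(fd, int) and fd >= 0
os.write(fd, b"context")           # reads and writes go through the descriptor
os.close(fd)                       # releasing the handle when done with it

fd2 = os.open(path, os.O_RDONLY)   # the same file opened again yields a new descriptor
assert os.read(fd2, 7) == b"context"
os.close(fd2)
os.unlink(path)
```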
The IO processing method of an exemplary embodiment of the present invention is as follows:
(1) Assume the solid state disk (SSD) has 8 processors, each of which deploys one data management module (DM). At power-on initialization, each DM is allocated the same context descriptors, such as Command and Dataframe, so that each DM can exert its maximum data processing capacity.
(2) The 8 DMs are divided evenly into two groups: DM0, DM1, DM2, DM3 in one group and DM4, DM5, DM6, DM7 in the other. In each group, the DM deployed on the processor with fewer tasks is selected as the main DM (i.e. the main management module), and the other three serve as standby DMs (i.e. standby management modules). "Fewer tasks" means that only the DM is deployed on that processor, with no other modules.
(3) The configuration of the SubQ Manager (command submission module) is modified so that it sends IO commands only to the two main DMs and not to the six standby DMs.
(4) After receiving an IO command, the main DM calculates the traffic it is currently processing. If this does not exceed its maximum processing capacity, it processes the IO command itself without forwarding. If the traffic calculation shows that the currently processed traffic exceeds the maximum processing capacity, the main DM checks the traffic conditions of the 3 standby DMs in its group, selects the standby DM with the least pending traffic, and forwards the IO command to it so that the standby DM helps process the command; after a successful forward, the main DM immediately releases the context descriptors it occupied.
In this way, the main DM both processes data and regulates traffic, thereby controlling data flow.
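Steps (1)–(4) above can be sketched end to end (a hypothetical Python simulation; the traffic units, the descriptor flag, and the per-processor module counts are illustrative assumptions):

```python
class DM:
    """Simplified data management module with its power-on context descriptors."""
    def __init__(self, name: str, other_modules: int = 0):
        self.name = name
        self.other_modules = other_modules  # non-DM modules sharing the processor
        self.pending = 0                    # pending data traffic, arbitrary units
        self.has_descriptors = True         # Command/Dataframe descriptors from power-on

    def handle(self, size: int) -> str:
        self.pending += size
        return self.name

def main_dm_dispatch(main: DM, group: list[DM], size: int, max_traffic: int) -> str:
    """(4): process locally while under the limit; otherwise forward to the
    standby DM with the least pending traffic and release the descriptors."""
    if main.pending <= max_traffic:
        return main.handle(size)
    standbys = [dm for dm in group if dm is not main]
    chosen = min(standbys, key=lambda dm: dm.pending)
    result = chosen.handle(size)
    main.has_descriptors = False  # released immediately after a successful forward
    return result

# (1)-(2): 8 DMs, split evenly into two groups; the DM whose processor hosts
# no other modules becomes the main DM of its group (DM0 and DM4 here).
dms = [DM(f"DM{i}", other_modules=0 if i in (0, 4) else 1) for i in range(8)]
groups = [dms[:4], dms[4:]]
mains = [min(g, key=lambda dm: dm.other_modules) for g in groups]
assert [m.name for m in mains] == ["DM0", "DM4"]

# (3)-(4): only main DMs receive IO commands; they forward under pressure.
assert main_dm_dispatch(mains[0], groups[0], size=6, max_traffic=10) == "DM0"
mains[0].pending = 15  # simulate high task pressure
assert main_dm_dispatch(mains[0], groups[0], size=6, max_traffic=10) == "DM1"
assert mains[0].has_descriptors is False
```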
In a second aspect of the embodiments of the present invention, an IO processing system is also provided. FIG. 2 is a schematic diagram of an embodiment of the IO processing system provided by the present invention. As shown in fig. 2, the IO processing system includes: a designated processor selection module 10 configured to deploy a data management module on each of a plurality of processors of the solid state disk, divide the plurality of processors into a plurality of groups, and select a designated processor in each group according to a preset rule; a classification module 20 configured to take the data management module deployed on the designated processor as the main management module, and take the data management modules deployed on the other processors in the group where the designated processor is located as standby management modules; a judging module 30 configured to, in response to the main management module receiving an IO command, calculate the currently processed data traffic and judge whether it exceeds the maximum processing traffic; and an IO processing module 40 configured to, in response to the currently processed data traffic exceeding the maximum processing traffic, have the main management module check the pending data traffic of each standby management module, select one standby management module as the selected management module based on that pending traffic, and forward the IO command to the selected management module so that the selected management module processes the IO command.
In some embodiments, the designated processor selection module 10 includes a pending-task determination module configured to select the processor with the fewest pending tasks in each group as the designated processor.
In some embodiments, the pending-task determination module includes a deployment module configured to select, in each group, the processor on which only the data management module is deployed as the designated processor.
In some embodiments, the IO processing module 40 includes a standby management module selection module configured to select the standby management module with the least pending data traffic as the selected management module.
In some embodiments, the system further includes a main management module processing module configured to have the main management module process the IO command in response to the currently processed data traffic not exceeding the maximum processing traffic.
In some embodiments, the designated processor selection module 10 includes a grouping module configured to divide the plurality of processors evenly into groups.
In some embodiments, the judging module 30 includes an IO command receiving module configured to have the main management module receive IO commands from the command submission module.
In some embodiments, the system further includes a descriptor distribution module configured to allocate the same context descriptors to each data management module in response to the solid state disk powering on.
In some embodiments, the system further includes a descriptor release module configured to release the context descriptors occupied by the main management module in response to the main management module forwarding the IO command to the selected management module.
According to the IO processing system provided by the embodiment of the invention, the plurality of data management modules of the solid state disk are divided into main and standby management modules. Under low task pressure only the main management module is used; under high task pressure the main management module forwards IO commands to a standby management module for processing. This manages data traffic and effectively reduces latency jitter while keeping the maximum bandwidth unchanged, thereby meeting the bandwidth performance requirement and improving the QoS performance of the solid state disk.
In a third aspect of the embodiment of the present invention, a computer readable storage medium is provided, and fig. 3 shows a schematic diagram of a computer readable storage medium for implementing an IO processing method according to an embodiment of the present invention. As shown in fig. 3, the computer-readable storage medium 3 stores computer program instructions 31.
The computer program instructions 31 when executed by a processor implement the steps of:
deploying a data management module on each of a plurality of processors of the solid state disk, dividing the plurality of processors into a plurality of groups, and selecting a designated processor in each group according to a preset rule;
taking the data management module deployed on the designated processor as the main management module, and taking the data management modules deployed on the other processors in the group where the designated processor is located as standby management modules;
in response to the main management module receiving an IO command, calculating the data traffic currently being processed and judging whether it exceeds the maximum processing traffic;
and in response to the currently processed data traffic exceeding the maximum processing traffic, the main management module checking the pending data traffic of each standby management module, selecting one standby management module as the selected management module based on that pending traffic, and forwarding the IO command to the selected management module so that the selected management module processes the IO command.
In some embodiments, selecting the designated processor in each group according to the preset rule comprises: selecting the processor with the fewest tasks to be processed in each group as the designated processor.
In some embodiments, selecting the processor with the fewest tasks to be processed in each group as the designated processor comprises: selecting, in each group, a processor on which only the data management module is deployed as the designated processor.
In some embodiments, selecting one of the standby management modules as the selected management module based on the respective data traffic to be processed comprises: selecting the standby management module with the least data traffic to be processed as the selected management module.
In some embodiments, the steps further comprise: processing the IO command by the main management module in response to the currently processed data traffic not exceeding the maximum processing traffic.
In some embodiments, dividing the plurality of processors into a plurality of groups comprises: dividing the plurality of processors equally into the groups.
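The equal grouping and the fewest-tasks selection rule can be illustrated as follows. The function names and the round-robin split are assumptions for the sketch, since the text does not fix how the equal division is performed.

```python
def group_processors(processor_ids, group_count):
    # Divide the processors evenly: a round-robin split keeps the group
    # sizes equal (within one) for any processor count
    groups = [[] for _ in range(group_count)]
    for i, pid in enumerate(processor_ids):
        groups[i % group_count].append(pid)
    return groups

def select_designated(group, pending_tasks):
    # Preset rule from the text: the processor with the fewest tasks to be
    # processed becomes the designated processor of its group
    return min(group, key=lambda pid: pending_tasks.get(pid, 0))
```

For instance, eight processors split into two groups gives two groups of four, and within each group the least-loaded processor hosts the main management module.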
In some embodiments, receiving the IO command by the main management module comprises: the main management module receives the IO command from the command submission module.
In some embodiments, the steps further comprise: in response to power-on of the solid state disk, allocating the same context descriptor to each data management module.
In some embodiments, the steps further comprise: in response to the main management module forwarding the IO command to the selected management module, releasing the context descriptor occupied by the main management module.
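The descriptor lifecycle in the two embodiments above might look like the following sketch. The pool abstraction and all of its names are hypothetical, added only to make the allocate-on-power-on / release-on-forward pairing concrete.

```python
class ContextDescriptorPool:
    # Hypothetical pool; the text only states that each data management
    # module is allocated the same context descriptor on power-on and that
    # the main module releases its descriptor after forwarding an IO command
    def __init__(self, module_ids, descriptor):
        # Power-on: hand an identical descriptor to every module, so any
        # standby can continue a forwarded IO in the same context
        self.held = {mid: descriptor for mid in module_ids}

    def release(self, module_id):
        # Forwarding done: the main module frees the descriptor it occupied
        return self.held.pop(module_id, None)
```

Because every module holds the same descriptor, the selected standby needs no extra context transfer when an IO command is forwarded to it, and the main module's copy can be released immediately.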
It should be appreciated that all of the embodiments, features and advantages set forth above for an IO processing method according to the present disclosure apply equally to an IO processing system and storage medium according to the present disclosure without conflict.
In a fourth aspect of the embodiments of the present invention, there is also provided a computer device, comprising a memory 402 and a processor 401 as shown in fig. 4, where the memory 402 stores a computer program which, when executed by the processor 401, implements the method of any one of the embodiments above.
Fig. 4 is a schematic hardware structure diagram of an embodiment of a computer device for performing an IO processing method according to the present invention. Taking the computer device shown in fig. 4 as an example, the computer device includes a processor 401 and a memory 402, and may further include an input device 403 and an output device 404. The processor 401, the memory 402, the input device 403, and the output device 404 may be connected by a bus or in another manner; connection by a bus is taken as an example in fig. 4. The input device 403 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the IO processing system. The output device 404 may include a display device such as a display screen.
The memory 402 is used as a non-volatile computer readable storage medium, and may be used to store a non-volatile software program, a non-volatile computer executable program, and modules, such as program instructions/modules corresponding to the IO processing method in the embodiments of the present application. Memory 402 may include a storage program area that may store an operating system, at least one application program required for functionality, and a storage data area; the storage data area may store data created by use of the IO processing method, and the like. In addition, memory 402 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some embodiments, memory 402 may optionally include memory located remotely from processor 401, which may be connected to the local module via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The processor 401 executes various functional applications of the server and data processing, that is, implements the IO processing method of the above-described method embodiment, by running nonvolatile software programs, instructions, and modules stored in the memory 402.
Finally, it should be noted that the computer-readable storage media (e.g., memory) herein can be either volatile memory or nonvolatile memory, or can include both volatile and nonvolatile memory. By way of example, and not limitation, nonvolatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM), which acts as external cache memory. By way of example, and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), and direct Rambus RAM (DRRAM). The storage devices of the disclosed aspects are intended to comprise, without being limited to, these and other suitable types of memory.
Those of skill in the art would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the disclosure herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as software or hardware depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
The various illustrative logical blocks, modules, and circuits described in connection with the disclosure herein may be implemented or performed with the following components designed to perform the functions herein: a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP and/or any other such configuration.
The foregoing is an exemplary embodiment of the present disclosure, but it should be noted that various changes and modifications could be made herein without departing from the scope of the disclosure as defined by the appended claims. The functions, steps and/or actions of the method claims in accordance with the disclosed embodiments described herein need not be performed in any particular order. Furthermore, although elements of the disclosed embodiments may be described or claimed in the singular, the plural is contemplated unless limitation to the singular is explicitly stated.
It should be understood that as used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that "and/or" as used herein is meant to include any and all possible combinations of one or more of the associated listed items. The serial numbers of the foregoing embodiments of the present invention are for description only and do not represent the relative merits of the embodiments.
Those of ordinary skill in the art will appreciate that the above discussion of any embodiment is merely exemplary and is not intended to imply that the scope of the disclosure of the embodiments of the invention, including the claims, is limited to these examples. Combinations of features of the above embodiments, or of features in different embodiments, are also possible within the spirit of the embodiments of the invention, and many other variations of the different aspects of the embodiments described above exist that are not provided in detail for the sake of brevity. Therefore, any omissions, modifications, equivalent replacements, improvements, and the like made within the spirit and principles of the embodiments should be included in the protection scope of the embodiments of the present invention.
Claims (12)
1. An IO processing method, characterized by comprising the steps of:
deploying a data management module in each of a plurality of processors of a solid state disk, dividing the plurality of processors into a plurality of groups, and selecting a designated processor in each group according to a preset rule;
taking the data management module deployed in the designated processor as a main management module, and taking the data management modules deployed in other processors in the group where the designated processor is located as standby management modules respectively;
in response to the main management module receiving an IO command, calculating the data traffic currently being processed by the main management module, and judging whether the currently processed data traffic exceeds the maximum processing traffic;
in response to the currently processed data traffic exceeding the maximum processing traffic, checking, by the main management module, the data traffic to be processed of each corresponding standby management module, selecting one standby management module as a selected management module based on the respective data traffic to be processed, and forwarding the IO command to the selected management module so that the selected management module processes the IO command.
2. The method of claim 1, wherein selecting the designated processor in each group according to the preset rule comprises:
selecting the processor with the fewest tasks to be processed in each group as the designated processor.
3. The method of claim 2, wherein selecting the processor with the fewest tasks to be processed in each group as the designated processor comprises:
selecting, in each group, a processor on which only the data management module is deployed as the designated processor.
4. The method of claim 1, wherein selecting one of the standby management modules as the selected management module based on the respective data traffic to be processed comprises:
selecting the standby management module with the least data traffic to be processed as the selected management module.
5. The method as recited in claim 1, further comprising:
processing the IO command by the main management module in response to the currently processed data traffic not exceeding the maximum processing traffic.
6. The method of claim 1, wherein dividing the plurality of processors into a plurality of groups comprises:
dividing the plurality of processors equally into the groups.
7. The method of claim 1, wherein the main management module receiving an IO command comprises:
the main management module receives the IO command from the command submission module.
8. The method as recited in claim 1, further comprising:
in response to power-on of the solid state disk, allocating the same context descriptor to each data management module.
9. The method as recited in claim 8, further comprising:
in response to the main management module forwarding the IO command to the selected management module, releasing the context descriptor occupied by the main management module.
10. An IO processing system, comprising:
a designated processor selection module configured to deploy a data management module in each of a plurality of processors of the solid state disk, divide the plurality of processors into a plurality of groups, and select a designated processor in each group according to a preset rule;
a classification module configured to take the data management module deployed in the designated processor as a main management module, and to take the data management modules deployed in the other processors of the group where the designated processor is located as standby management modules;
a judging module configured to, in response to the main management module receiving an IO command, calculate the data traffic currently being processed by the main management module and judge whether the currently processed data traffic exceeds the maximum processing traffic of the main management module; and
an IO processing module configured to, in response to the currently processed data traffic exceeding the maximum processing traffic, check, by the main management module, the data traffic to be processed of each corresponding standby management module, select one standby management module as a selected management module based on the respective data traffic to be processed, and forward the IO command to the selected management module so that the selected management module processes the IO command.
11. A computer readable storage medium, characterized in that computer program instructions are stored thereon, which, when executed by a processor, implement the method of any one of claims 1-9.
12. A computer device comprising a memory and a processor, wherein the memory has stored therein a computer program which, when executed by the processor, performs the method of any of claims 1-9.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111205383.2A CN114138178B (en) | 2021-10-15 | 2021-10-15 | IO processing method and system |
PCT/CN2022/121848 WO2023061215A1 (en) | 2021-10-15 | 2022-09-27 | Io processing method and system, and medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111205383.2A CN114138178B (en) | 2021-10-15 | 2021-10-15 | IO processing method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114138178A CN114138178A (en) | 2022-03-04 |
CN114138178B true CN114138178B (en) | 2023-06-09 |
Family
ID=80394213
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111205383.2A Active CN114138178B (en) | 2021-10-15 | 2021-10-15 | IO processing method and system |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN114138178B (en) |
WO (1) | WO2023061215A1 (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114138178B (en) * | 2021-10-15 | 2023-06-09 | 苏州浪潮智能科技有限公司 | IO processing method and system |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2016182756A1 (en) * | 2015-05-14 | 2016-11-17 | Apeiron Data Systems | Accessing multiple storage devices from multiple hosts without remote direct memory access (rdma) |
CN110995616A (en) * | 2019-12-06 | 2020-04-10 | 苏州浪潮智能科技有限公司 | Management method and device for large-flow server and readable medium |
CN111722797A (en) * | 2020-05-18 | 2020-09-29 | 西安交通大学 | SSD and HA-SMR hybrid storage system oriented data management method, storage medium and device |
CN113254222A (en) * | 2021-07-13 | 2021-08-13 | 苏州浪潮智能科技有限公司 | Task allocation method and system for solid state disk, electronic device and storage medium |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104965678A (en) * | 2015-07-01 | 2015-10-07 | 忆正科技(武汉)有限公司 | Solid-state storage control method and apparatus and solid-state storage device |
CN108021454A (en) * | 2017-12-28 | 2018-05-11 | 努比亚技术有限公司 | A kind of method, terminal and the computer-readable storage medium of processor load equilibrium |
CN108536394A (en) * | 2018-03-31 | 2018-09-14 | 北京联想核芯科技有限公司 | Order distribution method, device, equipment and medium |
CN114138178B (en) * | 2021-10-15 | 2023-06-09 | 苏州浪潮智能科技有限公司 | IO processing method and system |
- 2021-10-15: CN application CN202111205383.2A granted as patent CN114138178B (active)
- 2022-09-27: PCT application PCT/CN2022/121848 published as WO2023061215A1 (status unknown)
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2016182756A1 (en) * | 2015-05-14 | 2016-11-17 | Apeiron Data Systems | Accessing multiple storage devices from multiple hosts without remote direct memory access (rdma) |
CN110995616A (en) * | 2019-12-06 | 2020-04-10 | 苏州浪潮智能科技有限公司 | Management method and device for large-flow server and readable medium |
CN111722797A (en) * | 2020-05-18 | 2020-09-29 | 西安交通大学 | SSD and HA-SMR hybrid storage system oriented data management method, storage medium and device |
CN113254222A (en) * | 2021-07-13 | 2021-08-13 | 苏州浪潮智能科技有限公司 | Task allocation method and system for solid state disk, electronic device and storage medium |
Also Published As
Publication number | Publication date |
---|---|
WO2023061215A1 (en) | 2023-04-20 |
CN114138178A (en) | 2022-03-04 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||