CN113448516A - Data processing method, system, medium and equipment based on RAID card - Google Patents

Data processing method, system, medium and equipment based on RAID card

Info

Publication number
CN113448516A
Authority
CN
China
Prior art keywords
processing
data
raid card
iops
processing cores
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110626707.3A
Other languages
Chinese (zh)
Other versions
CN113448516B (en)
Inventor
邢科钰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Yingxin Computer Technology Co Ltd
Original Assignee
Shandong Yingxin Computer Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Yingxin Computer Technology Co Ltd filed Critical Shandong Yingxin Computer Technology Co Ltd
Priority to CN202110626707.3A priority Critical patent/CN113448516B/en
Publication of CN113448516A publication Critical patent/CN113448516A/en
Application granted granted Critical
Publication of CN113448516B publication Critical patent/CN113448516B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 - Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 - Interfaces specially adapted for storage systems
    • G06F3/0602 - Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/061 - Improving I/O performance
    • G06F3/0613 - Improving I/O performance in relation to throughput
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 - Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 - Interfaces specially adapted for storage systems
    • G06F3/0628 - Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0638 - Organizing or formatting or addressing of data
    • G06F3/064 - Management of blocks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 - Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 - Interfaces specially adapted for storage systems
    • G06F3/0668 - Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/0671 - In-line storage system
    • G06F3/0683 - Plurality of storage devices
    • G06F3/0689 - Disk arrays, e.g. RAID, JBOD
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The invention provides a data processing method, system, medium and device based on a RAID card. The method comprises the following steps: interconnecting the processing cores of the RAID card controller through a preset channel, and monitoring the IOPS (Input/Output Operations Per Second) load of each processing core respectively; in response to the IOPS load distribution of the processing cores satisfying a first preset condition, dynamically allocating tasks among the processing cores through the preset channel; and in response to the IOPS load distribution of the processing cores satisfying a second preset condition, classifying the data to be processed and distributing the classified data to different processing cores for processing. The invention achieves task load balancing across the processing cores through dynamic task allocation; by classifying the data to be processed and distributing it to the corresponding processing cores, data processing becomes smoother, the data processing efficiency and capacity are improved, and the I/O read-write performance and the stability of data processing are enhanced.

Description

Data processing method, system, medium and equipment based on RAID card
Technical Field
The present invention relates to the field of data processing technologies, and in particular, to a data processing method, system, medium, and device based on a RAID card.
Background
RAID cards are increasingly used in critical big-data facilities such as front-end servers and mass storage servers because of their excellent security, stability, and high redundancy. The existing mainstream hardware-based RAID card design can basically meet the performance requirements in terms of physical bandwidth, interface protocol, data transmission, and security mechanism, and can process and store basic data. An existing RAID card controller has a fixed number of processing units, generally 2, and each processing unit manages and controls a fixed number of interface devices. For example, the 9361-16I RAID card is a dual-core controller in which each processing core controls 8 SAS ports. The core task processing mechanism is a partitioned, independent one: each core executes only its own tasks.
The existing RAID card performs poorly in terms of data stability under high I/O (data input/output) load, which mainly shows up as dropped data during large-scale I/O reads and writes, so that the processor cannot complete effective read-write processing. Moreover, the lack of load balancing among the processing cores affects data reading and writing and the performance of the RAID card, mainly manifested as high I/O latency and low data QoS. QoS (Quality of Service) refers to the ability of a network to provide better service for specified network communication by using various underlying technologies; it is a network security mechanism and a technology for addressing network delay and congestion. Therefore, the data processing capability of current RAID cards still needs to be improved.
Disclosure of Invention
In view of the above, an object of the present invention is to provide a data processing method, system, medium and device based on a RAID card, so as to solve the problem of low data processing capability of the RAID card in the prior art.
Based on the above purpose, the present invention provides a data processing method based on a RAID card, including the following steps:
interconnecting the processing cores of the RAID card controller through a preset channel, and monitoring the IOPS (Input/Output Operations Per Second) load of each processing core respectively;
in response to the IOPS load distribution of the processing cores satisfying a first preset condition, dynamically allocating tasks among the processing cores through the preset channel;
and in response to the IOPS load distribution of the processing cores satisfying a second preset condition, classifying the data to be processed and distributing the classified data to different processing cores for processing.
In some embodiments, interconnecting the processing cores of the RAID card controller through a preset channel and monitoring the IOPS load of each processing core respectively includes: interconnecting the two processing cores of the RAID card controller through the preset channel, and monitoring the IOPS load of each processing core respectively.
In some embodiments, in response to the IOPS load distribution of each processing core satisfying a first preset condition, dynamically allocating tasks among the processing cores through the preset channels includes: and in response to the IOPS load capacity of the first processing core exceeding a first preset threshold and the IOPS load capacity of the second processing core being less than a second preset threshold, dynamically distributing tasks to the first processing core and the second processing core through a preset channel so as to balance the task amount processed by each processing core.
In some embodiments, in response to that the IOPS load distribution of each processing core satisfies the second preset condition, classifying the data to be processed and distributing the classified data to different processing cores for processing includes: and in response to the fact that the IOPS load capacity of each processing core exceeds a third preset threshold, classifying the data to be processed according to the size of the data block, and distributing the classified data to different processing cores for processing.
In some embodiments, classifying the data to be processed by data block size includes: and classifying the data to be processed according to the size of the data block through a filter in the firmware of the RAID card.
In some embodiments, the dynamic assignment of tasks includes: and allocating identification numbers to the tasks to be processed, and allocating the tasks to be processed to the corresponding processing cores for processing based on the identification numbers.
In some embodiments, the interconnecting the processing cores of the RAID card controller through the preset channel further includes: and the processing cores of the RAID card controller are interconnected through a UPI channel.
In another aspect of the present invention, a RAID card based data processing system is further provided, including:
the monitoring module is configured to interconnect the processing cores of the RAID card controller through a preset channel and respectively monitor the IOPS load capacity of each processing core;
the task dynamic allocation module is configured to respond that the IOPS load distribution of each processing core meets a first preset condition, and dynamically allocate the tasks among the processing cores through a preset channel; and
and the data classification processing module is configured to classify the data to be processed and distribute the classified data to different processing cores for processing in response to that the IOPS load distribution of each processing core meets a second preset condition.
In yet another aspect of the present invention, there is also provided a computer readable storage medium storing computer program instructions which, when executed, implement any one of the methods described above.
In yet another aspect of the present invention, a computer device is provided, which includes a memory and a processor, the memory storing a computer program, the computer program executing any one of the above methods when executed by the processor.
The invention has at least the following beneficial technical effects:
1. the invention provides an interconnection channel between the processing cores of the RAID card controller, which facilitates the dynamic allocation of tasks among the processing cores, achieves task load balancing across the cores, relieves heavily loaded cores of processing tasks, and helps keep data processing continuous and stable;
2. by classifying the data to be processed and distributing the classified data to the corresponding processing cores for processing, each processing core handles a specific kind of data; compared with processing scattered data, data processing is smoother, the bandwidth can be utilized more efficiently, and the data processing efficiency can be improved;
3. the invention improves the data processing capability of the processing core of the RAID card and improves the I/O read-write performance and the stability of data processing.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is apparent that the drawings in the following description show only some embodiments of the present invention, and that those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic diagram of a RAID card-based data processing method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a RAID card based data processing system provided in accordance with an embodiment of the present invention;
FIG. 3 is a schematic diagram of a computer-readable storage medium for implementing a RAID card based data processing method according to an embodiment of the present invention;
fig. 4 is a schematic hardware configuration diagram of a computer device for executing a data processing method based on a RAID card according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the following embodiments of the present invention are described in further detail with reference to the accompanying drawings.
It should be noted that all expressions using "first" and "second" in the embodiments of the present invention are used to distinguish two entities or parameters that share the same name but are not identical; "first" and "second" are used only for convenience of expression and should not be construed as limiting the embodiments of the present invention. Furthermore, the terms "comprises" and "comprising," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to those steps or elements but may include other steps or elements not expressly listed or inherent to such process, method, system, article, or apparatus.
In view of the above objects, a first aspect of the embodiments of the present invention provides an embodiment of a data processing method based on a RAID card. Fig. 1 is a schematic diagram illustrating an embodiment of a RAID card-based data processing method according to the present invention. As shown in fig. 1, the embodiment of the present invention includes the following steps:
step S10, interconnecting the processing cores of the RAID card controller through a preset channel, and monitoring the IOPS load of each processing core respectively;
step S20, in response to the IOPS load distribution of the processing cores satisfying a first preset condition, dynamically allocating tasks among the processing cores through the preset channel;
and step S30, in response to the IOPS load distribution of the processing cores satisfying a second preset condition, classifying the data to be processed and distributing the classified data to different processing cores for processing.
RAID (Redundant Array of Independent Disks) refers to a disk array in which a number of independent disks are combined into a single large-capacity disk group, and the performance of the entire disk system is improved by the cumulative effect of the data supplied by the individual disks. With this technique, data is divided into multiple segments, each of which is stored on a different hard disk. By placing data on multiple hard disks, input and output operations can overlap in a balanced manner, improving performance.
IOPS (Input/Output Operations Per Second), i.e. the number of input/output operations (reads or writes) per second, is one of the main indicators for measuring disk performance. IOPS is the number of I/O requests that the system can process per unit time, generally expressed as I/O requests per second, where an I/O request is typically a data read or write operation. Applications with frequent random reads and writes, such as small-file storage (pictures), OLTP databases, and mail servers, care about random read-write performance, so IOPS is their key metric. Applications with frequent sequential reads and writes that transfer large amounts of continuous data, such as television-station video editing and Video On Demand (VOD), care about sequential read-write performance, so data throughput is their key metric.
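As an illustrative calculation (the figures are not taken from the original text), the two metrics are linked through the block size: throughput ≈ IOPS × block size. At a fixed bandwidth of 1 GiB/s, 4 KiB requests correspond to roughly 262,144 IOPS, while 1 MiB requests correspond to roughly 1,024 IOPS, which is why small-block workloads are bounded by IOPS and large-block workloads by throughput.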
IOPS load: the IOPS of each processing core of the RAID card controller has an upper limit. The core task load percentage, i.e. the current working load, is obtained as the ratio of the IOPS consumed by the core's current tasks to the total IOPS the core can process.
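A minimal sketch of the load-percentage computation defined above is given below; the structure and field names (current_iops, max_iops) are assumptions used only for illustration, not identifiers from any RAID card firmware.

```c
#include <stdint.h>

/* Hypothetical per-core counters; a real controller firmware would expose
 * its own equivalents of these fields. */
struct core_stats {
    uint32_t current_iops;  /* IOPS generated by the tasks currently on this core */
    uint32_t max_iops;      /* upper limit of IOPS this core can process */
};

/* Core task load percentage = current task IOPS / total IOPS capacity. */
static uint32_t core_load_percent(const struct core_stats *c)
{
    if (c->max_iops == 0)
        return 0;
    return (uint32_t)(((uint64_t)c->current_iops * 100u) / c->max_iops);
}
```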
The embodiment of the invention provides an interconnection channel between the processing cores of the RAID card controller, which facilitates the dynamic allocation of tasks among the processing cores, achieves task load balancing across the cores, relieves heavily loaded cores of processing tasks, and helps keep data processing continuous and stable. By classifying the data to be processed and distributing the classified data to the corresponding processing cores, each processing core handles a specific kind of data; compared with processing scattered data, data processing is smoother, the bandwidth can be utilized more efficiently, and the data processing efficiency can be improved. The invention improves the data processing capability of the processing cores of the RAID card and improves the I/O read-write performance and the stability of data processing.
In some embodiments, interconnecting the processing cores of the RAID card controller through a preset channel and monitoring the IOPS load of each processing core respectively includes: interconnecting the two processing cores of the RAID card controller through the preset channel, and monitoring the IOPS load of each processing core respectively.
In some embodiments, in response to the IOPS load distribution of each processing core satisfying a first preset condition, dynamically allocating tasks among the processing cores through the preset channels includes: and in response to the IOPS load capacity of the first processing core exceeding a first preset threshold and the IOPS load capacity of the second processing core being less than a second preset threshold, dynamically distributing tasks to the first processing core and the second processing core through a preset channel so as to balance the task amount processed by each processing core.
In this embodiment, when the RAID card controller has dual processing cores, the first preset threshold may be set to 70% and the second preset threshold to 50%; that is, when the IOPS load of the first processing core exceeds 70% and the IOPS load of the second processing core is below 50%, tasks are dynamically allocated to the first and second processing cores through the preset channel so that the task amounts processed by the cores are balanced, avoiding the I/O processing performance degradation caused by one processing core running at high load. The first and second preset thresholds are not limited to these values and may be set according to the actual situation, provided that the first preset threshold is greater than the second preset threshold.
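The following sketch illustrates the rebalancing decision under the example thresholds above; rebalance_over_channel is a hypothetical placeholder for the firmware routine that would migrate queued tasks over the inter-core channel.

```c
#include <stdint.h>

/* Example thresholds from the text; both are configurable, with the first
 * required to be greater than the second. */
#define FIRST_THRESHOLD_PCT   70u
#define SECOND_THRESHOLD_PCT  50u

/* Hypothetical hook: migrate queued tasks from one core to the other over
 * the preset inter-core channel. */
static void rebalance_over_channel(int from_core, int to_core)
{
    (void)from_core;
    (void)to_core;
}

/* Decide whether to rebalance, given the two cores' current load percentages. */
void maybe_rebalance(uint32_t load0_pct, uint32_t load1_pct)
{
    if (load0_pct > FIRST_THRESHOLD_PCT && load1_pct < SECOND_THRESHOLD_PCT)
        rebalance_over_channel(0, 1);   /* shift work from core 0 to core 1 */
    else if (load1_pct > FIRST_THRESHOLD_PCT && load0_pct < SECOND_THRESHOLD_PCT)
        rebalance_over_channel(1, 0);   /* shift work from core 1 to core 0 */
}
```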
In some embodiments, in response to that the IOPS load distribution of each processing core satisfies the second preset condition, classifying the data to be processed and distributing the classified data to different processing cores for processing includes: and in response to the fact that the IOPS load capacity of each processing core exceeds a third preset threshold, classifying the data to be processed according to the size of the data block, and distributing the classified data to different processing cores for processing.
In some embodiments, classifying the data to be processed by data block size includes: and classifying the data to be processed according to the size of the data block through a filter in the firmware of the RAID card.
In this embodiment, the third preset threshold may also be 70%. When the IOPS load of every processing core exceeds 70%, the data to be processed is classified according to data block size by a filter in the firmware of the RAID card, and the classified data is distributed to the corresponding processing cores for processing. If the RAID card controller has dual processing cores, the data to be processed may be classified into small data blocks and large data blocks; for example, the small-block range includes 4K, 8K, 16K and 32K, and the large-block range includes 64K, 128K, 256K, 512K and 1024K. The small data blocks may then be processed by the first processing core and the large data blocks by the second processing core. Classifying large and small data blocks dedicates each core to a specific class of data; the advantage is that, for a given data bandwidth and processing workload, classifying relatively scattered data blocks before processing them uses the bandwidth more efficiently, processes the data faster, and reduces latency, which ultimately improves the overall stability of data I/O and avoids severe drops in performance during data processing. The third preset threshold is likewise not limited to this value and may be set according to the actual situation.
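A sketch of the block-size filter under the example ranges above follows; dispatch_to_core is a hypothetical placeholder, and the 32 KiB cut-off merely reflects the small-block/large-block split used in the example.

```c
#include <stddef.h>

/* Blocks of 32 KiB or less are treated as small (4K..32K in the example);
 * anything larger (64K..1024K) is treated as large. */
#define SMALL_BLOCK_MAX_BYTES  (32u * 1024u)

/* Hypothetical hook: hand a block of data to the chosen processing core. */
static void dispatch_to_core(int core, const void *buf, size_t len)
{
    (void)core; (void)buf; (void)len;
}

/* Classify one block by size and send it to the dedicated core. */
void classify_and_dispatch(const void *buf, size_t block_bytes)
{
    if (block_bytes <= SMALL_BLOCK_MAX_BYTES)
        dispatch_to_core(0, buf, block_bytes);  /* small blocks -> first core */
    else
        dispatch_to_core(1, buf, block_bytes);  /* large blocks -> second core */
}
```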
In some embodiments, the dynamic assignment of tasks includes: and allocating identification numbers to the tasks to be processed, and allocating the tasks to be processed to the corresponding processing cores for processing based on the identification numbers.
In this embodiment, each task is assigned a unique identifier in an OS (operating system) kernel before being executed, so that the processor can perform task assignment and task execution.
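A minimal sketch of identification-number based dispatch is shown below; the pending_task structure and assign_task are assumed names used only to illustrate tagging each task with a unique identifier before it is routed to a core.

```c
#include <stdint.h>

/* Hypothetical task descriptor: an id assigned before execution and the
 * processing core the task has been allocated to. */
struct pending_task {
    uint64_t id;    /* unique identification number */
    int      core;  /* target processing core */
};

static uint64_t next_task_id;   /* monotonically increasing id source */

/* Tag the task with a unique id and record the core it is allocated to. */
void assign_task(struct pending_task *t, int target_core)
{
    t->id   = ++next_task_id;
    t->core = target_core;
}
```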
In some embodiments, the interconnecting the processing cores of the RAID card controller through the preset channel further includes: and the processing cores of the RAID card controller are interconnected through a UPI channel.
In this embodiment, UPI (Ultra Path Interconnect) denotes a high-speed point-to-point link. The UPI bus enables direct interconnection between chips and has the characteristics of high bandwidth and high transfer rate, so using UPI for the interconnection between processing cores supports dynamic task allocation.
In other examples, the continuous and effective transmission, reading, and writing of the processed data also depends on the cache capacity of the RAID card, the SAS bandwidth (the channel connecting the RAID card and the disks), the PCIe bandwidth (the interconnection channel between the CPU and the RAID card), and the disk bandwidth (the data channel of each physical disk).
In a second aspect of the embodiments of the present invention, a data processing system based on a RAID card is also provided. FIG. 2 is a schematic diagram illustrating an embodiment of a RAID card based data processing system provided by the present invention. As shown in fig. 2, a RAID card based data processing system includes: the monitoring module 10 is configured to interconnect the processing cores of the RAID card controller through a preset channel, and monitor IOPS load amounts of the processing cores respectively; the task dynamic allocation module 20 is configured to respond that the IOPS load distribution of each processing core meets a first preset condition, and perform dynamic allocation of tasks among each processing core through a preset channel; and the data classification processing module 30 is configured to classify the data to be processed and distribute the classified data to different processing cores for processing in response to that the IOPS load distribution of each processing core meets a second preset condition.
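As a non-authoritative sketch of the three-module decomposition described above, the function-pointer types and names below are assumptions used only to show how the monitoring, task dynamic allocation, and data classification processing modules could be bound together.

```c
#include <stddef.h>
#include <stdint.h>

typedef uint32_t (*monitor_fn)(int core);                      /* returns a core's IOPS load percentage */
typedef void     (*allocate_fn)(uint32_t l0, uint32_t l1);     /* dynamic task allocation between cores */
typedef void     (*classify_fn)(const void *buf, size_t len);  /* classify a block and dispatch it */

/* One possible binding of the three modules into a single system object. */
struct raid_dp_system {
    monitor_fn  monitor;    /* monitoring module 10 */
    allocate_fn allocate;   /* task dynamic allocation module 20 */
    classify_fn classify;   /* data classification processing module 30 */
};
```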
The data processing system based on a RAID card provided by the embodiment of the invention provides an interconnection channel between the processing cores of the RAID card controller, which facilitates the dynamic allocation of tasks among the processing cores, achieves task load balancing across the cores, relieves heavily loaded cores of processing tasks, and helps keep data processing continuous and stable. By classifying the data to be processed and distributing the classified data to the corresponding processing cores, each processing core handles a specific kind of data; compared with processing scattered data, data processing is smoother, the bandwidth can be utilized more efficiently, and the data processing efficiency can be improved. The system of the invention improves the data processing capability of the processing cores of the RAID card and improves the I/O read-write performance and the stability of data processing.
In some embodiments, the monitoring module 10 is further configured to interconnect two processing cores of the RAID card controller through a preset channel, and monitor IOPS load of each processing core respectively.
In some embodiments, the task dynamic allocation module 20 is further configured to, in response to the IOPS load of the first processing core exceeding a first preset threshold and the IOPS load of the second processing core being less than a second preset threshold, dynamically allocate tasks to the first processing core and the second processing core through the preset channel so as to balance the task amount processed by each processing core.
In this embodiment, when the RAID card controller is a dual processing core, the first preset threshold may be set to 70% and the second preset threshold may be set to 50%, that is, when the IOPS load of the first processing core is greater than 70% and the IOPS load of the second processing core is less than 50%, the tasks are dynamically allocated to the first processing core and the second processing core through the preset channel, so that the task amounts processed by the processing cores are balanced, and the problem of I/O processing performance degradation caused by a high load of a certain processing core is solved.
In some embodiments, the data classification processing module 30 is further configured to, in response to that the IOPS load of each processing core exceeds a third preset threshold, classify the data to be processed according to the size of the data block, and distribute the classified data to different processing cores for processing.
In some embodiments, the data sort processing module 30 includes a sorting module configured to sort the data to be processed by data block size through a filter in the firmware of the RAID card.
In this embodiment, the third preset threshold may also be 70%, and when the IOPS load of each processing core exceeds 70%, the data to be processed is classified according to the size of the data block by using a filter in the firmware of the RAID card, and the classified data is distributed to the corresponding processing core for processing.
In some embodiments, the task dynamic allocation module 20 includes an allocation module configured to allocate an identification number to the task to be processed, and allocate the task to be processed to the corresponding processing core for processing based on the identification number.
In some embodiments, the monitor module 10 includes an interconnect module configured to interconnect the processing cores of the RAID card controller via UPI channels.
In a third aspect of the embodiment of the present invention, a computer-readable storage medium is further provided, and fig. 3 is a schematic diagram of a computer-readable storage medium for implementing a RAID card-based data processing method according to an embodiment of the present invention. As shown in fig. 3, the computer-readable storage medium 3 stores computer program instructions 31, the computer program instructions 31 when executed implement the steps of:
interconnecting the processing cores of the RAID card controller through a preset channel, and monitoring the IOPS (Input/Output Operations Per Second) load of each processing core respectively;
in response to the IOPS load distribution of the processing cores satisfying a first preset condition, dynamically allocating tasks among the processing cores through the preset channel;
and in response to the IOPS load distribution of the processing cores satisfying a second preset condition, classifying the data to be processed and distributing the classified data to different processing cores for processing.
In some embodiments, interconnecting the processing cores of the RAID card controller through a preset channel and monitoring the IOPS load of each processing core respectively includes: interconnecting the two processing cores of the RAID card controller through the preset channel, and monitoring the IOPS load of each processing core respectively.
In some embodiments, in response to the IOPS load distribution of each processing core satisfying a first preset condition, dynamically allocating tasks among the processing cores through the preset channels includes: and in response to the IOPS load capacity of the first processing core exceeding a first preset threshold and the IOPS load capacity of the second processing core being less than a second preset threshold, dynamically distributing tasks to the first processing core and the second processing core through a preset channel so as to balance the task amount processed by each processing core.
In some embodiments, in response to that the IOPS load distribution of each processing core satisfies the second preset condition, classifying the data to be processed and distributing the classified data to different processing cores for processing includes: and in response to the fact that the IOPS load capacity of each processing core exceeds a third preset threshold, classifying the data to be processed according to the size of the data block, and distributing the classified data to different processing cores for processing.
In some embodiments, classifying the data to be processed by data block size includes: and classifying the data to be processed according to the size of the data block through a filter in the firmware of the RAID card.
In some embodiments, the dynamic assignment of tasks includes: and allocating identification numbers to the tasks to be processed, and allocating the tasks to be processed to the corresponding processing cores for processing based on the identification numbers.
In some embodiments, the interconnecting the processing cores of the RAID card controller through the preset channel further includes: and the processing cores of the RAID card controller are interconnected through a UPI channel.
It should be understood that all of the embodiments, features and advantages set forth above with respect to the RAID card based data processing method according to the present invention are equally applicable to the RAID card based data processing system and the storage medium according to the present invention without conflict therebetween.
In a fourth aspect of the embodiments of the present invention, there is further provided a computer device, including a memory 402 and a processor 401, where the memory stores a computer program, and the computer program, when executed by the processor, implements the method of any one of the above embodiments.
Fig. 4 is a schematic hardware configuration diagram of an embodiment of a computer device for executing a data processing method based on a RAID card according to the present invention. Taking the computer device shown in fig. 4 as an example, the computer device includes a processor 401 and a memory 402, and may further include: an input device 403 and an output device 404. The processor 401, the memory 402, the input device 403 and the output device 404 may be connected by a bus or other means, and fig. 4 illustrates an example of a connection by a bus. The input device 403 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the RAID card-based data processing system. The output device 404 may include a display device such as a display screen.
The memory 402, which is a non-volatile computer-readable storage medium, may be used to store non-volatile software programs, non-volatile computer-executable programs, and modules, such as program instructions/modules corresponding to the RAID card-based data processing method in the embodiments of the present application. The memory 402 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created by use of a data processing method based on a RAID card, and the like. Further, the memory 402 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some embodiments, memory 402 may optionally include memory located remotely from processor 401, which may be connected to local modules via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The processor 401 executes various functional applications of the server and data processing by running nonvolatile software programs, instructions and modules stored in the memory 402, that is, implements the RAID card-based data processing method of the above-described method embodiment.
Finally, it should be noted that the computer-readable storage medium (e.g., memory) herein can be either volatile memory or nonvolatile memory, or can include both volatile and nonvolatile memory. By way of example, and not limitation, nonvolatile memory can include read-only memory (ROM), programmable ROM (PROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM), which can act as external cache memory. By way of example and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), and direct Rambus RAM (DRRAM). The storage devices of the disclosed aspects are intended to comprise, without being limited to, these and other suitable types of memory.
Those of skill would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the disclosure herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as software or hardware depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the disclosed embodiments of the present invention.
The various illustrative logical blocks, modules, and circuits described in connection with the disclosure herein may be implemented or performed with the following components designed to perform the functions herein: a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination of these components. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP, and/or any other such configuration.
The foregoing is an exemplary embodiment of the present disclosure, but it should be noted that various changes and modifications could be made herein without departing from the scope of the present disclosure as defined by the appended claims. The functions, steps and/or actions of the method claims in accordance with the disclosed embodiments described herein need not be performed in any particular order. Furthermore, although elements of the disclosed embodiments of the invention may be described or claimed in the singular, the plural is contemplated unless limitation to the singular is explicitly stated.
It should be understood that, as used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly supports the exception. It should also be understood that "and/or" as used herein is meant to include any and all possible combinations of one or more of the associated listed items. The numbers of the embodiments disclosed in the embodiments of the present invention are merely for description, and do not represent the merits of the embodiments.
Those of ordinary skill in the art will understand that: the discussion of any embodiment above is meant to be exemplary only, and is not intended to imply that the scope of the disclosure of the embodiments of the invention, including the claims, is limited to these examples; within the spirit of the embodiments of the invention, the technical features of the above embodiment or of different embodiments may also be combined, and there are many other variations of the different aspects of the embodiments of the invention as described above which are not provided in detail for the sake of brevity. Therefore, any omissions, modifications, substitutions, improvements, and the like that may be made without departing from the spirit and principles of the embodiments of the present invention are intended to be included within the scope of the embodiments of the present invention.

Claims (10)

1. A data processing method based on a RAID card is characterized by comprising the following steps:
interconnecting all processing cores of the RAID card controller through a preset channel, and respectively monitoring the IOPS (Input/Output Operations Per Second) load of each processing core;
in response to the IOPS load distribution of each processing core meeting a first preset condition, dynamically allocating tasks among the processing cores through the preset channel;
and in response to the IOPS load distribution of each processing core meeting a second preset condition, classifying the data to be processed and respectively distributing the classified data to different processing cores for processing.
2. The method of claim 1, wherein interconnecting the processing cores of the RAID card controller through a preset channel, and monitoring IOPS loads of the processing cores respectively comprises:
two processing cores of the RAID card controller are interconnected through a preset channel, and IOPS load capacity of each processing core is monitored respectively.
3. The method of claim 2, wherein in response to the IOPS load distribution of each processing core satisfying a first preset condition, the dynamically allocating tasks among the processing cores through the preset channel comprises:
and in response to the fact that the IOPS load of the first processing core exceeds a first preset threshold and the IOPS load of the second processing core is less than a second preset threshold, dynamically distributing tasks to the first processing core and the second processing core through the preset channel so as to balance the task amount processed by each processing core.
4. The method according to claim 1 or 2, wherein in response to that the IOPS load distribution of each processing core satisfies a second preset condition, classifying the data to be processed and distributing the classified data to different processing cores for processing comprises:
and in response to the fact that the IOPS load capacity of each processing core exceeds a third preset threshold, classifying the data to be processed according to the size of the data block, and distributing the classified data to different processing cores for processing.
5. The method of claim 4, wherein classifying the data to be processed by data block size comprises:
and classifying the data to be processed according to the size of the data block through a filter in the firmware of the RAID card.
6. The method of claim 1, wherein the dynamic assignment of tasks comprises:
and allocating identification numbers to the tasks to be processed, and allocating the tasks to be processed to the corresponding processing cores for processing based on the identification numbers.
7. The method of claim 1, wherein interconnecting the processing cores of the RAID card controller via a predetermined channel further comprises:
and the processing cores of the RAID card controller are interconnected through a UPI channel.
8. A RAID card based data processing system comprising:
the monitoring module is configured to interconnect the processing cores of the RAID card controller through a preset channel and respectively monitor the IOPS load capacity of each processing core;
the task dynamic allocation module is configured to respond that the IOPS load distribution of each processing core meets a first preset condition, and dynamically allocate the tasks among the processing cores through the preset channel; and
and the data classification processing module is configured to classify the data to be processed and distribute the classified data to different processing cores for processing in response to that the IOPS load distribution of each processing core meets a second preset condition.
9. A computer-readable storage medium, characterized in that computer program instructions are stored which, when executed, implement the method according to any one of claims 1-7.
10. A computer device comprising a memory and a processor, characterized in that the memory has stored therein a computer program which, when executed by the processor, performs the method according to any one of claims 1-7.
CN202110626707.3A 2021-06-04 2021-06-04 Data processing method, system, medium and equipment based on RAID card Active CN113448516B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110626707.3A CN113448516B (en) 2021-06-04 2021-06-04 Data processing method, system, medium and equipment based on RAID card

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110626707.3A CN113448516B (en) 2021-06-04 2021-06-04 Data processing method, system, medium and equipment based on RAID card

Publications (2)

Publication Number Publication Date
CN113448516A true CN113448516A (en) 2021-09-28
CN113448516B CN113448516B (en) 2023-07-21

Family

ID=77810834

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110626707.3A Active CN113448516B (en) 2021-06-04 2021-06-04 Data processing method, system, medium and equipment based on RAID card

Country Status (1)

Country Link
CN (1) CN113448516B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114281531A (en) * 2021-12-10 2022-04-05 苏州浪潮智能科技有限公司 Method, system, storage medium and equipment for distributing CPU core

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102184093A (en) * 2011-06-14 2011-09-14 复旦大学 Parallel computing array framework constructed by multi CELL processors
CN105528330A (en) * 2014-09-30 2016-04-27 杭州华为数字技术有限公司 Load balancing method and device, cluster and many-core processor
CN106980533A (en) * 2016-01-18 2017-07-25 杭州海康威视数字技术股份有限公司 Method for scheduling task, device and electronic equipment based on heterogeneous processor
CN111984407A (en) * 2020-08-07 2020-11-24 苏州浪潮智能科技有限公司 Data block read-write performance optimization method, system, terminal and storage medium

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102184093A (en) * 2011-06-14 2011-09-14 复旦大学 Parallel computing array framework constructed by multi CELL processors
CN105528330A (en) * 2014-09-30 2016-04-27 杭州华为数字技术有限公司 Load balancing method and device, cluster and many-core processor
CN106980533A (en) * 2016-01-18 2017-07-25 杭州海康威视数字技术股份有限公司 Method for scheduling task, device and electronic equipment based on heterogeneous processor
CN111984407A (en) * 2020-08-07 2020-11-24 苏州浪潮智能科技有限公司 Data block read-write performance optimization method, system, terminal and storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114281531A (en) * 2021-12-10 2022-04-05 苏州浪潮智能科技有限公司 Method, system, storage medium and equipment for distributing CPU core
CN114281531B (en) * 2021-12-10 2023-11-03 苏州浪潮智能科技有限公司 Method, system, storage medium and equipment for distributing CPU cores

Also Published As

Publication number Publication date
CN113448516B (en) 2023-07-21

Similar Documents

Publication Publication Date Title
CN104750557B (en) A kind of EMS memory management process and memory management device
DE102019122363A1 (en) PROGRAMMABLE DOUBLE-ROW WORK MEMORY MODULE ACCELERATOR CARD (DIMM ACCELERATOR CARD)
US20150127880A1 (en) Efficient implementations for mapreduce systems
CN103703450A (en) Method and apparatus for SSD storage access
CN111159436A (en) Method and device for recommending multimedia content and computing equipment
CN109981702B (en) File storage method and system
CN115033184A (en) Memory access processing device and method, processor, chip, board card and electronic equipment
US20190065404A1 (en) Adaptive caching in a storage device
DE112021004473T5 (en) STORAGE-LEVEL LOAD BALANCING
CN114595043A (en) IO (input/output) scheduling method and device
CN113448516B (en) Data processing method, system, medium and equipment based on RAID card
CN114138178B (en) IO processing method and system
CN113342265A (en) Cache management method and device, processor and computer device
CN111338579A (en) Read-write cache optimization method, system, terminal and storage medium based on storage pool
CN110032469A (en) Data processing system and its operating method
CN111078514A (en) GPU storage system verification method
CN110990148A (en) Method, device and medium for optimizing storage performance
CN116560560A (en) Method for storing data and related device
Gu et al. Dynamic file cache optimization for hybrid SSDs with high-density and low-cost flash memory
US20180267714A1 (en) Managing data in a storage array
CN114968073A (en) Data prefetching method, equipment and system
CN111142808A (en) Access device and access method
CN107273188B (en) Virtual machine Central Processing Unit (CPU) binding method and device
CN117724833B (en) PCIe tag cache self-adaptive resource allocation method and device based on stream attribute
CN104572903A (en) Data input control method for Hbase database

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant