CN116661708A - Processing method of read-write task based on hard disk array and electronic equipment - Google Patents

Processing method of read-write task based on hard disk array and electronic equipment

Info

Publication number
CN116661708A
Authority
CN
China
Prior art keywords
hard disk
data
accelerated
sub
determining
Prior art date
Legal status
Granted
Application number
CN202310951712.0A
Other languages
Chinese (zh)
Other versions
CN116661708B (en)
Inventor
Zhu Hongyu (朱红玉)
Current Assignee
Suzhou Inspur Intelligent Technology Co Ltd
Original Assignee
Suzhou Inspur Intelligent Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Suzhou Inspur Intelligent Technology Co Ltd filed Critical Suzhou Inspur Intelligent Technology Co Ltd
Priority to CN202310951712.0A priority Critical patent/CN116661708B/en
Publication of CN116661708A publication Critical patent/CN116661708A/en
Application granted granted Critical
Publication of CN116661708B publication Critical patent/CN116661708B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 - Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 - Interfaces specially adapted for storage systems
    • G06F 3/0668 - Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F 3/0671 - In-line storage system
    • G06F 3/0683 - Plurality of storage devices
    • G06F 3/0689 - Disk arrays, e.g. RAID, JBOD
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 - Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 - Interfaces specially adapted for storage systems
    • G06F 3/0602 - Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F 3/0614 - Improving the reliability of storage systems
    • G06F 3/0619 - Improving the reliability of storage systems in relation to data integrity, e.g. data losses, bit errors
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Security & Cryptography (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Digital Magnetic Recording (AREA)
  • Signal Processing For Digital Recording And Reproducing (AREA)

Abstract

The embodiment of the application provides a processing method of a read-write task based on a hard disk array and electronic equipment, wherein the method comprises the following steps: determining, from a plurality of sub hard disks contained in a current hard disk array, a sub hard disk to be accelerated whose data read-write time delay is smaller than a preset value; determining a hard disk acceleration array for accelerating the sub hard disk to be accelerated, wherein the hard disk acceleration array is generated according to the sub hard disk to be accelerated and a cooperative hard disk corresponding to the sub hard disk to be accelerated; and processing the target read-write task of the sub hard disk to be accelerated by using the hard disk acceleration array. The application solves the technical problem of how to improve the processing performance of the read-write tasks of a hard disk array that contains a slow-responding hard disk.

Description

Processing method of read-write task based on hard disk array and electronic equipment
Technical Field
The embodiment of the application relates to the technical field of storage, in particular to a processing method of a read-write task based on a hard disk array and electronic equipment.
Background
At present, an existing redundant array of independent disks (RAID) is a hard disk array formed by combining a plurality of independent hard disks. However, when the hard disk array carries a user's read-write traffic, it often happens that one hard disk responds slowly, which degrades the user experience. For such a slow-responding hard disk, the related art can only temporarily remove it and, after the hard disk recovers, restore the new data to it through reconstruction. However, this approach has obvious limitations: a large amount of data needs to be read during reconstruction, which seriously affects the performance of processing the read-write tasks of the hard disk array.
Accordingly, for a hard disk array containing a slow-responding hard disk, the related art faces the technical problem of how to improve the processing performance of the read-write tasks of the hard disk array, and no effective solution has been proposed yet.
Disclosure of Invention
The embodiment of the application provides a processing method and electronic equipment for a read-write task based on a hard disk array, which at least solve the technical problem of how to improve the processing performance of the read-write tasks of a hard disk array that contains a slow-responding hard disk.
According to one embodiment of the present application, there is provided a method for processing a read-write task based on a hard disk array, including: determining a sub hard disk to be accelerated, of which the data read-write time delay is smaller than a preset value, from a plurality of sub hard disks contained in a current hard disk array; determining a hard disk acceleration array for accelerating the sub hard disk to be accelerated, wherein the hard disk acceleration array is generated according to the sub hard disk to be accelerated and a cooperative hard disk corresponding to the sub hard disk to be accelerated; and processing the target read-write task of the sub hard disk to be accelerated by using the hard disk acceleration array.
In an exemplary embodiment, determining, from a plurality of sub-hard disks included in a current hard disk array, a sub-hard disk to be accelerated with a data read-write delay less than a preset value includes: acquiring data read-write time delay of the plurality of sub-hard disks when executing read-write tasks; determining a first sub hard disk with data read-write time delay larger than a first preset value from the plurality of sub hard disks; the first preset value represents the minimum data read-write time delay of the hard disk; determining a second sub hard disk with the maximum data read-write time delay from the plurality of first sub hard disks; and if the maximum data read-write time delay is smaller than a second preset value, determining the second sub hard disk as the sub hard disk to be accelerated, wherein the second preset value represents the historical data read-write time delay when the hard disk fails.
In an exemplary embodiment, before determining the hard disk acceleration array for accelerating the child hard disk to be accelerated, the collaborative hard disk is determined by one of the following methods: determining a hard disk block unit of each sub hard disk in the plurality of sub hard disks, wherein the hard disk block of each sub hard disk at least comprises a data block unit, a check block unit and a redundancy block unit; determining the sum of the spaces of the redundant block units of the plurality of sub-hard disks as the redundant hard disk space of the current hard disk array, and determining the cooperative hard disk according to the redundant hard disk space; determining a peripheral hard disk with the same hard disk type as the child hard disk to be accelerated, and determining the peripheral hard disk as the collaborative hard disk; wherein the peripheral hard disk represents a hard disk other than the plurality of sub hard disks.
In an exemplary embodiment, using the hard disk acceleration array to process the target read-write task of the sub-hard disk to be accelerated includes: determining the data read-write time delay of the sub hard disk to be accelerated and the data read-write time delay of the cooperative hard disk; determining the hard disk with smaller data read-write time delay in the sub hard disk to be accelerated and the collaborative hard disk as a data write hard disk; executing the target read-write task by using the data writing hard disk, and storing first data written by the target read-write task to the data writing hard disk; and determining a storage address of the first data in the data writing hard disk, and storing the storage address to the sub hard disk to be accelerated.
In an exemplary embodiment, using the hard disk acceleration array to process the target read-write task of the sub-hard disk to be accelerated includes: executing the target read-write task by using the sub hard disk to be accelerated; storing the second data written by the target read-write task to the sub hard disk to be accelerated; the second data written by the target read-write task correspondingly has a first storage address in the sub hard disk to be accelerated; and generating a storage record of the second data according to the first storage address.
In one exemplary embodiment, generating a storage record of the second data from the first storage address includes: acquiring a hard disk number of the child hard disk to be accelerated in the current hard disk array; binding the hard disk number and the first storage address, and determining the binding result as a storage record of the second data; and storing the storage record into a storage space of the hard disk acceleration array.
In one exemplary embodiment, storing the storage record to a storage space of the hard disk acceleration array includes: performing persistence operation on the storage record to obtain persistence data; determining storage spaces set for a plurality of controllers under the condition that the hard disk acceleration array corresponds to the plurality of controllers, wherein the storage spaces comprise a main storage space and a backup storage space; and respectively storing the persistent data into the main storage space and the backup storage space.
In an exemplary embodiment, after storing the second data written by the target read-write task to the child hard disk to be accelerated, the method further includes: acquiring a first transfer instruction for indicating to transfer the data of the sub hard disk to be accelerated to the collaborative hard disk under the condition that the monitored data read-write time delay of the sub hard disk to be accelerated is larger than a third preset value; the third preset value represents the average value of data read-write time delays of other sub-hard disks except the sub-hard disk to be accelerated in the plurality of sub-hard disks; and transferring the data of the child hard disk to be accelerated according to the first transfer instruction.
In an exemplary embodiment, obtaining a first transfer instruction for instructing to transfer data of the child hard disk to be accelerated to the cooperative hard disk includes: sending prompt information for prompting the data of the child hard disk to be accelerated to be transferred to the collaborative hard disk to a target object; if the response information sent by the target object is received, and the response information contains a second transfer instruction for indicating to transfer the data of the sub hard disk to be accelerated to the collaborative hard disk, transferring the data of the sub hard disk to be accelerated according to the first transfer instruction includes: and transferring the data of the child hard disk to be accelerated according to the second transfer instruction.
In an exemplary embodiment, obtaining a first transfer instruction for instructing to transfer data of the child hard disk to be accelerated to the cooperative hard disk includes: determining a data transmission channel of the sub hard disk to be accelerated for transmitting the second data; if the response time of the sub hard disk to be accelerated is determined to be larger than a fifth preset value under the condition that the data transmission rate of the data transmission channel is monitored to be smaller than the fourth preset value, generating a third transfer instruction for transferring the data of the sub hard disk to be accelerated to the collaborative hard disk; the fourth preset value represents the average value of the data transmission rates of other sub-hard disks except the sub-hard disk to be accelerated in the plurality of sub-hard disks, and the fifth preset value represents the average value of the response times of other sub-hard disks except the sub-hard disk to be accelerated in the plurality of sub-hard disks; transferring the data of the child hard disk to be accelerated according to the first transfer instruction comprises the following steps: and transferring the data of the child hard disk to be accelerated according to the third transfer instruction.
In one exemplary embodiment, the monitored data transmission rate of the data transmission channel is determined by: under the condition that response information generated when the sub hard disk to be accelerated executes the target read-write task is monitored, determining the execution time of the sub hard disk to be accelerated for executing the target read-write task; determining the total data quantity transmitted by the sub hard disk to be accelerated through the data transmission channel within the execution time; and determining the ratio of the total data quantity to the execution time as the data transmission rate.
In an exemplary embodiment, transferring the data of the child hard disk to be accelerated according to the first transfer instruction includes: under the condition that the target read-write task is determined to be reading data, acquiring the second data from the sub hard disk to be accelerated according to the first storage address; writing the second data into the cooperative hard disk, and determining a second storage address of the second data in the cooperative hard disk; generating a storage record of the second data according to the second storage address; storing third data written by the target read-write task to the collaborative hard disk under the condition that the target read-write task is determined to be writing data; the third data written by the target read-write task correspondingly has a third storage address in the collaborative hard disk; determining fourth data written into the child hard disk to be accelerated before the writing time of the third data, and writing the fourth data into the collaborative hard disk; and determining a fourth storage address of the fourth data in the collaborative hard disk, and generating storage records of the third data and the fourth data according to the third storage address and the fourth storage address.
In an exemplary embodiment, transferring the data of the child hard disk to be accelerated according to the first transfer instruction includes: determining a first storage area of the child hard disk to be accelerated, wherein the first storage area represents a storage space corresponding to the first storage address; determining a second non-storage area of the sub hard disk to be accelerated, wherein the second non-storage area represents a storage space corresponding to other storage addresses except the first storage address in the storage record; and transferring the data in the storage area of the sub hard disk to be accelerated according to the first transfer instruction under the condition that the effective data amount of the non-storage area is smaller than a sixth preset value.
In an exemplary embodiment, transferring the data of the child hard disk to be accelerated according to the first transfer instruction further includes: under the condition that the child hard disk to be accelerated is determined to have faults, determining fifth data lost when the child hard disk to be accelerated has faults according to the data of the other child hard disks; writing the fifth data into the cooperative hard disk, and determining a fifth storage address of the fifth data in the cooperative hard disk; and generating a storage record of the fifth data according to the fifth storage address.
In an exemplary embodiment, after transferring the data of the child hard disk to be accelerated according to the first transfer instruction, the method further includes: under the condition that the target read-write task is determined to be executed, if the storage record is determined to be used for indicating that all data written by the target read-write task are stored in the collaborative hard disk, determining that the data of the sub hard disk to be accelerated are transferred to the collaborative hard disk; replacing the sub hard disk to be accelerated contained in the current hard disk array with the collaborative hard disk to obtain an updated target hard disk array; and processing the array read-write task of the current hard disk array by using the updated target hard disk array.
In an exemplary embodiment, after storing the second data written by the target read-write task to the child hard disk to be accelerated, the method further includes: generating a fourth transfer instruction for transferring the data of the collaborative hard disk to the sub hard disk to be accelerated under the condition that the monitored response time of the sub hard disk to be accelerated is smaller than a seventh preset value; the seventh preset value represents the average value of response time of other sub-hard disks except the sub-hard disk to be accelerated in the plurality of sub-hard disks; and transferring the data of the collaborative hard disk according to the fourth transfer instruction.
In an exemplary embodiment, transferring the data of the collaborative hard disk according to the fourth transfer instruction includes: determining a second storage area of the collaborative hard disk, wherein the second storage area represents a storage space of data written into the collaborative hard disk; traversing the data blocks in the second storage area, and writing the data of the data blocks in the second storage area into the sub hard disk to be accelerated through a data stripe window; the data blocks in the second storage area are correspondingly provided with sixth storage addresses in the child hard disk to be accelerated; and generating a storage record of the data block in the second storage area according to the sixth storage address.
In an exemplary embodiment, traversing the data blocks in the second storage area, writing the data of the data blocks in the second storage area into the sub-hard disk to be accelerated through a data stripe window, including: traversing the data blocks in the second storage area; determining a plurality of groups of data blocks in the second storage area according to the traversing result, wherein a sequence exists among the plurality of groups of data blocks; and writing all data corresponding to the plurality of groups of data blocks into the child hard disk to be accelerated through the data stripe window according to the sequence.
In an exemplary embodiment, writing all data corresponding to the plurality of groups of data blocks into the child hard disk to be accelerated through the data stripe window according to the sequence, including: for each group of data blocks in the plurality of groups of data blocks, determining a group of data from the 1 st data block of each group of data blocks to the i th data block of each group of data blocks, wherein i represents the number of data blocks of each group of data blocks, i is a positive integer, and i is smaller than or equal to the number of data blocks allowed to pass through the data stripe window once; determining the writing sequence corresponding to each group of data blocks based on the sequence; and writing one group of data of each group of data blocks into the sub-hard disk to be accelerated through the data stripe window according to the writing sequence until all the data are written into the sub-hard disk to be accelerated through the data stripe window.
According to another embodiment of the present application, there is provided a hard disk array, including: a sub hard disk to be accelerated and a cooperative hard disk corresponding to the sub hard disk to be accelerated; wherein the sub hard disk to be accelerated is a sub hard disk, determined from a plurality of sub hard disks contained in a current hard disk array, whose data read-write time delay is smaller than a preset value; and a hard disk acceleration array generated from the sub hard disk to be accelerated and the cooperative hard disk is used for processing the target read-write task of the sub hard disk to be accelerated.
According to a further embodiment of the application, there is also provided a computer readable storage medium having stored therein a computer program, wherein the computer program is arranged to perform the steps of any of the method embodiments described above when run.
According to a further embodiment of the application there is also provided an electronic device comprising a memory having stored therein a computer program and a processor arranged to run the computer program to perform the steps of any of the method embodiments described above.
According to the application, the sub hard disk to be accelerated can be determined, according to the data read-write time delay, from the plurality of sub hard disks contained in the hard disk array; a hard disk acceleration array for accelerating the sub hard disk to be accelerated is then generated according to the sub hard disk to be accelerated and the cooperative hard disk corresponding to it, and the target read-write task of the sub hard disk to be accelerated is processed by using the hard disk acceleration array. By adopting this technical scheme, the technical problem of how to improve the processing performance of the read-write tasks of a hard disk array that contains a slow-responding hard disk is solved, the performance of the hard disk array when processing read-write tasks is improved, and the processing efficiency of read-write tasks is further improved.
Drawings
FIG. 1 is a schematic diagram (one) of an architecture of a hard disk array according to an embodiment of the present application;
FIG. 2 is a schematic diagram (two) of an architecture of a hard disk array according to an embodiment of the present application;
FIG. 3 is a flow chart of a method for processing a read-write task based on a hard disk array according to an embodiment of the application;
FIG. 4 is a schematic diagram of generating an acceleration array for a hard disk in accordance with an embodiment of the present application;
FIG. 5 is a schematic diagram of a hard disk partition according to an embodiment of the present application;
FIG. 6 is a schematic diagram (one) of transferring hard disk data according to an embodiment of the present application;
FIG. 7 is a schematic diagram (two) of transferring hard disk data according to an embodiment of the present application;
FIG. 8 schematically illustrates a block diagram of a computer system for an electronic device implementing an embodiment of the application;
fig. 9 schematically shows a schematic diagram of an electronic device for implementing an embodiment of the application.
Detailed Description
Embodiments of the present application will be described in detail below with reference to the accompanying drawings in conjunction with the embodiments.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order.
The embodiment of the present application may operate on the hard disk array architecture shown in fig. 1. As shown in fig. 1, fig. 1 is a schematic diagram (one) of an architecture of a hard disk array according to an embodiment of the present application. The hard disk array may include, for example, 5 hard disks, i.e., hard disk 1 to hard disk 5, where each hard disk may include a plurality of hard disk blocks, and a stripe 1 may be generated from the hard disk blocks of each hard disk. The stripe data of stripe 1 may include four data blocks and one check block; for example, the hard disk blocks of hard disks 1 to 4 in stripe 1 are data blocks and the hard disk block of hard disk 5 in stripe 1 is a check block, where the value of the check block equals the result of the exclusive-or operation over the data of the four data blocks. If hard disk 5 loses the data of the check block due to a failure, the data of the check block of hard disk 5 can be recovered from the data blocks of the other hard disks.
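As a hedged illustration (not part of the patent itself), the following Python sketch shows the parity relationship just described: the check block is the exclusive-or of the four data blocks, and a lost block can be recomputed from the survivors. The block size and helper names are assumptions chosen only for the example.

```python
# Minimal sketch of the stripe-1 parity scheme: check = d1 ^ d2 ^ d3 ^ d4.
# Block size and variable names are illustrative assumptions, not from the patent.
def xor_blocks(*blocks: bytes) -> bytes:
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

data_blocks = [bytes([k] * 4) for k in (1, 2, 3, 4)]   # hard disks 1-4
check_block = xor_blocks(*data_blocks)                  # hard disk 5

# If hard disk 5 loses its check block, it is recomputed from the data blocks;
# likewise, a lost data block equals the XOR of the check block and the survivors.
assert xor_blocks(*data_blocks) == check_block
recovered_d2 = xor_blocks(check_block, data_blocks[0], data_blocks[2], data_blocks[3])
assert recovered_d2 == data_blocks[1]
```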
Alternatively, the embodiment of the present application may also operate on the hard disk array architecture shown in fig. 2. As shown in fig. 2, fig. 2 is a schematic diagram (two) of an architecture of a hard disk array according to an embodiment of the present application. The hard disk array may include, for example, 4 hard disks, i.e., hard disk 1 to hard disk 4, where each hard disk may include a plurality of hard disk blocks, and stripe 2 may be generated from the hard disk blocks of each hard disk. Stripe 2 may further comprise 2 groups of stripes (not shown), wherein the stripe data of one group may comprise the hard disk blocks of hard disk 1 and hard disk 2, and the stripe data of the other group may comprise the hard disk blocks of hard disk 3 and hard disk 4; the hard disks within each group back up each other's hard disk blocks. If hard disk 1 loses data due to a fault, the data of hard disk 2 can be used as the basis for recovering the data of the faulty hard disk 1.
In this embodiment, a method for processing a read-write task based on a hard disk array running on the storage system architecture is provided, and fig. 3 is a flowchart of a method for processing a read-write task based on a hard disk array according to an embodiment of the present application, as shown in fig. 3, where the flowchart includes the following steps:
step S302, determining a sub hard disk to be accelerated, of which the data read-write time delay is smaller than a preset value, from a plurality of sub hard disks contained in a current hard disk array;
step S304, determining a hard disk acceleration array for accelerating the sub hard disk to be accelerated, wherein the hard disk acceleration array is generated according to the sub hard disk to be accelerated and a cooperative hard disk corresponding to the sub hard disk to be accelerated;
and step S306, processing the target read-write task of the sub hard disk to be accelerated by using the hard disk acceleration array.
Through the above steps, a sub hard disk to be accelerated whose data read-write time delay is smaller than a preset value is determined from a plurality of sub hard disks contained in the current hard disk array; a hard disk acceleration array for accelerating the sub hard disk to be accelerated is determined, wherein the hard disk acceleration array is generated according to the sub hard disk to be accelerated and a cooperative hard disk corresponding to the sub hard disk to be accelerated; and the target read-write task of the sub hard disk to be accelerated is processed by using the hard disk acceleration array. Therefore, for a hard disk array containing a slow-responding hard disk, the technical problem of how to improve the processing performance of the read-write tasks of the hard disk array is solved, the performance of the hard disk array when processing read-write tasks is improved, and the processing efficiency of read-write tasks is further improved.
The main execution body of the above steps may be a server, a terminal, or the like, but is not limited thereto.
Optionally, in step S302, the read-write latency may represent a complete set of data read-write processes, i.e., a data write process, typically including reading an address to be written and writing data to the address to be written; if only the read process is involved, this is denoted as read latency.
In an exemplary embodiment, for the implementation process of determining, in the step S302, that the data read-write delay is smaller than the preset value from the plurality of sub-hard disks included in the current hard disk array, the implementation process of the sub-hard disk to be accelerated may further include: step S11, acquiring data read-write time delay of the plurality of sub-hard disks when executing read-write tasks; step S12, determining a first sub hard disk with data read-write time delay larger than a first preset value from the plurality of sub hard disks; the first preset value represents the minimum data read-write time delay of the hard disk; step S13, determining a second sub hard disk with the maximum data read-write time delay from the plurality of first sub hard disks; and S14, if the maximum data read-write time delay is smaller than a second preset value, determining the second sub hard disk as the sub hard disk to be accelerated, wherein the second preset value represents the historical data read-write time delay when the hard disk fails.
In the above embodiment, the first preset value is, for example, 0, so that the plurality of sub hard disks contained in the current hard disk array can first be confirmed to be normally operating hard disks, and the sub hard disk to be accelerated, whose data read-write time delay is relatively large during normal operation, can then be determined.
Optionally, in other embodiments, if the maximum data read-write time delay is determined to be greater than the second preset value, it is determined that the hard disk has failed and the failure is fed back to the user for handling; a third sub hard disk with the maximum data read-write time delay is then determined again, and if its data read-write time delay is smaller than the second preset value, the third sub hard disk is determined as the sub hard disk to be accelerated.
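For illustration only, the selection logic of steps S11 to S14 might be sketched as below; the latency values, thresholds, and function names are assumptions rather than the patent's implementation.

```python
# Sketch of steps S11-S14: pick the slowest normally working sub hard disk,
# provided it is still below the failure threshold. Names and values are illustrative.
def pick_disk_to_accelerate(latencies_ms: dict, first_preset: float, second_preset: float):
    # S12: sub hard disks whose read-write latency exceeds the minimum latency.
    candidates = {d: t for d, t in latencies_ms.items() if t > first_preset}
    if not candidates:
        return None
    # S13: the sub hard disk with the maximum latency among the candidates.
    slowest, max_latency = max(candidates.items(), key=lambda item: item[1])
    # S14: only accelerate it if it has not reached the historical failure latency.
    return slowest if max_latency < second_preset else None

# Example: disk 5 is slow but not failed, so it becomes the disk to be accelerated.
print(pick_disk_to_accelerate({1: 2.0, 2: 2.1, 3: 1.9, 4: 2.2, 5: 9.5},
                              first_preset=0.0, second_preset=50.0))
```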
In an exemplary embodiment, before the hard disk acceleration array for accelerating the child hard disk to be accelerated is determined in step S304, the collaborative hard disk may be determined by one of the following methods. Method one: determining a hard disk block unit of each sub hard disk in the plurality of sub hard disks, wherein the hard disk blocks of each sub hard disk at least comprise a data block unit, a check block unit and a redundancy block unit; determining the sum of the spaces of the redundancy block units of the plurality of sub hard disks as the redundant hard disk space of the current hard disk array, and determining the collaborative hard disk according to the redundant hard disk space. Method two: determining a peripheral hard disk of the same hard disk type as the child hard disk to be accelerated, and determining the peripheral hard disk as the collaborative hard disk; wherein the peripheral hard disk represents a hard disk other than the plurality of sub hard disks.
The hard disk block unit represents, for example, a unit of data storage space, and its size may be preset based on requirements, for example, 1 MB.
Wherein, the data block unit is shown as a data block in fig. 5, the check block unit is shown as a check block in fig. 5, and the redundancy block unit is shown as a redundancy block in fig. 5.
In this embodiment, the peripheral hard disk is a hard disk independent of the current hard disk array. Alternatively, the redundant space reserved on each member hard disk (i.e., sub hard disk) of the hard disk array may be combined into a virtual hard disk B serving as the redundant hard disk space. As shown in fig. 5, the redundancy blocks are scattered across the member hard disks of the hard disk array, and the redundant hard disk space can be determined as the sum of all the redundancy blocks.
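As a small, hedged sketch of the two options above (the disk dictionaries, field names, and sizes are hypothetical):

```python
# Illustrative sketch of the two ways a collaborative hard disk may be obtained:
# (1) pool the redundancy block units of all member hard disks into a virtual hard disk,
# (2) pick an idle peripheral hard disk of the same type as the disk to be accelerated.
def cooperative_from_redundancy(redundancy_block_bytes: list) -> int:
    # The virtual collaborative hard disk's capacity is the sum of all redundancy blocks.
    return sum(redundancy_block_bytes)

def cooperative_from_peripherals(peripherals: list, target_type: str):
    return next((d for d in peripherals if d["type"] == target_type and d["idle"]), None)

print(cooperative_from_redundancy([64 * 2**20] * 5))   # e.g. five 64 MB redundancy blocks
print(cooperative_from_peripherals(
    [{"id": 6, "type": "SATA-HDD", "idle": True}], "SATA-HDD"))
```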
In an exemplary embodiment, for the implementation process of using the hard disk acceleration array to process the target read-write task of the sub-hard disk to be accelerated in the step S306, the implementation process may specifically include the following steps: step S21, determining the data read-write time delay of the sub hard disk to be accelerated and the data read-write time delay of the collaborative hard disk; step S22, determining the hard disk with smaller data read-write time delay in the sub hard disk to be accelerated and the collaborative hard disk as a data write hard disk; step S23, the data writing hard disk is used for executing the target reading and writing task, and the first data written by the target reading and writing task is stored into the data writing hard disk; and step S24, determining the storage address of the first data in the data writing hard disk, and storing the storage address to the sub hard disk to be accelerated.
By setting the hard disk with smaller data read-write time delay as the initial hard disk for data writing, the efficiency of writing data when the target read-write task of the sub hard disk to be accelerated is processed by using the hard disk acceleration array can be improved.
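For illustration, steps S21 to S24 might look like the following sketch; the disk objects, field names, and address handling are assumptions, not the patent's implementation.

```python
# Sketch of steps S21-S24: route the write to whichever of the two disks currently
# has the smaller read-write latency, then record where the data landed.
def write_via_acceleration_array(disk_slow: dict, disk_coop: dict, data: bytes):
    target = disk_slow if disk_slow["latency_ms"] <= disk_coop["latency_ms"] else disk_coop
    address = target["next_free_address"]
    target["blocks"][address] = data                    # S23: store the written data
    target["next_free_address"] += 1
    disk_slow["address_map"][address] = target["id"]    # S24: keep the storage address on
                                                        # the sub hard disk to be accelerated
    return target["id"], address

disk5 = {"id": 5, "latency_ms": 9.5, "next_free_address": 0, "blocks": {}, "address_map": {}}
disk6 = {"id": 6, "latency_ms": 1.8, "next_free_address": 0, "blocks": {}, "address_map": {}}
print(write_via_acceleration_array(disk5, disk6, b"payload"))   # -> (6, 0)
```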
In an exemplary embodiment, further, other technical solutions for implementing the processing of the target read-write task of the child hard disk to be accelerated by using the hard disk acceleration array in the step S306 are further provided, and specifically include the following steps: step S31, executing the target read-write task by using the sub hard disk to be accelerated; step S32, storing the second data written by the target read-write task to the sub hard disk to be accelerated; the second data written by the target read-write task correspondingly has a first storage address in the sub hard disk to be accelerated; and step S33, generating a storage record of the second data according to the first storage address.
By setting the sub-hard disk to be accelerated as the initial hard disk for data writing, the storage address for data writing can be kept unchanged when the target read-write task of the sub-hard disk to be accelerated is processed by using the hard disk acceleration array, so that the probability of data loss is reduced.
In an exemplary embodiment, the generating the storage record of the second data according to the first storage address in the step S33 may be implemented by the following specific steps: acquiring a hard disk number of the child hard disk to be accelerated in the current hard disk array; binding the hard disk number and the first storage address, and determining the binding result as a storage record of the second data; and storing the storage record into a storage space of the hard disk acceleration array.
In an exemplary embodiment, an implementation procedure of how to store the storage record in the storage space of the hard disk acceleration array is further proposed, which specifically includes: performing persistence operation on the storage record to obtain persistence data; determining storage spaces set for a plurality of controllers under the condition that the hard disk acceleration array corresponds to the plurality of controllers, wherein the storage spaces comprise a main storage space and a backup storage space; and respectively storing the persistent data into the main storage space and the backup storage space.
If a certain controller fails, any other controller can execute the read-write task of the failed controller.
By the embodiment, the data can be duplicated and backed up, so that the data can be recovered in time.
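As a hedged sketch of the two preceding embodiments (binding the hard disk number to the storage address, then persisting one copy per controller storage space), the following uses JSON and Python lists as stand-ins; these choices and the field names are assumptions.

```python
# Illustrative storage record and its persistence to a main and a backup space
# when the acceleration array serves several controllers.
import json

def make_storage_record(disk_number: int, storage_address: int) -> dict:
    # Bind the hard disk number and the storage address into one record.
    return {"disk": disk_number, "address": storage_address}

def persist_record(record: dict, main_space: list, backup_space: list) -> None:
    persisted = json.dumps(record)   # persistence operation -> persisted data
    main_space.append(persisted)     # primary copy
    backup_space.append(persisted)   # backup copy, so another controller can take over

main, backup = [], []
persist_record(make_storage_record(5, 0x1A2B), main, backup)
print(main == backup)   # -> True
```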
In an exemplary embodiment, after the second data written by the target read-write task is stored in the to-be-accelerated sub-hard disk in the step S32, the following steps are further performed: step S3201: acquiring a first transfer instruction for indicating to transfer the data of the sub hard disk to be accelerated to the collaborative hard disk under the condition that the monitored data read-write time delay of the sub hard disk to be accelerated is larger than a third preset value; the third preset value represents the average value of data read-write time delays of other sub-hard disks except the sub-hard disk to be accelerated in the plurality of sub-hard disks; step S3202: and transferring the data of the child hard disk to be accelerated according to the first transfer instruction.
Optionally, under the condition that the monitored data read-write time delay of the sub hard disk to be accelerated is smaller than a third preset value, a first transfer instruction for indicating to transfer the data of the sub hard disk to be accelerated to the collaborative hard disk is not acquired, and the second data written by the target read-write task is continuously stored to the sub hard disk to be accelerated.
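For illustration only, the condition described in the two preceding paragraphs might be checked as follows; the latency map and function name are assumptions rather than the patent's implementation.

```python
# Illustrative check for whether a first transfer instruction should be obtained:
# compare the monitored latency of the disk being accelerated against the average
# latency of the remaining member disks (the "third preset value").
def should_request_transfer(latencies_ms: dict, accelerated_disk: int) -> bool:
    others = [t for d, t in latencies_ms.items() if d != accelerated_disk]
    third_preset = sum(others) / len(others)
    return latencies_ms[accelerated_disk] > third_preset

print(should_request_transfer({1: 2.0, 2: 2.1, 3: 1.9, 4: 2.2, 5: 9.5}, accelerated_disk=5))
```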
In an exemplary embodiment, for the execution process of obtaining the first migration instruction in step S3201, where the first migration instruction is used to instruct to migrate the data of the child hard disk to be accelerated to the collaborative hard disk, the execution process may be implemented by the following specific steps: sending prompt information for prompting the data of the child hard disk to be accelerated to be transferred to the collaborative hard disk to a target object; if the response information sent by the target object is received, and the response information contains a second transfer instruction for indicating to transfer the data of the sub hard disk to be accelerated to the collaborative hard disk, transferring the data of the sub hard disk to be accelerated according to the first transfer instruction includes: and transferring the data of the child hard disk to be accelerated according to the second transfer instruction.
Through the embodiment, when the hard disk with slow response cannot be recovered for a long time, a prompt can be sent to a user according to the degree of slow response, the user can confirm that the hard disk with slow response is replaced by the cooperative hard disk according to requirements, and the data of the sub hard disk to be accelerated is passively transferred through the user instruction, so that the credibility of the data transfer process is improved.
In an exemplary embodiment, the process of obtaining the first transfer instruction for indicating to transfer the data of the child hard disk to be accelerated to the cooperative hard disk in the step S3201 may be implemented by the following steps: determining a data transmission channel of the sub hard disk to be accelerated for transmitting the second data; if the response time of the sub hard disk to be accelerated is determined to be larger than a fifth preset value under the condition that the data transmission rate of the data transmission channel is monitored to be smaller than the fourth preset value, generating a third transfer instruction for transferring the data of the sub hard disk to be accelerated to the collaborative hard disk; the fourth preset value represents the average value of the data transmission rates of other sub-hard disks except the sub-hard disk to be accelerated in the plurality of sub-hard disks, and the fifth preset value represents the average value of the response times of other sub-hard disks except the sub-hard disk to be accelerated in the plurality of sub-hard disks; transferring the data of the child hard disk to be accelerated according to the first transfer instruction comprises the following steps: and transferring the data of the child hard disk to be accelerated according to the third transfer instruction.
Optionally, if the data transmission rate of the data transmission channel is greater than a fourth preset value, a first transfer instruction for indicating to transfer the data of the child hard disk to be accelerated to the collaborative hard disk is not generated; and under the condition that the response time of the sub hard disk to be accelerated is smaller than a fifth preset value, a third transfer instruction for transferring the data of the sub hard disk to be accelerated to the collaborative hard disk is not generated.
Through the embodiment, when the hard disk with slow response cannot be recovered for a long time, the data of the child hard disk to be accelerated can be actively transferred, and the transfer efficiency of the data transfer process is improved.
In one exemplary embodiment, the data transmission rate at which the data transmission channel is monitored is determined by: under the condition that response information generated when the sub hard disk to be accelerated executes the target read-write task is monitored, determining the execution time of the sub hard disk to be accelerated for executing the target read-write task; determining the total data quantity transmitted by the sub hard disk to be accelerated through the data transmission channel in the execution time; and determining the difference between the total data amount and the execution time as the data transmission rate.
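As a small, hedged illustration of the computation just described (the rate taken as the total data amount divided by the execution time), the following sketch assumes units of bytes and seconds.

```python
# Illustrative computation of the monitored data transmission rate.
def transmission_rate(total_bytes: int, execution_time_s: float) -> float:
    return total_bytes / execution_time_s   # bytes per second

print(transmission_rate(512 * 2**20, 4.0))  # e.g. 512 MB moved in 4 s -> 134217728.0 B/s
```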
Further, in an exemplary embodiment, a process for executing the transferring of the data of the child hard disk to be accelerated according to the first transfer instruction in the step S3202 is provided, and the specific steps include: under the condition that the target read-write task is determined to be reading data, acquiring the second data from the sub hard disk to be accelerated according to the first storage address; writing the second data into the cooperative hard disk, and determining a second storage address of the second data in the cooperative hard disk; generating a storage record of the second data according to the second storage address; storing third data written by the target read-write task to the collaborative hard disk under the condition that the target read-write task is determined to be writing data; the third data written by the target read-write task correspondingly has a third storage address in the collaborative hard disk; determining fourth data written into the child hard disk to be accelerated before the writing time of the third data, and writing the fourth data into the collaborative hard disk; and determining a fourth storage address of the fourth data in the collaborative hard disk, and generating storage records of the third data and the fourth data according to the third storage address and the fourth storage address.
Optionally, in the foregoing embodiment, generating the storage record of the second data according to the second storage address may include: acquiring a hard disk number of the child hard disk to be accelerated in the current hard disk array; binding the hard disk number and the second storage address, and determining the binding result as a storage record of the second data; and storing the storage record into a storage space of the hard disk acceleration array.
Optionally, in the foregoing embodiment, determining a fourth storage address of the fourth data in the collaborative hard disk, and generating, according to the third storage address and the fourth storage address, storage records of the third data and the fourth data may include, for example, the following steps: acquiring a hard disk number of the child hard disk to be accelerated in the current hard disk array; binding the hard disk number with the third storage address and with the fourth storage address respectively, and determining the binding results as the storage records of the third data and the fourth data; and storing the storage records into a storage space of the hard disk acceleration array.
In an exemplary embodiment, the implementation process of transferring the data of the child hard disk to be accelerated according to the first transfer instruction in the step S3202 may further include the following manners: determining a first storage area of the child hard disk to be accelerated, wherein the first storage area represents a storage space corresponding to the first storage address; determining a second non-storage area of the sub hard disk to be accelerated, wherein the second non-storage area represents a storage space corresponding to other storage addresses except the first storage address in the storage record; and transferring the data in the storage area of the sub hard disk to be accelerated according to the first transfer instruction under the condition that the effective data amount of the non-storage area is smaller than a sixth preset value.
Alternatively, in the above-described embodiment, the effective data amount may represent the amount of non-empty data other than the inherent information of the storage area itself, i.e., the system information of the storage area.
The sixth preset value may be preset based on requirements, for example, 1 byte, and represents a value at which no valid data is considered to be stored.
Optionally, under the condition that the effective data amount of the non-storage area is determined to be larger than a sixth preset value, transferring the data in the storage area of the sub hard disk to be accelerated is not performed according to the first transfer instruction.
In an exemplary embodiment, the implementation process of transferring the data of the child hard disk to be accelerated according to the first transfer instruction in the step S3202 may further include the following manners: under the condition that the child hard disk to be accelerated is determined to have faults, determining fifth data lost when the child hard disk to be accelerated has faults according to the data of the other child hard disks; writing the fifth data into the cooperative hard disk, and determining a fifth storage address of the fifth data in the cooperative hard disk; and generating a storage record of the fifth data according to the fifth storage address.
Optionally, in the foregoing embodiment, generating the storage record of the fifth data according to the fifth storage address includes: acquiring a hard disk number of the child hard disk to be accelerated in the current hard disk array; binding the hard disk number and the fifth storage address, and determining the binding result as a storage record of the fifth data; and storing the storage record into a storage space of the hard disk acceleration array.
In an exemplary embodiment, after completing the transferring of the data of the child hard disk to be accelerated according to the first transferring instruction in the step S3202, the following needs to be executed: under the condition that the target read-write task is determined to be executed, if the storage record is determined to be used for indicating that all data written by the target read-write task are stored in the collaborative hard disk, determining that the data of the sub hard disk to be accelerated are transferred to the collaborative hard disk; replacing the sub hard disk to be accelerated contained in the current hard disk array with the collaborative hard disk to obtain an updated target hard disk array; and processing the array read-write task of the current hard disk array by using the updated target hard disk array.
In an exemplary embodiment, after the second data written by the target read-write task in the step S32 is stored in the sub hard disk to be accelerated, the following technical scheme needs to be implemented: generating a fourth transfer instruction for transferring the data of the collaborative hard disk to the sub hard disk to be accelerated under the condition that the monitored response time of the sub hard disk to be accelerated is smaller than a seventh preset value; the seventh preset value represents the average value of response time of other sub-hard disks except the sub-hard disk to be accelerated in the plurality of sub-hard disks; and transferring the data of the collaborative hard disk according to the fourth transfer instruction.
In an exemplary embodiment, further, a technical solution for transferring data of the collaborative hard disk according to the fourth transfer instruction is provided, where the specific steps include: determining a second storage area of the collaborative hard disk, wherein the second storage area represents a storage space of data written into the collaborative hard disk; traversing the data blocks in the second storage area, and writing the data of the data blocks in the second storage area into the sub hard disk to be accelerated through a data stripe window; the data blocks in the second storage area are correspondingly provided with sixth storage addresses in the child hard disk to be accelerated; and generating a storage record of the data block in the second storage area according to the sixth storage address.
Optionally, in the foregoing embodiment, generating the storage record of the data blocks in the second storage area according to the sixth storage address includes: acquiring a hard disk number of the child hard disk to be accelerated in the current hard disk array; binding the hard disk number and the sixth storage address, and determining the binding result as a storage record of the data blocks in the second storage area; and storing the storage record into a storage space of the hard disk acceleration array.
In an exemplary embodiment, the process of further traversing the data blocks in the second storage area and writing the data of the data blocks in the second storage area into the child hard disk to be accelerated through a data stripe window is described, and the specific steps include: traversing the data blocks in the second storage area; determining a plurality of groups of data blocks in the second storage area according to the traversing result, wherein a sequence exists among the plurality of groups of data blocks; and writing all data corresponding to the plurality of groups of data blocks into the child hard disk to be accelerated through the data stripe window according to the sequence.
In an exemplary embodiment, the implementation process of writing all data corresponding to the multiple groups of data blocks into the sub-hard disk to be accelerated through the data stripe window according to the sequence is described by the following steps: for each group of data blocks in the plurality of groups of data blocks, determining a group of data from the 1 st data block of each group of data blocks to the i th data block of each group of data blocks, wherein i represents the number of data blocks of each group of data blocks, i is a positive integer, and i is smaller than or equal to the number of data blocks allowed to pass through the data stripe window once; determining the writing sequence corresponding to each group of data blocks based on the sequence; and writing one group of data of each group of data blocks into the sub-hard disk to be accelerated through the data stripe window according to the writing sequence until all the data are written into the sub-hard disk to be accelerated through the data stripe window.
Optionally, in one embodiment, in a process of transferring the data of the collaborative hard disk according to the fourth transfer instruction, under a condition that sixth data written by the target read-write task is received, determining a storage address allocated to the collaborative hard disk for the sixth data; wherein the sixth data has at least one data block in the allocated memory address; writing the sixth data into the sub-hard disk to be accelerated and the collaborative hard disk under the condition that the block serial number of the at least one data block in the collaborative hard disk is smaller than the window minimum value of a data transfer window; suspending writing the sixth data into the sub-hard disk to be accelerated or the collaborative hard disk under the condition that the block serial number of the at least one data block in the collaborative hard disk is determined to be larger than the minimum window value of the data transfer window and smaller than the maximum window value of the data transfer window; writing the sixth data into the sub-hard disk to be accelerated under the condition that the block serial number of the at least one data block in the cooperative hard disk is determined to be larger than the maximum window value of a data transfer window; the window value of the data transfer window represents the block sequence number in the data block being transferred in the collaborative hard disk, the window minimum value represents the minimum block sequence number in the data block being transferred in the collaborative hard disk, and the window maximum value represents the maximum block sequence number in the data block being transferred in the collaborative hard disk.
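For illustration only, the routing rule in the preceding paragraph might be expressed as follows; the function name, return strings, and window values are assumptions, and only the comparison against the window minimum and maximum comes from the text above.

```python
# Illustrative routing of newly written blocks while the data transfer window sweeps
# the collaborative hard disk: blocks below the window minimum go to both disks,
# blocks inside the window are suspended, blocks above the window maximum go only
# to the sub hard disk to be accelerated.
def route_incoming_block(block_seq: int, window_min: int, window_max: int) -> str:
    if block_seq < window_min:
        return "write to both disks"
    if block_seq > window_max:
        return "write to the sub hard disk to be accelerated"
    return "suspend until the window moves on"

for seq in (3, 10, 25):
    print(seq, route_incoming_block(seq, window_min=8, window_max=15))
```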
Further, in one embodiment, the process of generating the hard disk acceleration array may be described, for example, in conjunction with fig. 4, where fig. 4 is a schematic diagram of generating the hard disk acceleration array according to an embodiment of the present application. The hard disk array A shown in fig. 4 (i.e., the current hard disk array described above) is the same hard disk array as that of fig. 1 and includes sub hard disks 1 to 5. Under the condition that hard disk 5 is determined to be the sub hard disk to be accelerated, hard disk 6 is determined to be the cooperative hard disk corresponding to the sub hard disk to be accelerated, and the hard disk array B (namely the hard disk acceleration array) generated from hard disk 5 and hard disk 6 is then used to process the target read-write task of hard disk 5. The generated hard disk array B can be regarded as hard disk 5 of array A and continues to process the read-write tasks of hard disk 5.
Alternatively, in the present embodiment, the hard disk 6 may be generated from the redundant spaces of hard disks 1 to 5, for example by adopting the exclusive-or stripe redundancy manner of the hard disk array shown in fig. 1.
In this embodiment, by having the slow-responding hard disk and the cooperative hard disk jointly form a RAID-like structure, the performance of a single hard disk of the same type is improved and the processing pressure on that hard disk during slow response is reduced; meanwhile, the new RAID-like structure can also be read and written through persisted memory data.
Next, in one embodiment, a process of using the hard disk acceleration array to accelerate the target read-write task of the sub-hard disk may be described in conjunction with the following steps, and specifically, the whole process may include an array closing stage, an array collaboration stage, an array replacement stage, and an array exit stage.
Taking the hard disk arrays of fig. 1 and fig. 2 as an example, the hard disk arrays of fig. 1 and fig. 2 are both in an array closing stage, that is, the array closing stage indicates that the hard disk array does not have sub-hard disks to be accelerated, and the array is in a normal running state without participation of cooperative hard disks.
Taking fig. 5 as an example, the array collaboration stage may represent a stage in which the hard disk array A and the hard disk array B, which accelerates the slow-response hard disk 5 in the hard disk array A, operate together. Specifically, when the hard disk array A is running, if it is detected that the response speed of the member hard disk (i.e., the sub hard disk) 5 is slower than that of the other member hard disks, a free hard disk 6 that is not a member hard disk is determined and configured as the cooperative hard disk of the member hard disk 5, and the hard disk array B (i.e., the hard disk acceleration array) generated from the hard disk 6 and the hard disk 5 jointly processes the service that the hard disk 5 needs to handle. The hard disk array B is a RAID-like array formed by the hard disk 5 and the hard disk 6. In the array collaboration stage, the hard disk array B accelerates the slow-response hard disk 5, so that the performance of a single hard disk of the same type is improved.
Optionally, in the foregoing embodiment, when the hard disk array A or the hard disk array B is running, the data read-write delay of each member hard disk may be counted in real time.
If the slow-response hard disk 5 needs to be replaced by the cooperative hard disk 6, the array replacement stage is entered; in this stage, gradual replacement is realized by copying the data of the old slow-response hard disk 5 onto the new cooperative hard disk 6.
Alternatively, the data transfer directions of the array replacement stage and the array exit stage are opposite: the array replacement stage transfers the data of the hard disk 5 to the new cooperative hard disk 6, while the array exit stage transfers the data of the cooperative hard disk 6 back to the hard disk 5.
After the array replacement stage or the array exit stage is completed, the array closing stage is entered. If a member hard disk with slow response (corresponding to a sub hard disk to be accelerated whose data read-write time delay is smaller than the preset value) appears again in the hard disk array A after the array exit stage, the array collaboration stage is re-entered.
Optionally, in the array collaboration stage, the hard disk array B records the storage addresses of the write data respectively stored in the hard disk 5 and the hard disk 6, where the storage unit of the storage address may be consistent with the size of the hard disk partition of the hard disk array, for example.
The storage address corresponds to the number of the member hard disk; for example, for a hard disk array including two hard disks, one bit of space is sufficient to record which member hard disk holds the data of each hard disk partition.
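One possible reading of this one-bit-per-partition record is sketched below: each bit states whether a given chunk currently lives on the hard disk 5 or on the hard disk 6. The class name and chunk granularity are assumptions made for illustration.

```python
class StorageRecord:
    """One bit per chunk of a two-member acceleration array:
    bit 0 -> data stored on hard disk 5, bit 1 -> data stored on hard disk 6."""

    def __init__(self, num_chunks: int):
        self.bits = bytearray((num_chunks + 7) // 8)

    def set_location(self, chunk: int, on_disk6: bool) -> None:
        byte, bit = divmod(chunk, 8)
        if on_disk6:
            self.bits[byte] |= 1 << bit
        else:
            self.bits[byte] &= ~(1 << bit)

    def location(self, chunk: int) -> int:
        byte, bit = divmod(chunk, 8)
        return 6 if self.bits[byte] >> bit & 1 else 5

record = StorageRecord(num_chunks=1024)
record.set_location(chunk=3, on_disk6=True)
assert record.location(3) == 6 and record.location(4) == 5
```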
Optionally, in the array collaboration stage, when the hard disk array B is used to execute the target read-write task and write data, the hard disk array B may count the current data read-write delays of the hard disk 5 and the hard disk 6, select from the two the hard disk with the smaller read-write delay to write the data, and record the storage address of the written data.
The hard disk 5 may be set as the initial hard disk for writing data, so that the storage address of written data is kept unchanged and still located in the hard disk array A when the read-write task of the hard disk 5 is processed by using the hard disk array B.
Optionally, in the array collaboration stage, the data read-write condition of the hard disk 5 is continuously monitored; if, after a period of time, the data volume read and written by the hard disk 5 is lower than the average data volume read and written by the member hard disks of the hard disk array A, the user is prompted that the response speed of the hard disk 5 is too slow. Or, when the data volume read and written by the hard disk 5 is consistent with the average data volume read and written by the member hard disks of the hard disk array A, but the data read-write delay of the hard disk 5 is obviously longer than the average data read-write delay of the member hard disks of the hard disk array A, the user is prompted that the response speed of the hard disk 5 is too slow.
Based on the above steps, in one embodiment, if the user confirms that the hard disk 5 with slow response is replaced by the cooperative hard disk 6 according to the requirement, the array replacement stage is entered, and if the user confirms that the hard disk 5 with slow response is not replaced by the cooperative hard disk 6 according to the requirement, the array cooperation stage is still maintained.
That is, when the hard disk 5 has a slow response for a long time, there is the following manner of shifting to the array replacement stage:
Mode 1: if the user confirms that the old slow-response hard disk 5 is to be replaced by the cooperative hard disk 6, the replacement can be designated accordingly; this is a passive entry into the array replacement stage;
Mode 2: when the old slow-response hard disk 5 stops responding, the data of the hard disk 5 can be actively transferred to the hard disk 6 step by step, and the cooperative hard disk 6 automatically replaces the old slow-response hard disk 5; this is an active entry into the array replacement stage.
Alternatively, in this embodiment, it is not necessary to query all the storage addresses one by one to determine whether the data corresponding to each storage address is empty; instead, the addresses where data is stored and the addresses where no data is stored are determined directly from the storage record, the stored data at the former are copied, and the latter (i.e. empty data) are skipped outright, so that the speed of gradually transferring the data of the hard disk 5 to the hard disk 6 is increased and copy acceleration is realized.
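A rough sketch of this copy acceleration, assuming a dictionary-style storage record and in-memory disks, is given below; only chunks the record marks as written on the hard disk 5 are copied, and empty addresses are never queried.

```python
def accelerated_copy(storage_record: dict[int, str],
                     disk5: dict[int, bytes],
                     disk6: dict[int, bytes]) -> None:
    """Copy only chunks the storage record marks as stored on hard disk 5;
    addresses with no recorded data are skipped, which is the speed-up."""
    for chunk, owner in storage_record.items():
        if owner != "disk5":              # empty, or already on the cooperative disk
            continue
        disk6[chunk] = disk5[chunk]       # move the stored data
        storage_record[chunk] = "disk6"   # the storage address now points at disk 6

record = {0: "disk5", 1: "disk6", 7: "disk5"}      # chunks 2-6 were never written
disk5 = {0: b"a" * 4, 7: b"b" * 4}
disk6 = {1: b"c" * 4}
accelerated_copy(record, disk5, disk6)
assert record == {0: "disk6", 1: "disk6", 7: "disk6"} and disk6[7] == b"b" * 4
```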
For the hard disk array A, a record may likewise be kept of whether each stripe has data written to it, and when data is deleted the corresponding record of the stripe is deleted as well.
In the array replacement stage and the array exit stage, the storage addresses respectively recorded for the stored data are used to locate it; for example, the initial hard disk to which data is written by default in the array collaboration stage is the hard disk 5.
Alternatively, in the array replacement stage, if the data to be read is on the old slow-response hard disk 5, then after the data is read from the hard disk 5 it is written again to the hard disk 6, and the storage address of the data is updated from the hard disk 5 to the hard disk 6.
In the above embodiment, when all the storage addresses are located on the hard disk 6, the array replacement stage is exited, and at this time, the hard disk 6 replaces the hard disk 5 to become a member hard disk of the hard disk array a.
Optionally, in the array replacement stage, when the written data is distributed across the hard disk 5 and the hard disk 6, if the hard disk 5 fails and its data is lost, the member hard disks of the hard disk array A may be used to recover the data of the hard disk 5. Taking fig. 5 as an example: in the process of copying the data of the hard disk 5 to the hard disk 6, if data is lost because the hard disk 5 fails, the data of the hard disk 5 can be determined from the exclusive-or calculation result of the remaining four hard disks contained in the hard disk array A. Specifically, the storage position of the data is determined to be on the hard disk 5 according to the storage record, the exclusive-or result of the remaining four hard disks is calculated, the result is written into the hard disk 6, and the storage address of the data is modified to the hard disk 6. These steps are repeated until all the data are located on the hard disk 6.
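The recovery path described here can be expressed, under the same illustrative assumptions, as the sketch below: for every chunk whose storage record still points at the failed hard disk 5, XOR the corresponding chunks of the surviving member hard disks, write the result to the hard disk 6, and repoint the record.

```python
def recover_lost_chunks(storage_record: dict[int, str],
                        survivors: list[dict[int, bytes]],
                        disk6: dict[int, bytes],
                        chunk_size: int) -> None:
    """Rebuild chunks recorded on the failed hard disk 5 from the exclusive-or
    of the remaining member hard disks and store them on the hard disk 6."""
    for chunk, owner in storage_record.items():
        if owner != "disk5":
            continue
        rebuilt = bytearray(chunk_size)
        for disk in survivors:                          # XOR of the other members
            for i, b in enumerate(disk.get(chunk, bytes(chunk_size))):
                rebuilt[i] ^= b
        disk6[chunk] = bytes(rebuilt)
        storage_record[chunk] = "disk6"                 # storage address moves to disk 6

record = {0: "disk5"}
survivors = [{0: bytes([0b1010])}, {0: bytes([0b0110])}, {0: bytes([0b0001])}]
disk6: dict[int, bytes] = {}
recover_lost_chunks(record, survivors, disk6, chunk_size=1)
assert disk6[0] == bytes([0b1101]) and record[0] == "disk6"
```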
Alternatively, in the array collaboration phase, if the response time of the hard disk 5 is recovered, the array exit phase is entered, in which all data of the hard disk 6 may be copied to the hard disk 5 in batches through one stripe window.
A specific data copying process may be described with reference to fig. 6, and as shown in fig. 6, taking a block size of 256K, the copy window (i.e. the above-mentioned data stripe window) contains 8 data blocks, and the data of the hard disk 6 is copied to the hard disk 5.
The storage addresses are traversed to find the first data block located on the hard disk 6; the search then continues for up to 8 consecutive data blocks, which are copied together, the start address and length being calculated from the 256K block size. The data read from the hard disk 6 is written to the hard disk 5. These search and copy steps are repeated until the copy window moves to the tail.
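One way to read the copy-window procedure of fig. 6 is sketched below, with the 256K block size and eight-block window from the example; the list-based storage record and bytearray-backed disks are stand-ins assumed for the illustration.

```python
BLOCK = 256 * 1024          # 256K per data block
WINDOW = 8                  # the copy window holds 8 data blocks

def copy_with_window(storage_record: list[str], disk6: bytearray, disk5: bytearray) -> None:
    """Copy the data of hard disk 6 back to hard disk 5 in runs of up to 8 blocks."""
    pos = 0
    while pos < len(storage_record):
        if storage_record[pos] != "disk6":
            pos += 1
            continue
        end = pos
        while end < len(storage_record) and end - pos < WINDOW and storage_record[end] == "disk6":
            end += 1                                    # extend to at most 8 contiguous blocks
        start, length = pos * BLOCK, (end - pos) * BLOCK
        disk5[start:start + length] = disk6[start:start + length]   # one batched copy
        for i in range(pos, end):
            storage_record[i] = "disk5"                 # these addresses move back to disk 5
        pos = end

blocks = ["disk6", "disk5", "disk6", "disk6"]
d6 = bytearray(b"6" * (4 * BLOCK))
d5 = bytearray(b"5" * (4 * BLOCK))
copy_with_window(blocks, d6, d5)
assert blocks == ["disk5"] * 4 and d5[0] == ord("6") and d5[BLOCK] == ord("5")
```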
During the array exit phase, a process of determining which hard disk the data is written to according to the stripe window (i.e. the data stripe window) can be described with reference to fig. 7, and as shown in fig. 7, the method specifically includes the following steps:
step S701: storing the write data to the hard disk 5 and the hard disk 6 when the position of the write data is smaller than the minimum value of the copy window;
Step S702: waiting for copying to finish when the position of the written data is positioned in the window range of the copying window;
step S703: when the position of the written data is larger than the maximum value of the copy window, the written data is stored to the hard disk 5 without waiting for the copy, as sketched below.
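For illustration, the routing of steps S701 to S703 can be condensed into the sketch below; how the window boundaries are treated (inclusive or exclusive) is an assumption of the example.

```python
def route_write(block_no: int, window_min: int, window_max: int) -> str:
    """Decide where a new write lands while the copy window moves over the data."""
    if block_no < window_min:        # already copied: keep hard disks 5 and 6 consistent
        return "write to disk 5 and disk 6"
    if block_no <= window_max:       # inside the window: wait for the copy to finish
        return "wait, then retry"
    return "write to disk 5 only"    # not copied yet: the disk 5 version will win

assert route_write(2, window_min=10, window_max=17) == "write to disk 5 and disk 6"
assert route_write(12, 10, 17) == "wait, then retry"
assert route_write(30, 10, 17) == "write to disk 5 only"
```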
In the above steps, the data storage locations may be described by persistent data.
Through the above embodiments, a RAID-like structure is provided. By utilizing the characteristic that the reconstructed metadata is in fact a subset of the formatted metadata, the memory space and data form of the formatted metadata are reused for reconstruction, so that reconstruction can still be performed online while each stripe is reconstructed only once as far as possible, and the occupation of the persistent memory space of the RAID is reduced. Based on this RAID-like structure, when a member hard disk responds slowly, a cooperative hard disk is provided for it, and the slow-response hard disk and the cooperative hard disk jointly bear the service that the slow-response hard disk should handle. In addition, during the data transfer between the slow-response hard disk and the cooperative hard disk, hard disk replacement by data copying is used instead of the traditional kick-disk reconstruction, which significantly reduces the amount of data the array has to read, shortens the time needed to replace the hard disk, and improves the performance of hard disk replacement.
Further, an embodiment of the present application further provides a processing device for a read-write task based on a hard disk array, including:
the first determining module is used for determining a sub hard disk to be accelerated, of which the data read-write time delay is smaller than a preset value, from a plurality of sub hard disks contained in the current hard disk array;
the second determining module is used for determining a hard disk acceleration array for accelerating the sub hard disk to be accelerated, wherein the hard disk acceleration array is generated according to the sub hard disk to be accelerated and a cooperative hard disk corresponding to the sub hard disk to be accelerated;
and the processing module is used for processing the target read-write task of the sub hard disk to be accelerated by using the hard disk acceleration array.
By the device, the sub hard disk to be accelerated, of which the data read-write time delay is smaller than a preset value, is determined from a plurality of sub hard disks contained in the current hard disk array; determining a hard disk acceleration array for accelerating the sub hard disk to be accelerated, wherein the hard disk acceleration array is generated according to the sub hard disk to be accelerated and a cooperative hard disk corresponding to the sub hard disk to be accelerated; and processing the target read-write task of the sub hard disk to be accelerated by using the hard disk acceleration array. Therefore, aiming at the hard disk array comprising the hard disk with slow response, the technical problem of how to improve the processing performance of the read-write task of the hard disk array is solved, the performance of the hard disk array when processing the read-write task is improved, and the processing efficiency of the read-write task is further improved.
The read-write delay may represent the time taken by a complete read-write process of a set of data; a data writing process generally includes reading the address to be written and writing the data to that address, and if only the reading process is involved, the delay is denoted as the read latency.
In an exemplary embodiment, the first determining module is further configured to implement the following steps: acquiring data read-write time delay of the plurality of sub-hard disks when executing read-write tasks; determining a first sub hard disk with data read-write time delay larger than a first preset value from the plurality of sub hard disks; the first preset value represents the minimum data read-write time delay of the hard disk; determining a second sub hard disk with the maximum data read-write time delay from the plurality of first sub hard disks; and if the maximum data read-write time delay is smaller than a second preset value, determining the second sub hard disk as the sub hard disk to be accelerated, wherein the second preset value represents the historical data read-write time delay when the hard disk fails.
Through the above embodiment, the first preset value is, for example, 0, so that it can be determined that the plurality of sub-hard disks included in the current hard disk array are all normal operation hard disks, and then determine the sub-hard disks to be accelerated with larger data read-write delay in the normal operation process.
In an exemplary embodiment, the above second determining module is further configured to determine the collaborative hard disk by one of the following: determining a hard disk block unit of each sub hard disk in the plurality of sub hard disks, wherein the hard disk block of each sub hard disk at least comprises a data block unit, a check block unit and a redundancy block unit; determining the sum of the spaces of the redundant block units of the plurality of sub-hard disks as the redundant hard disk space of the current hard disk array, and determining the cooperative hard disk according to the redundant hard disk space; determining a peripheral hard disk with the same hard disk type as the child hard disk to be accelerated, and determining the peripheral hard disk as the collaborative hard disk; wherein the peripheral hard disk represents a hard disk other than the plurality of sub hard disks.
The data block unit, for example, represents a unit of storage space for data, and its size may be preset based on requirements, for example, 1M.
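For the first of the two options above, a minimal sketch of totalling and checking the redundant hard disk space is shown below; the 1M block unit is taken from the example value just mentioned, and the per-disk layout structure and the sufficiency check are assumptions.

```python
BLOCK_UNIT = 1 * 1024 * 1024    # 1M per block unit, as in the example above

def redundant_space(sub_disks: list[dict[str, int]]) -> int:
    """Sum the redundancy-block-unit space of every member sub hard disk."""
    return sum(d["redundancy_units"] * BLOCK_UNIT for d in sub_disks)

def can_host_cooperative_disk(sub_disks: list[dict[str, int]], needed_bytes: int) -> bool:
    """The pooled redundant space can stand in as the cooperative hard disk only if
    it is at least as large as the space needed by the sub hard disk to be accelerated."""
    return redundant_space(sub_disks) >= needed_bytes

disks = [{"redundancy_units": 256} for _ in range(5)]      # five member sub hard disks
assert can_host_cooperative_disk(disks, needed_bytes=1024 * BLOCK_UNIT)
```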
In an exemplary embodiment, the processing module is further configured to implement the following steps: determining the data read-write time delay of the sub hard disk to be accelerated and the data read-write time delay of the cooperative hard disk; determining the hard disk with smaller data read-write time delay in the sub hard disk to be accelerated and the collaborative hard disk as a data write hard disk; executing the target read-write task by using the data writing hard disk, and storing first data written by the target read-write task to the data writing hard disk; and determining a storage address of the first data in the data writing hard disk, and storing the storage address to the sub hard disk to be accelerated.
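A minimal sketch of this latency-based routing follows; the millisecond latency inputs, dictionary-backed disks, and return value are assumptions made for the example.

```python
def write_to_faster_disk(data: bytes, address: int,
                         latency5_ms: float, latency6_ms: float,
                         disk5: dict[int, bytes], disk6: dict[int, bytes],
                         storage_record: dict[int, str]) -> str:
    """Store the data on whichever member disk currently shows the smaller
    read-write delay, then record where the data went."""
    target, name = (disk5, "disk5") if latency5_ms <= latency6_ms else (disk6, "disk6")
    target[address] = data
    storage_record[address] = name   # the storage address is then kept with the array
    return name

record: dict[int, str] = {}
chosen = write_to_faster_disk(b"payload", 42, latency5_ms=8.0, latency6_ms=2.5,
                              disk5={}, disk6={}, storage_record=record)
assert chosen == "disk6" and record[42] == "disk6"
```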
In an exemplary embodiment, further, the processing module further includes a record generating unit, configured to implement the following steps: executing the target read-write task by using the sub hard disk to be accelerated; storing the second data written by the target read-write task to the sub hard disk to be accelerated; the second data written by the target read-write task correspondingly has a first storage address in the sub hard disk to be accelerated; and generating a storage record of the second data according to the first storage address.
By setting the sub-hard disk to be accelerated as the initial hard disk for data writing, the storage address for data writing can be kept unchanged when the target read-write task of the sub-hard disk to be accelerated is processed by using the hard disk acceleration array, so that the probability of data loss is reduced.
In an exemplary embodiment, the record generating unit is further configured to obtain a hard disk number of the child hard disk to be accelerated in the current hard disk array; binding the hard disk number and the first storage address, and determining the binding result as a storage record of the second data; and storing the storage record into a storage space of the hard disk acceleration array.
In an exemplary embodiment, the record generating unit is further configured to perform a persistence operation on the storage record to obtain persistence data; determining storage spaces set for a plurality of controllers under the condition that the hard disk acceleration array corresponds to the plurality of controllers, wherein the storage spaces comprise a main storage space and a backup storage space; and respectively storing the persistent data into the main storage space and the backup storage space.
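The dual-controller persistence step might look like the sketch below, which serialises the storage record once and writes identical copies to a primary and a backup location; the JSON encoding and temporary file paths are assumptions made for illustration.

```python
import json
import tempfile
from pathlib import Path

def persist_storage_record(record: dict[int, str], primary: Path, backup: Path) -> None:
    """Serialise the storage record and keep one copy in the main storage space
    and one in the backup storage space."""
    payload = json.dumps(record, sort_keys=True)
    for space in (primary, backup):
        space.write_text(payload)

tmp = Path(tempfile.mkdtemp())
persist_storage_record({0: "disk5", 7: "disk6"},
                       primary=tmp / "record_primary.json",
                       backup=tmp / "record_backup.json")
assert (tmp / "record_primary.json").read_text() == (tmp / "record_backup.json").read_text()
```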
In an exemplary embodiment, the record generating unit is further configured to store the second data written by the target read-write task to the sub-hard disk to be accelerated, and further perform the following steps: acquiring a first transfer instruction for indicating to transfer the data of the sub hard disk to be accelerated to the collaborative hard disk under the condition that the monitored data read-write time delay of the sub hard disk to be accelerated is larger than a third preset value; the third preset value represents the average value of data read-write time delays of other sub-hard disks except the sub-hard disk to be accelerated in the plurality of sub-hard disks; and transferring the data of the child hard disk to be accelerated according to the first transfer instruction.
Optionally, under the condition that the monitored data read-write time delay of the sub hard disk to be accelerated is smaller than a third preset value, a first transfer instruction for indicating to transfer the data of the sub hard disk to be accelerated to the collaborative hard disk is not acquired, and the second data written by the target read-write task is continuously stored to the sub hard disk to be accelerated.
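A hedged sketch of this monitoring rule follows: the delay of the sub hard disk to be accelerated is compared with the mean delay of the remaining members (the third preset value), and only then is a transfer instruction raised. The dictionary input and the shape of the returned instruction are assumptions.

```python
from statistics import mean
from typing import Optional

def maybe_first_transfer_instruction(delays_ms: dict[str, float],
                                     slow_disk: str) -> Optional[dict]:
    """Return a first transfer instruction when the slow disk's read-write delay
    exceeds the average delay of the other member sub hard disks."""
    third_preset = mean(v for k, v in delays_ms.items() if k != slow_disk)
    if delays_ms[slow_disk] > third_preset:
        return {"action": "transfer", "source": slow_disk, "target": "cooperative_disk"}
    return None          # otherwise keep writing to the sub hard disk to be accelerated

instr = maybe_first_transfer_instruction(
    {"disk1": 2.0, "disk2": 2.2, "disk3": 1.9, "disk4": 2.1, "disk5": 9.5}, "disk5")
assert instr is not None and instr["source"] == "disk5"
```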
In an exemplary embodiment, the record generating unit is further configured to: sending prompt information for prompting the data of the child hard disk to be accelerated to be transferred to the collaborative hard disk to a target object; if the response information sent by the target object is received, and the response information contains a second transfer instruction for indicating to transfer the data of the sub hard disk to be accelerated to the collaborative hard disk, transferring the data of the sub hard disk to be accelerated according to the first transfer instruction includes: and transferring the data of the child hard disk to be accelerated according to the second transfer instruction.
Through the embodiment, when the hard disk with slow response cannot be recovered for a long time, a prompt can be sent to a user according to the degree of slow response, the user can confirm that the hard disk with slow response is replaced by the cooperative hard disk according to requirements, and the data of the sub hard disk to be accelerated is passively transferred through the user instruction, so that the credibility of the data transfer process is improved.
In an exemplary embodiment, the record generating unit is further configured to: determining a data transmission channel of the sub hard disk to be accelerated for transmitting the second data; if the response time of the sub hard disk to be accelerated is determined to be larger than a fifth preset value under the condition that the data transmission rate of the data transmission channel is monitored to be smaller than the fourth preset value, generating a third transfer instruction for transferring the data of the sub hard disk to be accelerated to the collaborative hard disk; the fourth preset value represents the average value of the data transmission rates of other sub-hard disks except the sub-hard disk to be accelerated in the plurality of sub-hard disks, and the fifth preset value represents the average value of the response times of other sub-hard disks except the sub-hard disk to be accelerated in the plurality of sub-hard disks; transferring the data of the child hard disk to be accelerated according to the first transfer instruction comprises the following steps: and transferring the data of the child hard disk to be accelerated according to the third transfer instruction.
Optionally, if the data transmission rate of the data transmission channel is greater than a fourth preset value, a first transfer instruction for indicating to transfer the data of the child hard disk to be accelerated to the collaborative hard disk is not generated; and under the condition that the response time of the sub hard disk to be accelerated is smaller than a fifth preset value, a third transfer instruction for transferring the data of the sub hard disk to be accelerated to the collaborative hard disk is not generated.
Through the embodiment, when the hard disk with slow response cannot be recovered for a long time, the data of the child hard disk to be accelerated can be actively transferred, and the transfer efficiency of the data transfer process is improved.
In an exemplary embodiment, the record generating unit is further configured to determine the data transmission rate at which the data transmission channel is monitored by: under the condition that response information generated when the sub hard disk to be accelerated executes the target read-write task is monitored, determining the execution time of the sub hard disk to be accelerated for executing the target read-write task; determining the total data quantity transmitted by the sub hard disk to be accelerated through the data transmission channel within the execution time; and determining the ratio of the total data amount to the execution time as the data transmission rate.
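Read as a rate, this is the quotient of the total data amount and the execution time; the sketch below computes it that way and compares it with the fourth preset value. Units and variable names are assumptions made for the example.

```python
def transmission_rate_mib_s(total_bytes: int, execution_time_s: float) -> float:
    """Data transmission rate of the monitored channel for one executed task."""
    return total_bytes / execution_time_s / (1024 * 1024)

def channel_is_slow(total_bytes: int, execution_time_s: float,
                    fourth_preset_mib_s: float) -> bool:
    return transmission_rate_mib_s(total_bytes, execution_time_s) < fourth_preset_mib_s

# 64 MiB moved in 4 s is 16 MiB/s, below an assumed 100 MiB/s fourth preset value.
assert channel_is_slow(64 * 1024 * 1024, execution_time_s=4.0, fourth_preset_mib_s=100.0)
```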
Further, in an exemplary embodiment, the record generating unit is further configured to: under the condition that the target read-write task is determined to be reading data, acquiring the second data from the sub hard disk to be accelerated according to the first storage address; writing the second data into the cooperative hard disk, and determining a second storage address of the second data in the cooperative hard disk; generating a storage record of the second data according to the second storage address; storing third data written by the target read-write task to the collaborative hard disk under the condition that the target read-write task is determined to be writing data; the third data written by the target read-write task correspondingly have a third storage address in the collaborative hard disk; determining fourth data written into the child hard disk to be accelerated before the writing time of the third data, and writing the fourth data into the collaborative hard disk; and determining a fourth storage address of the fourth data in the collaborative hard disk, and generating storage records of the third data and the fourth data according to the third storage address and the fourth storage address.
Optionally, in the above embodiment, the record generating unit is further configured to: acquiring a hard disk number of the child hard disk to be accelerated in the current hard disk array; binding the hard disk number and the first storage address, and determining the binding result as a storage record of the second data; and storing the storage record into a storage space of the hard disk acceleration array.
Optionally, in the above embodiment, the record generating unit is further configured to: acquiring a hard disk number of the child hard disk to be accelerated in the current hard disk array; binding the hard disk number and the fourth storage address, and determining the binding result as a storage record of the fourth data; and storing the storage record into a storage space of the hard disk acceleration array.
In an exemplary embodiment, the record generating unit is further configured to: determining a first storage area of the child hard disk to be accelerated, wherein the first storage area represents a storage space corresponding to the first storage address; determining a second non-storage area of the sub hard disk to be accelerated, wherein the second non-storage area represents a storage space corresponding to other storage addresses except the first storage address in the storage record; and transferring the data in the storage area of the sub hard disk to be accelerated according to the first transfer instruction under the condition that the effective data amount of the non-storage area is smaller than a sixth preset value.
Alternatively, in the above-described embodiment, the effective data amount may be expressed as the amount of non-empty data other than the inherent information of the storage area itself, where the inherent information is the system information of the storage area.
The sixth preset value may be preset based on requirements, for example, 1 byte, where the sixth preset value indicates a value when valid data is not stored.
Optionally, under the condition that the effective data amount of the non-storage area is determined to be larger than a sixth preset value, transferring the data in the storage area of the sub hard disk to be accelerated is not performed according to the first transfer instruction.
In an exemplary embodiment, the record generating unit is further configured to: under the condition that the child hard disk to be accelerated is determined to have faults, determining fifth data lost when the child hard disk to be accelerated has faults according to the data of the other child hard disks; writing the fifth data into the cooperative hard disk, and determining a fifth storage address of the fifth data in the cooperative hard disk; and generating a storage record of the fifth data according to the fifth storage address.
In an exemplary embodiment, the record generating unit is further configured to, after transferring the data of the child hard disk to be accelerated according to the first transfer instruction, perform the following: under the condition that the target read-write task is determined to be executed, if the storage record is determined to be used for indicating that all data written by the target read-write task are stored in the collaborative hard disk, determining that the data of the sub hard disk to be accelerated are transferred to the collaborative hard disk; replacing the sub hard disk to be accelerated contained in the current hard disk array with the collaborative hard disk to obtain an updated target hard disk array; and processing the array read-write task of the current hard disk array by using the updated target hard disk array.
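The hand-off at the end of the transfer can be pictured as a simple membership substitution once the storage record no longer references the sub hard disk to be accelerated; the list-of-names representation of the array is an assumption made for the example.

```python
def finish_replacement(members: list[str], storage_record: dict[int, str],
                       slow_disk: str, cooperative_disk: str) -> list[str]:
    """Swap the cooperative hard disk in for the slow member once every recorded
    storage address already points at the cooperative hard disk."""
    if any(owner == slow_disk for owner in storage_record.values()):
        return members                                   # transfer not yet complete
    return [cooperative_disk if m == slow_disk else m for m in members]

array_a = ["disk1", "disk2", "disk3", "disk4", "disk5"]
updated = finish_replacement(array_a, {0: "disk6", 9: "disk6"}, "disk5", "disk6")
assert updated == ["disk1", "disk2", "disk3", "disk4", "disk6"]
```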
In an exemplary embodiment, the record generating unit is further configured to implement the following technical scheme: generating a fourth transfer instruction for transferring the data of the collaborative hard disk to the sub hard disk to be accelerated under the condition that the monitored response time of the sub hard disk to be accelerated is smaller than a seventh preset value; the seventh preset value represents the average value of response time of other sub-hard disks except the sub-hard disk to be accelerated in the plurality of sub-hard disks; and transferring the data of the collaborative hard disk according to the fourth transfer instruction.
In an exemplary embodiment, further, the record generating unit is further configured to: determining a second storage area of the collaborative hard disk, wherein the second storage area represents a storage space of data written into the collaborative hard disk; traversing the data blocks in the second storage area, and writing the data of the data blocks in the second storage area into the sub hard disk to be accelerated through a data stripe window; the data blocks in the second storage area are correspondingly provided with sixth storage addresses in the child hard disk to be accelerated; and generating a storage record of the data block in the second storage area according to the sixth storage address.
In an exemplary embodiment, the record generating unit is further configured to further traverse the data partition located in the second storage area; determining a plurality of groups of data blocks in the second storage area according to the traversing result, wherein a sequence exists among the plurality of groups of data blocks; and writing all data corresponding to the plurality of groups of data blocks into the child hard disk to be accelerated through the data stripe window according to the sequence.
In an exemplary embodiment, the record generating unit is further configured to describe an implementation process of writing all data corresponding to the plurality of groups of data blocks into the child hard disk to be accelerated through the data stripe window according to the sequence by: for each group of data blocks in the plurality of groups of data blocks, determining a group of data from the 1 st data block of each group of data blocks to the i th data block of each group of data blocks, wherein i represents the number of data blocks of each group of data blocks, i is a positive integer, and i is smaller than or equal to the number of data blocks allowed to pass through the data stripe window once; determining the writing sequence corresponding to each group of data blocks based on the sequence; and writing one group of data of each group of data blocks into the sub-hard disk to be accelerated through the data stripe window according to the writing sequence until all the data are written into the sub-hard disk to be accelerated through the data stripe window.
Optionally, in an embodiment, the record generating unit is further configured to determine, when receiving sixth data written by the target read-write task in a process of transferring data of the collaborative hard disk according to the fourth transfer instruction, a storage address allocated to the collaborative hard disk for the sixth data; wherein the sixth data has at least one data block in the allocated memory address; writing the sixth data into the sub-hard disk to be accelerated and the collaborative hard disk under the condition that the block serial number of the at least one data block in the collaborative hard disk is smaller than the window minimum value of a data transfer window; suspending writing the sixth data into the sub-hard disk to be accelerated or the collaborative hard disk under the condition that the block serial number of the at least one data block in the collaborative hard disk is determined to be larger than the minimum window value of the data transfer window and smaller than the maximum window value of the data transfer window; writing the sixth data into the sub-hard disk to be accelerated under the condition that the block serial number of the at least one data block in the cooperative hard disk is determined to be larger than the maximum window value of a data transfer window; the window value of the data transfer window represents the block sequence number in the data block being transferred in the collaborative hard disk, the window minimum value represents the minimum block sequence number in the data block being transferred in the collaborative hard disk, and the window maximum value represents the maximum block sequence number in the data block being transferred in the collaborative hard disk.
Embodiments of the present application also provide a computer readable storage medium having stored therein a computer program/instruction, wherein the computer program/instruction is arranged to perform the steps of any of the method embodiments described above when run.
In one exemplary embodiment, the computer readable storage medium may include, but is not limited to: a usb disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a removable hard disk, a magnetic disk, or an optical disk, or other various media capable of storing a computer program.
In an exemplary embodiment, the computer program/instructions contain program code for performing the method shown in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication section 809, and/or installed from the removable media 811. When executed by the central processor 801, the computer program performs various functions provided by embodiments of the present application.
The foregoing embodiment numbers of the present application are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
Fig. 8 schematically shows a block diagram of a computer system of an electronic device for implementing an embodiment of the application.
It should be noted that, the computer system 800 of the electronic device shown in fig. 8 is only an example, and should not impose any limitation on the functions and the application scope of the embodiments of the present application.
As shown in fig. 8, the computer system 800 includes a central processing unit 801 (Central Processing Unit, CPU) which can execute various appropriate actions and processes according to a program stored in a Read-Only Memory 802 (ROM) or a program loaded from a storage section 808 into a random access Memory 803 (Random Access Memory, RAM). In the random access memory 803, various programs and data required for system operation are also stored. The central processing unit 801, the read only memory 802, and the random access memory 803 are connected to each other through a bus 804. An Input/Output interface 805 (i.e., an I/O interface) is also connected to the bus 804.
The following components are connected to the input/output interface 805: an input portion 806 including a keyboard, mouse, etc.; an output portion 807 including a Cathode Ray Tube (CRT), a liquid crystal display (Liquid Crystal Display, LCD), and the like, and a speaker, and the like; a storage section 808 including a hard disk or the like; and a communication section 809 including a network interface card such as a local area network card, modem, or the like. The communication section 809 performs communication processing via a network such as the internet. The drive 810 is also connected to the input/output interface 805 as needed. A removable medium 811 such as a hard disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is installed as needed on the drive 810, so that a computer program read out therefrom is installed into the storage section 808 as needed.
In particular, the processes described in the various method flowcharts may be implemented as computer software programs according to embodiments of the application. For example, embodiments of the present application include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network via the communication section 809, and/or installed from the removable media 811. The computer programs, when executed by the central processor 801, perform the various functions defined in the system of the present application.
According to still another aspect of the embodiment of the present application, there is also provided an electronic device for implementing the processing method of the read-write task based on the hard disk array. The electronic device of the present embodiment is shown in fig. 9, and comprises a memory 902 and a processor 904, the memory 902 having stored therein a computer program, the processor 904 being arranged to perform the steps of any of the method embodiments described above by means of the computer program.
Alternatively, in this embodiment, the electronic device may be located in at least one network device of a plurality of network devices of the computer network.
Alternatively, it will be appreciated by those skilled in the art that the configuration shown in fig. 9 is merely illustrative, and the electronic device may be a service control device. Fig. 9 is not limited to the structure of the electronic device described above. For example, the electronic device may also include more or fewer components (e.g., network interfaces, etc.) than shown in FIG. 9, or have a different configuration than shown in FIG. 9.
The memory 902 may be used to store software programs and modules, such as program instructions/modules corresponding to the processing method and apparatus for a read-write task based on a hard disk array in the embodiment of the present application, and the processor 904 executes the software programs and modules stored in the memory 902 to perform various functional applications and data processing, that is, implement the processing method for the read-write task based on the hard disk array. The memory 902 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 902 may further include memory remotely located relative to the processor 904, which may be connected to the terminal via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof. The memory 902 may specifically, but is not limited to, store information such as a log containing sensitive data. As an example, as shown in fig. 9, the memory 902 may include, but is not limited to, an OCS management module in the storage system. In addition, other module units in the above storage system may also be included, which are not described in detail in this example.
Optionally, the transmission device 906 is used to receive or transmit data via a network. Specific examples of the network described above may include wired networks and wireless networks. In one example, the transmission means 906 includes a network adapter (Network Interface Controller, NIC) that can connect to other network devices and routers via a network cable to communicate with the internet or a local area network. In one example, the transmission device 906 is a Radio Frequency (RF) module for communicating wirelessly with the internet.
In addition, the electronic device further includes: a display 908; and a connection bus 910 for connecting the respective module parts in the above-described electronic device.
Specific examples in this embodiment may refer to the examples described in the foregoing embodiments and the exemplary implementation, and this embodiment is not described herein.
It will be appreciated by those skilled in the art that the modules or steps of the application described above may be implemented in a general purpose computing device, they may be concentrated on a single computing device, or distributed across a network of computing devices, they may be implemented in program code executable by computing devices, so that they may be stored in a storage device for execution by computing devices, and in some cases, the steps shown or described may be performed in a different order than that shown or described herein, or they may be separately fabricated into individual integrated circuit modules, or multiple modules or steps of them may be fabricated into a single integrated circuit module. Thus, the present application is not limited to any specific combination of hardware and software.
The above description is only of the preferred embodiments of the present application and is not intended to limit the present application, but various modifications and variations can be made to the present application by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the principle of the present application should be included in the protection scope of the present application.

Claims (22)

1. A processing method of read-write task based on hard disk array is characterized by comprising the following steps:
determining a sub hard disk to be accelerated, of which the data read-write time delay is smaller than a preset value, from a plurality of sub hard disks contained in a current hard disk array;
determining a hard disk acceleration array for accelerating the sub hard disk to be accelerated, wherein the hard disk acceleration array is generated according to the sub hard disk to be accelerated and a cooperative hard disk corresponding to the sub hard disk to be accelerated;
and processing the target read-write task of the sub hard disk to be accelerated by using the hard disk acceleration array.
2. The method of claim 1, wherein determining the sub hard disk to be accelerated with the data read-write delay less than the preset value from the plurality of sub hard disks included in the current hard disk array comprises:
acquiring data read-write time delay of the plurality of sub-hard disks when executing read-write tasks;
Determining a first sub hard disk with data read-write time delay larger than a first preset value from the plurality of sub hard disks; the first preset value represents the minimum data read-write time delay of the hard disk;
determining a second sub hard disk with the maximum data read-write time delay from the plurality of first sub hard disks;
and if the maximum data read-write time delay is smaller than a second preset value, determining the second sub hard disk as the sub hard disk to be accelerated, wherein the second preset value represents the historical data read-write time delay when the hard disk fails.
3. The method of claim 1, wherein prior to determining a hard disk acceleration array for accelerating the sub-hard disk to be accelerated, determining the cooperating hard disk is performed by one of:
determining a hard disk block unit of each sub hard disk in the plurality of sub hard disks, wherein the hard disk block of each sub hard disk at least comprises a data block unit, a check block unit and a redundancy block unit; determining the sum of the spaces of the redundant block units of the plurality of sub-hard disks as the redundant hard disk space of the current hard disk array, and determining the cooperative hard disk according to the redundant hard disk space;
Determining a peripheral hard disk with the same hard disk type as the child hard disk to be accelerated, and determining the peripheral hard disk as the collaborative hard disk; wherein the peripheral hard disk represents a hard disk other than the plurality of sub hard disks.
4. The method of claim 1, wherein processing the target read-write task of the child hard disk to be accelerated using the hard disk acceleration array comprises:
determining the data read-write time delay of the sub hard disk to be accelerated and the data read-write time delay of the cooperative hard disk;
determining the hard disk with smaller data read-write time delay in the sub hard disk to be accelerated and the collaborative hard disk as a data write hard disk;
executing the target read-write task by using the data writing hard disk, and storing first data written by the target read-write task to the data writing hard disk;
and determining a storage address of the first data in the data writing hard disk, and storing the storage address to the sub hard disk to be accelerated.
5. The method of claim 1, wherein processing the target read-write task of the child hard disk to be accelerated using the hard disk acceleration array comprises:
Executing the target read-write task by using the sub hard disk to be accelerated;
storing the second data written by the target read-write task to the sub hard disk to be accelerated; the second data written by the target read-write task correspondingly has a first storage address in the sub hard disk to be accelerated;
and generating a storage record of the second data according to the first storage address.
6. The method of claim 5, wherein generating a storage record of the second data from the first storage address comprises:
acquiring a hard disk number of the child hard disk to be accelerated in the current hard disk array;
binding the hard disk number and the first storage address, and determining the binding result as a storage record of the second data;
and storing the storage record into a storage space of the hard disk acceleration array.
7. The method of claim 6, wherein storing the storage record to the storage space of the hard disk acceleration array comprises:
performing persistence operation on the storage record to obtain persistence data;
determining storage spaces set for a plurality of controllers under the condition that the hard disk acceleration array corresponds to the plurality of controllers, wherein the storage spaces comprise a main storage space and a backup storage space;
And respectively storing the persistent data into the main storage space and the backup storage space.
8. The method of claim 5, wherein after storing the second data written by the target read-write task to the child hard disk to be accelerated, the method further comprises:
acquiring a first transfer instruction for indicating to transfer the data of the sub hard disk to be accelerated to the collaborative hard disk under the condition that the monitored data read-write time delay of the sub hard disk to be accelerated is larger than a third preset value; the third preset value represents the average value of data read-write time delays of other sub-hard disks except the sub-hard disk to be accelerated in the plurality of sub-hard disks;
and transferring the data of the child hard disk to be accelerated according to the first transfer instruction.
9. The method of claim 8, wherein obtaining a first transfer instruction for instructing transfer of data of the child hard disk to be accelerated to the cooperating hard disk comprises:
sending prompt information for prompting the data of the child hard disk to be accelerated to be transferred to the collaborative hard disk to a target object;
if the response information sent by the target object is received, and the response information contains a second transfer instruction for indicating to transfer the data of the sub hard disk to be accelerated to the collaborative hard disk, transferring the data of the sub hard disk to be accelerated according to the first transfer instruction includes: and transferring the data of the child hard disk to be accelerated according to the second transfer instruction.
10. The method of claim 8, wherein obtaining a first transfer instruction for instructing transfer of data of the child hard disk to be accelerated to the cooperating hard disk comprises:
determining a data transmission channel of the sub hard disk to be accelerated for transmitting the second data;
if the response time of the sub hard disk to be accelerated is determined to be larger than a fifth preset value under the condition that the data transmission rate of the data transmission channel is monitored to be smaller than the fourth preset value, generating a third transfer instruction for transferring the data of the sub hard disk to be accelerated to the collaborative hard disk;
the fourth preset value represents the average value of the data transmission rates of other sub-hard disks except the sub-hard disk to be accelerated in the plurality of sub-hard disks, and the fifth preset value represents the average value of the response times of other sub-hard disks except the sub-hard disk to be accelerated in the plurality of sub-hard disks;
transferring the data of the child hard disk to be accelerated according to the first transfer instruction comprises the following steps: and transferring the data of the child hard disk to be accelerated according to the third transfer instruction.
11. The method of claim 10, wherein the data transmission rate at which the data transmission channel is monitored is determined by:
Under the condition that response information generated when the sub hard disk to be accelerated executes the target read-write task is monitored, determining the execution time of the sub hard disk to be accelerated for executing the target read-write task;
determining the total data quantity transmitted by the sub hard disk to be accelerated through the data transmission channel in the execution time;
and determining the ratio of the total data amount to the execution time as the data transmission rate.
12. The method of claim 8, wherein transferring the data of the child hard disk to be accelerated according to the first transfer instruction comprises:
under the condition that the target read-write task is determined to be reading data, acquiring the second data from the sub hard disk to be accelerated according to the first storage address;
writing the second data into the cooperative hard disk, and determining a second storage address of the second data in the cooperative hard disk;
generating a storage record of the second data according to the second storage address;
storing third data written by the target read-write task to the collaborative hard disk under the condition that the target read-write task is determined to be the written data; the third data written by the target read-write task correspondingly have a third storage address in the collaborative hard disk;
Determining fourth data written into the child hard disk to be accelerated before the writing time of the third data, and writing the fourth data into the collaborative hard disk;
and determining a fourth storage address of the fourth data in the collaborative hard disk, and generating storage records of the third data and the fourth data according to the third storage address and the fourth storage address.
13. The method of claim 8, wherein transferring the data of the child hard disk to be accelerated according to the first transfer instruction comprises:
determining a first storage area of the child hard disk to be accelerated, wherein the first storage area represents a storage space corresponding to the first storage address;
determining a second non-storage area of the sub hard disk to be accelerated, wherein the second non-storage area represents a storage space corresponding to other storage addresses except the first storage address in the storage record;
and transferring the data in the storage area of the sub hard disk to be accelerated according to the first transfer instruction under the condition that the effective data amount of the non-storage area is smaller than a sixth preset value.
14. The method of claim 8, wherein transferring the data of the child hard disk to be accelerated according to the first transfer instruction further comprises:
Under the condition that the child hard disk to be accelerated is determined to have faults, determining fifth data lost when the child hard disk to be accelerated has faults according to the data of the other child hard disks;
writing the fifth data into the cooperative hard disk, and determining a fifth storage address of the fifth data in the cooperative hard disk;
and generating a storage record of the fifth data according to the fifth storage address.
15. The method according to any one of claims 8 to 14, wherein after transferring the data of the child hard disk to be accelerated according to the first transfer instruction, the method further comprises:
under the condition that the target read-write task is determined to be executed, if the storage record is determined to be used for indicating that all data written by the target read-write task are stored in the collaborative hard disk, determining that the data of the sub hard disk to be accelerated are transferred to the collaborative hard disk;
replacing the sub hard disk to be accelerated contained in the current hard disk array with the collaborative hard disk to obtain an updated target hard disk array;
and processing the array read-write task of the current hard disk array by using the updated target hard disk array.
16. The method of claim 5, wherein after storing the second data written by the target read-write task to the child hard disk to be accelerated, the method further comprises:
generating a fourth transfer instruction for transferring the data of the collaborative hard disk to the sub hard disk to be accelerated under the condition that the monitored response time of the sub hard disk to be accelerated is smaller than a seventh preset value;
the seventh preset value represents the average value of response time of other sub-hard disks except the sub-hard disk to be accelerated in the plurality of sub-hard disks;
and transferring the data of the collaborative hard disk according to the fourth transfer instruction.
17. The method of claim 16, wherein transferring the data of the cooperating hard disk in accordance with the fourth transfer instruction comprises:
determining a second storage area of the collaborative hard disk, wherein the second storage area represents a storage space of data written into the collaborative hard disk;
traversing the data blocks in the second storage area, and writing the data of the data blocks in the second storage area into the sub hard disk to be accelerated through a data stripe window; the data blocks in the second storage area are correspondingly provided with sixth storage addresses in the child hard disk to be accelerated;
And generating a storage record of the data block in the second storage area according to the sixth storage address.
18. The method of claim 17, wherein traversing the data chunks in the second storage area to write the data of the data chunks located in the second storage area to the child hard disk to be accelerated through a data stripe window comprises:
traversing the data blocks in the second storage area;
determining a plurality of groups of data blocks in the second storage area according to the traversing result, wherein a sequence exists among the plurality of groups of data blocks;
and writing all data corresponding to the plurality of groups of data blocks into the child hard disk to be accelerated through the data stripe window according to the sequence.
19. The method of claim 18, wherein writing all data corresponding to the plurality of groups of data blocks into the sub hard disk to be accelerated through the data stripe window according to the sequence comprises:
for each group of data blocks among the plurality of groups of data blocks, determining a group of data consisting of the 1st to the i-th data blocks of the group, wherein i represents the number of data blocks in the group, i is a positive integer, and i is smaller than or equal to the number of data blocks allowed to pass through the data stripe window at one time;
determining a writing sequence corresponding to each group of data blocks based on the sequence;
and writing the group of data of each group of data blocks into the sub hard disk to be accelerated through the data stripe window according to the writing sequence, until all the data has been written into the sub hard disk to be accelerated through the data stripe window.
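The group-by-group write of claim 19 can be sketched as follows (Python; the data stripe window is modelled as a batch write of at most window_capacity blocks, and all names are assumptions):

```python
# Illustrative sketch only; the data stripe window is modelled as a batch write of at
# most `window_capacity` blocks, and all names are assumptions.
def write_groups_through_window(groups, block_data, window_capacity, write_batch):
    """Claim 19 sketch: for each group, take its 1st to i-th data blocks (i being the group
    size, never more than the window admits at once) and write that batch to the sub hard
    disk to be accelerated, group by group, in the given order."""
    for group in groups:                           # groups are processed in their sequence
        assert len(group) <= window_capacity       # i <= blocks allowed through the window at once
        batch = [block_data[block_id] for block_id in group]
        write_batch(batch)                         # hypothetical stripe-window batch write
```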
20. A hard disk array, comprising:
a sub hard disk to be accelerated, wherein the sub hard disk to be accelerated corresponds to a collaborative hard disk;
wherein the sub hard disk to be accelerated is determined from a plurality of sub hard disks contained in the current hard disk array, the data read-write time delay of the sub hard disk to be accelerated being smaller than a preset value; and the hard disk array is used for processing a target read-write task of the sub hard disk to be accelerated.
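A minimal data-structure sketch corresponding to claim 20 (Python; field names are assumptions, and the comparison in the selection rule simply follows the wording of the claim):

```python
# Illustrative data-structure sketch only; field names and the selection rule follow the
# wording of claim 20 but are otherwise assumptions.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class SubHardDisk:
    disk_id: str
    read_write_delay_ms: float
    collaborative_id: Optional[str] = None     # set for the sub hard disk to be accelerated

@dataclass
class HardDiskArray:
    members: List[SubHardDisk] = field(default_factory=list)

    def sub_disk_to_accelerate(self, preset_delay_ms: float) -> Optional[SubHardDisk]:
        # the sub hard disk to be accelerated is determined from the array's sub hard disks
        # by comparing its data read-write time delay with the preset value (claim 20)
        for disk in self.members:
            if disk.read_write_delay_ms < preset_delay_ms:
                return disk
        return None
```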
21. A computer readable storage medium, characterized in that a computer program is stored in the computer readable storage medium, wherein the computer program, when executed by a processor, implements the steps of the method of any one of claims 1 to 19.
22. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the steps of the method of any one of claims 1 to 19 when executing the computer program.
CN202310951712.0A 2023-07-31 2023-07-31 Processing method of read-write task based on hard disk array and electronic equipment Active CN116661708B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310951712.0A CN116661708B (en) 2023-07-31 2023-07-31 Processing method of read-write task based on hard disk array and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310951712.0A CN116661708B (en) 2023-07-31 2023-07-31 Processing method of read-write task based on hard disk array and electronic equipment

Publications (2)

Publication Number Publication Date
CN116661708A true CN116661708A (en) 2023-08-29
CN116661708B (en) 2023-11-03

Family

ID=87710179

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310951712.0A Active CN116661708B (en) 2023-07-31 2023-07-31 Processing method of read-write task based on hard disk array and electronic equipment

Country Status (1)

Country Link
CN (1) CN116661708B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8049980B1 (en) * 2008-04-18 2011-11-01 Network Appliance, Inc. Partial disk failures and improved storage resiliency
CN103699340A (en) * 2013-12-16 2014-04-02 华为数字技术(苏州)有限公司 Request processing method and equipment
CN105094685A (en) * 2014-04-29 2015-11-25 国际商业机器公司 Method and device for storage control
CN106610788A (en) * 2015-10-26 2017-05-03 华为技术有限公司 Hard disk array control method and device
US10372354B1 (en) * 2017-06-15 2019-08-06 Nutanix, Inc. High reliability in storage array systems
CN112286453A (en) * 2020-10-26 2021-01-29 苏州浪潮智能科技有限公司 Disk array data reading and writing method, device and storage medium
CN113504876A (en) * 2021-07-09 2021-10-15 杭州华澜微电子股份有限公司 Data writing method and device, data reading method and device, and electronic device

Also Published As

Publication number Publication date
CN116661708B (en) 2023-11-03

Similar Documents

Publication Publication Date Title
CN101655813B (en) Storage system
EP2254036B1 (en) Storage apparatus and data copy method
US20070276884A1 (en) Method and apparatus for managing backup data and journal
US20050071393A1 (en) Data storage subsystem
CN110515724B (en) Resource allocation method, device, monitor and machine-readable storage medium
US20070277012A1 (en) Method and apparatus for managing backup data and journal
US8626722B2 (en) Consolidating session information for a cluster of sessions in a coupled session environment
CN111274252B (en) Block chain data uplink method and device, storage medium and server
CN110807064B (en) Data recovery device in RAC distributed database cluster system
CN107809326B (en) Data consistency processing method, device and equipment
US10346066B2 (en) Efficient erasure coding of large data objects
WO2009071573A1 (en) Method of determining whether to use a full volume or repository for a logical copy backup space and apparatus therefor
US10929043B2 (en) Space reservation for distributed storage systems
CN108205573B (en) Data distributed storage method and system
CN109582213A (en) Data reconstruction method and device, data-storage system
US8504786B2 (en) Method and apparatus for backing up storage system data
CN115167782B (en) Temporary storage copy management method, system, equipment and storage medium
CN110121694B (en) Log management method, server and database system
CN111291062A (en) Data synchronous writing method and device, computer equipment and storage medium
CN116661708B (en) Processing method of read-write task based on hard disk array and electronic equipment
US11340826B2 (en) Systems and methods for strong write consistency when replicating data
CN115344214A (en) Data reading and writing method, device, server and computer readable storage medium
CN115470041A (en) Data disaster recovery management method and device
CN109213621B (en) Data processing method and data processing equipment
KR20150083276A (en) Remote Memory Data Management Method and System for Data Processing Based on Mass Memory

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant