WO2018103533A1 - Fault handling method, apparatus, and device - Google Patents

Fault handling method, apparatus, and device

Info

Publication number
WO2018103533A1
Authority
WO
WIPO (PCT)
Prior art keywords
hard disk
disk
raid
hot spare
idle
Prior art date
Application number
PCT/CN2017/112358
Other languages
English (en)
French (fr)
Inventor
李思聪
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Publication of WO2018103533A1
Priority to US16/362,196 (published as US20190220379A1)

Classifications

    • G06F Electric digital data processing (G Physics; G06 Computing, calculating or counting)
    • G06F11/10 Adding special bits or symbols to the coded information, e.g. parity check, casting out 9's or 11's
    • G06F11/1076 Parity data used in redundant arrays of independent storages, e.g. in RAID systems
    • G06F11/1088 Reconstruction on already foreseen single or plurality of spare disks
    • G06F11/16 Error detection or correction of the data by redundancy in hardware
    • G06F11/1612 Error detection by comparing the output signals of redundant hardware where the redundant component is persistent storage
    • G06F11/2094 Redundant storage or storage space
    • G06F3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0614 Improving the reliability of storage systems
    • G06F3/0631 Configuration or reconfiguration of storage systems by allocating resources to storage systems
    • G06F3/0689 Disk arrays, e.g. RAID, JBOD
    • G06F2201/82 Solving problems relating to consistency

Definitions

  • the management of a RAID group is usually implemented by a RAID controller.
  • the configuration policies of the RAID group are mainly divided into RAID0, RAID1, RAID2, RAID3, RAID4, RAID5, RAID6, RAID7, RAID10, and RAID50.
  • the policy needs to be configured as N+M mode. Both N and M are positive integers greater than 1, N is the number of data disks, and M is the number of parity disks.
  • the hot spare disk is also configured in the RAID group.
  • the RAID controller can restore the data on the failed hard disk onto the hot spare disk according to the parity data in the parity disk and the data in the data disks, thereby improving system reliability.
  • the hot spare resource pool may be composed of at least one of a logical hard disk and a physical hard disk.
  • the storage node may also include a RAID controller, which uses a plurality of hard disks in the storage node to form a RAID group, divides the RAID group into multiple logical hard disks, and sends the information of the unused logical hard disks to the RAID controller of the service node, where the logical hard disk information includes the capacity, type, and identifier of the logical hard disk, the RAID group to which the logical hard disk belongs, and other such information.
  • the RAID controller can determine the first hot spare resource pool in any of the following ways:
  • Manner 1: Among the one or more hot spare disk resource pools that match the RAID group, the RAID controller selects one hot spare disk resource pool as the first hot spare disk resource pool in order of the pool identifiers.
  • Manner 2: The RAID controller randomly selects one hot spare disk resource pool from the one or more hot spare disk resource pools that match the RAID group as the first hot spare disk resource pool.
  • the capacity of the idle hard disk in the first hot spare disk resource pool is greater than or equal to the capacity of the failed hard disk, and the type of the idle hard disk in the first hot spare disk resource pool is the same as the type of the failed hard disk.
  • the RAID controller may determine the first idle hard disk as the hot spare disk according to any one of the following manners:
  • Manner 1: The RAID controller selects, in the first hot spare disk resource pool, an idle hard disk as the first idle hard disk according to the hard disk identifiers.
  • Manner 2: The RAID controller randomly selects an idle hard disk in the first hot spare disk resource pool as the first idle hard disk.
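  • To make the two selection manners concrete, the sketch below shows one possible way a RAID controller could pick the first hot spare disk resource pool and then the first idle hard disk, either by identifier order or at random. This is a minimal illustration in Python under assumed data structures; the class, field, and function names are not part of the patent.

```python
import random
from dataclasses import dataclass, field

@dataclass
class IdleDisk:
    disk_id: str
    capacity_gb: int
    disk_type: str              # e.g. "SAS", "SATA", "SSD"

@dataclass
class HotSparePool:
    pool_id: str
    disks: list = field(default_factory=list)

def pick_pool(matching_pools, manner=1):
    """Manner 1: lowest pool identifier first; Manner 2: random choice."""
    if manner == 1:
        return min(matching_pools, key=lambda p: p.pool_id)
    return random.choice(matching_pools)

def pick_idle_disk(pool, failed_capacity_gb, failed_type, manner=1):
    """Pick an idle disk whose capacity and type match the failed disk."""
    candidates = [d for d in pool.disks
                  if d.capacity_gb >= failed_capacity_gb
                  and d.disk_type == failed_type]
    if not candidates:
        return None
    if manner == 1:
        return min(candidates, key=lambda d: d.disk_id)
    return random.choice(candidates)
```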
  • after the RAID controller selects an idle hard disk in a hot spare disk resource pool that matches the RAID group, the RAID controller needs to confirm with the storage controller corresponding to that idle hard disk that the state of the idle hard disk is unused before the data recovery process of the failed hard disk can start. The confirmation process is as follows: the RAID controller sends a first request message to the storage controller, where the first request message is used to determine the state of the selected idle hard disk; when the RAID controller receives a response to the first request message indicating that the state of the selected idle hard disk is unused, the RAID controller mounts the selected idle hard disk locally and performs the fault data recovery processing of the RAID group.
  • the RAID controller of the service node uses the idle hard disk of the storage node to form a hot spare disk resource pool, and establishes a mapping relationship between the RAID group and the hot spare disk resource pool.
  • the hot spare disk is selected from the hot spare disk pool.
  • the number of storage nodes can be increased according to service requirements.
  • the number of hard disks in the resource pool can be expanded infinitely, which solves the problem of limited number of hot spare disks in the prior art and improves system reliability.
  • the local hard disk of the service node can be used to set up a RAID group to improve the local hard disk usage.
  • the present invention provides a fault processing device, the device comprising a processor, a memory, a communication interface, and a bus, where the processor, the memory, and the communication interface are connected by the bus and communicate with each other; the memory is used to store computer-executable instructions, and when the device runs, the processor executes the computer instructions in the memory to perform, using the hardware resources in the device, the method described in the first aspect or any possible implementation of the first aspect.
  • the present invention provides a fault processing device, where the device includes a RAID card, a memory, a communication interface, and a bus; the RAID card includes a RAID controller and a memory, and the RAID controller communicates with the memory of the RAID card through a bus; the RAID card, the memory, and the communication interface communicate with each other through the bus; the memory of the RAID card is used to store computer-executable instructions, and when the device runs, the RAID controller executes the computer-executable instructions in the memory of the RAID card to perform, using the hardware resources in the device, the method described in the first aspect or any possible implementation of the first aspect.
  • the fault processing method, apparatus, and device provided by the present application build a hot spare disk resource pool from the idle hard disks of storage nodes across the network, and establish a mapping relationship between the hot spare disk resource pools and each RAID group. When a hard disk in any RAID group fails, an idle hard disk can be selected from a matching hot spare disk resource pool as the hot spare disk for fault data recovery, and the number of idle hard disks in the hot spare disk resource pools can be adjusted according to service requirements, which solves the system reliability problem caused in the prior art by the limited number of hard disks available as hot spares. On the other hand, all local hard disks of the service node can be used as data disks and parity disks of RAID groups, which improves local hard disk utilization.
  • FIG. 2 is a schematic flowchart of a fault processing method according to an embodiment of the present invention;
  • FIG. 3A is a schematic flowchart of another fault processing method according to an embodiment of the present invention;
  • FIG. 3B is a schematic flowchart of another fault processing method according to an embodiment of the present invention;
  • FIG. 4 is a schematic diagram of a fault processing apparatus according to an embodiment of the present invention;
  • FIG. 5 is a schematic diagram of a fault processing device according to an embodiment of the present invention;
  • FIG. 6 is a schematic diagram of another fault processing device according to an embodiment of the present invention.
  • the service node and the storage node can communicate via Ethernet, via lossless Ethernet supporting Data Center Bridging (DCB) and Remote Direct Memory Access (RDMA), or via InfiniBand (IB).
  • the data exchange between the RAID controller and the hot spare resource pool is performed through a standard network storage protocol.
  • the storage protocol may be the Non-Volatile Memory Express over Fabric (NoF) protocol; it may also be the iSCSI Extensions for RDMA (iSER) protocol, which transfers commands and data of the Internet Small Computer System Interface (iSCSI) protocol via RDMA, or the SCSI RDMA Protocol (SRP), which transfers commands and data of the SCSI protocol via RDMA.
  • a service node can be a server that provides computing resources (such as CPU and memory), network resources (such as network cards), and storage resources (such as hard disks) to a user's application.
  • Each of the service nodes includes a RAID controller.
  • the RAID controller can combine the hard disks into one or more disk groups according to different configuration policies.
  • the configuration policies are mainly divided into RAID0, RAID1, RAID2, RAID3, RAID4, RAID5, RAID6, RAID7, RAID10, and RAID50, where configuration policies of RAID3 and above need to be configured in N+M mode; N and M are positive integers greater than 1, N indicates the number of data disks that store data among the member disks of the RAID group, and M indicates the number of parity disks that store parity codes among the member disks of the RAID group.
  • a RAID group is created according to the configuration policy of the RAID 5 by using five hard disks in the service node.
  • the local hard disk refers to a hard disk in the same server as the RAID controller.
  • the hard disk 11 through the hard disk 1n shown in FIG. 1 may be referred to as local hard disks of the service node 1.
  • the RAID controller records the member disk information of each RAID group into the metadata information.
  • the metadata information includes the configuration policy of each RAID group and the capacity and type of the member disks, and the RAID controller can monitor each RAID group based on the metadata information.
  • the RAID controller can be implemented by a dedicated RAID card or by a processor of a service node.
  • when the RAID controller is implemented by a dedicated RAID card, the metadata information is stored in the memory of the RAID card.
  • when the RAID controller function is implemented by the processor of the service node, the metadata information is stored in the memory of the service node.
  • the memory may be any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
  • the processor may be a CPU, or may be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, and so on.
  • the general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
  • the hard disk of the service node can be divided into two categories: Solid State Disk (SSD) and Hard Disk Drive (HDD).
  • the HDD can be further subdivided into the following types according to the data interface: Advanced Technology Attachment (ATA) hard disk, Small Computer System Interface (SCSI) hard disk, Serial Attached SCSI (SAS) hard disk, and Serial ATA (SATA) hard disk.
  • Each type of hard disk has different attributes such as interface, size, and hard disk read/write speed.
  • a storage node can be a server or a storage array that is used to provide storage resources for a user's application.
  • the storage node is further configured to provide a hot spare disk resource pool for the RAID groups of the service nodes, where each storage node includes a storage controller and at least one hard disk; as with the service node, the hard disks of the storage node can also be divided into different types.
  • in addition to idle hard disks used to provide the hot spare disk resource pool, the hard disks of the same storage node can also be used to provide storage resources for a specified application; for example, some hard disks of the storage node may also be used to store an ORACLE database.
  • each storage controller can collect the information of the idle hard disk of the storage node where it is located.
  • the RAID controller of the service node collects the information of the idle hard disks of each storage node, and forms the idle hard disk into a hot spare disk resource pool.
  • the storage node 11 includes a hard disk 111, a hard disk 112, ..., a hard disk 11n.
  • the storage node 12 includes a hard disk 121, a hard disk 122, ..., a hard disk 12n, and the storage node 1N includes a hard disk 1N1.
  • the idle hard disk in the storage node 12 is the hard disk 121 and the hard disk 122
  • the idle hard disk in the storage node 13 is the hard disk 1Nn.
  • the RAID controller of any service node in the fault processing system can obtain the information of the idle hard disk in each storage node through the network, wherein the idle hard disk includes the hard disk 111 of the storage node 11, the hard disk 112, ..., the hard disk 11n; And the hard disk 121 and the hard disk 122 of the storage node 12; the hard disk 1Nn of the storage node 13.
  • the information of the idle hard disk includes the capacity and type of each hard disk.
  • the type of the hard disk 111 is a SAS disk and the capacity is 300G.
  • the hot spare resource pool may also be composed of a logical hard disk.
  • the storage node may also include a RAID controller, which uses a plurality of hard disks in the storage node to form a RAID group, divides the RAID group into multiple logical hard disks, and sends the information of the unused logical hard disks to the RAID controller of the service node, where the information of a logical hard disk includes its capacity, type, logical hard disk identifier, the RAID group to which it belongs, and other such information.
  • the hot spare disk resource pool may also include both physical hard disks and logical hard disks, that is, the idle hard disks provided by some storage nodes are physical hard disks, while the idle hard disks provided by other storage nodes are logical hard disks.
  • the RAID controller of the service node can distinguish different types of hard disks according to the type, so as to create different hot spare disk resource pools.
  • the fault processing system shown in FIG. 1 is only an example; the number and type of hard disks of different service nodes in the fault processing system do not constitute a limitation of the present invention, and the number and type of hard disks of different storage nodes do not constitute a limitation of the invention either. Moreover, the numbers of service nodes and storage nodes may or may not be equal.
  • the information of the idle hard disk further includes information about the fault domain of the hard disk; the fault domain is used to identify the relationship between the areas where different hard disks are located: when different hard disks in the same fault domain fail at the same time, data loss occurs, and when different hard disks in different fault domains fail at the same time, data loss does not occur.
  • the area may be a physical area, that is, different areas divided according to the physical location of the storage node where the hard disk is located, and the physical location may be at least one of the rack, cabinet, and chassis in which the storage node is located. When the storage nodes of two different areas, or components of those storage nodes, fail at the same time without causing data loss, the hard disks in the two areas are said to belong to different fault domains; when such simultaneous failures cause data loss, the hard disks in the two areas are said to belong to the same fault domain.
  • Table 1 is an example of storage node physical location identifiers. As shown in the table, suppose the storage nodes in the same cabinet share one set of power supply devices, so that when the power supply fails all storage nodes in that cabinet fail; then the hard disks of different storage nodes in the same cabinet belong to the same fault domain, and the hard disks of storage nodes that are not in the same cabinet belong to different fault domains. Storage node 1 and storage node 2 are located in the same cabinet of the same rack, so their hard disks belong to the same fault domain: when the shared power supply device fails, neither storage node 1 nor storage node 2 can work normally, and the applications running on them are affected. Storage node 1 and storage node 3 are located in different cabinets and chassis of the same rack; when the power supply of the cabinet containing storage node 1 fails, storage node 1 cannot work normally while storage node 3 is unaffected, so the hard disks of storage node 1 and storage node 3 belong to different fault domains.
  • the area where the hard disk is located may also be a logical area.
  • specifically, the storage nodes where the hard disks are located are divided into different logical areas according to a preset policy, so that a failure of a storage node, or of components of a storage node (such as a network card or a hard disk), in a different logical area does not affect the normal operation of the application, while a failure of a storage node or of its components within the same logical area affects the service application.
  • the preset policy may be to divide the storage nodes into different logical areas according to service requirements; for example, the hard disks in the same storage node are assigned to one logical area and the hard disks of different storage nodes are assigned to different logical areas, so that when a single storage node fails, or components of a storage node fail, the normal operation of other storage nodes is not affected.
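  • The following sketch illustrates one way physical location identifiers such as those in Table 1 could be turned into fault-domain labels, assuming, as in the cabinet example above, that hard disks whose storage nodes share a cabinet share a fault domain. The class, field, and policy names are illustrative assumptions, not part of the patent.

```python
from dataclasses import dataclass

@dataclass
class DiskLocation:
    disk_id: str
    rack: str
    cabinet: str
    chassis: str

def fault_domain(loc: DiskLocation) -> str:
    # Example policy: storage nodes in the same cabinet share one power supply,
    # so all hard disks of that cabinet form one fault domain.
    return f"{loc.rack}/{loc.cabinet}"

disks = [
    DiskLocation("disk-11", "rack-1", "cabinet-1", "chassis-1"),
    DiskLocation("disk-21", "rack-1", "cabinet-1", "chassis-2"),
    DiskLocation("disk-31", "rack-1", "cabinet-2", "chassis-1"),
]
# disk-11 and disk-21 end up in the same fault domain; disk-31 is in another.
domains = {d.disk_id: fault_domain(d) for d in disks}
```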
  • the fault processing method provided by the present invention is exemplified by a fault processing system including a service node and a storage node dedicated to providing a free hard disk.
  • a method for fault processing according to an embodiment of the present invention is further explained in conjunction with FIG. 2, as shown in the figure, the method includes:
  • the storage controller acquires information about the idle hard disk in the storage node.
  • the information of the idle hard disk includes the type and capacity of the idle hard disk of the storage node where the storage controller is located.
  • the type of the hard disk is used to identify the type of the disk, such as SAS and SATA.
  • when the idle hard disks include both logical hard disks and physical hard disks, the hard disk type can be further divided into logical hard disk and physical hard disk.
  • the capacity of the hard disk is its size, such as 300 GB or 600 GB.
  • the information of the idle hard disk further includes information of a fault domain of the hard disk.
  • a fault domain includes one or more hard disks.
  • the storage controller of each storage node may record the information of the idle hard disks of the storage node where the storage controller is located by using a specified file, or by using a data table in a database. Further, the storage controller can periodically query the information of the idle hard disks of the storage node where it is located and update the saved content.
  • the RAID controller acquires information about the idle hard disk.
  • the RAID controller of the service node sends a request message for obtaining the information of the idle hard disk to the storage controller, and the storage controller sends the information of the idle hard disk of the storage node to the RAID controller.
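  • As an illustration of the information exchanged in these two steps, the sketch below models the idle-disk record a storage controller might keep and the response it could return when the RAID controller asks for it. The record fields and the request/response shape are assumptions; the patent only requires that the type, capacity, and optionally fault-domain information of the idle hard disks be conveyed.

```python
from dataclasses import dataclass, asdict

@dataclass
class IdleDiskInfo:
    disk_id: str
    disk_type: str       # e.g. "SAS", "SATA", "SSD", or "logical"/"physical"
    capacity_gb: int
    fault_domain: str    # optional area / fault-domain identifier

class StorageController:
    def __init__(self, node_id, idle_disks):
        self.node_id = node_id
        self._idle_disks = list(idle_disks)   # refreshed periodically

    def report_idle_disks(self):
        """Answer the RAID controller's request for idle-disk information."""
        return {"node": self.node_id,
                "idle_disks": [asdict(d) for d in self._idle_disks]}

# The RAID controller would collect one report per storage controller, e.g.:
# reports = [ctrl.report_idle_disks() for ctrl in storage_controllers]
```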
  • the RAID controller creates at least one hot spare resource pool according to the information of the idle hard disk.
  • the RAID controller can create one or more hot spare disk resource pools according to the type and/or capacity of the idle hard disks in the information of the idle hard disks. For example, the RAID controller can create hot spare disk resource pools according to the type of the idle hard disks, according to the capacity of the idle hard disks, or according to both the type and the capacity of the idle hard disks, and record the hot spare disk resource pool information.
  • the idle hard disk in the storage node 1 in the fault processing system includes a hard disk 111 and a hard disk 112, each hard disk is a 300G SAS disk;
  • the idle hard disk in the storage node 2 includes a hard disk 121 and a hard disk 122, each of which is a 600G SAS Disk;
  • the free disk in the storage node 3 includes a hard disk 131 and a hard disk 132, each of which is a 500G SATA disk.
  • the RAID controller can create two hot spare disk resource pools according to the type of the idle hard disks: the hot spare disk resource pool 1 includes the hard disk 111, the hard disk 112, the hard disk 121, and the hard disk 122;
  • the hot spare disk resource pool 2 includes the hard disk 131 and the hard disk 132, where the idle hard disks within each hot spare disk resource pool are all of the same type.
  • the RAID controller can also create a hot spare disk resource pool according to the capacity of the hard disk.
  • the RAID controller can create three hot spare disk resource pools: the hot spare disk resource pool 1 includes the hard disk 111 and the hard disk 112;
  • the hot spare disk resource pool 2 includes the hard disk 121 and the hard disk 122.
  • the hot spare disk resource pool 3 includes hard disks 131 and 132.
  • the capacity of different idle hard disks in each hot spare disk resource pool is the same.
  • the RAID controller can also create three hot spare disk resource pools according to the type and capacity of the hard disk: the hot spare disk resource pool 1 includes a hard disk 111 and a hard disk 112; the hot spare disk resource pool 2 includes a hard disk 121 and a hard disk 122;
  • the hot spare disk resource pool 3 includes a hard disk 131 and a hard disk 132.
  • the capacity and type of different idle hard disks in each hot spare disk resource pool are the same.
  • when the idle hard disks provided by the storage nodes include both physical hard disks and logical hard disks, the hard disk type also distinguishes physical hard disks from logical hard disks; in that case, when creating the hot spare disk resource pools, the RAID controller may first classify the idle hard disks into physical hard disks and logical hard disks, and then further subdivide them according to the capacity of the hard disks to form different hot spare disk resource pools.
  • the RAID controller may further create one or more hot spare disk resource pools based on the capacity, type, and fault domain of the hard disks; for example, the idle hard disks in one hot spare disk resource pool may have the same capacity and type and belong to the same fault domain, or they may have the same capacity and type but belong to different fault domains.
  • when the hot spare disk resource pools are created according to the type, capacity, and fault domain of the hard disks, and the information about the idle hard disks is as shown in Table 2, hard disks with the same capacity and type that are in the same fault domain can be grouped into one hot spare disk resource pool.
  • in that case, the RAID controller can create three hot spare disk resource pools: the hot spare disk resource pool 1 includes the hard disk 11, the hard disk 12, and the hard disk 21;
  • the hot spare disk resource pool 2 includes a hard disk 31 and a hard disk 32.
  • the hot spare disk resource pool 3 includes a hard disk 43 and a hard disk 45.
  • alternatively, hard disks with the same capacity and type but in different fault domains can be grouped into one hot spare disk resource pool.
  • in that case, the RAID controller can create three hot spare disk resource pools:
  • the hot spare disk resource pool 1 includes a hard disk 11, a hard disk 31, and a hard disk 43.
  • the hot spare disk resource pool 2 includes a hard disk 12, a hard disk 32, and a hard disk 45.
  • the hot spare disk resource pool 3 includes the hard disk 21, where the idle hard disks in each hot spare disk resource pool have the same capacity and type and belong to different fault domains.
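  • A minimal sketch of the grouping just described: idle hard disks are keyed by type and capacity, either keeping one fault domain per pool or spreading each pool across different fault domains, as in the two groupings above. The function and field names are assumptions made for illustration only.

```python
from collections import defaultdict

def build_pools_same_domain(idle_disks):
    """One pool per (type, capacity, fault domain): disks of a pool share a fault domain."""
    pools = defaultdict(list)
    for d in idle_disks:
        pools[(d["type"], d["capacity_gb"], d["fault_domain"])].append(d["disk_id"])
    return dict(pools)

def build_pools_across_domains(idle_disks):
    """One pool per (type, capacity, index): each pool draws at most one disk
    from every fault domain, so its members lie in different fault domains."""
    by_class = defaultdict(lambda: defaultdict(list))
    for d in idle_disks:
        by_class[(d["type"], d["capacity_gb"])][d["fault_domain"]].append(d["disk_id"])
    pools = defaultdict(list)
    for cls, domains in by_class.items():
        for disks in domains.values():
            for j, disk_id in enumerate(disks):
                pools[cls + (j,)].append(disk_id)   # j-th disk of each domain goes to pool j
    return dict(pools)
```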
  • the hot spare disk resource pool information is recorded by using a specified file or a database.
  • the hot spare disk resource pool information includes the hot spare disk resource pool ID, the idle hard disk ID, the hard disk type and capacity, and the storage node where the hard disk is located.
  • the hot spare disk resource pool may also include information about the area where the idle hard disk is located.
  • Table 3 is an example of the hot spare disk resource pool information created by the RAID controller according to the information of the idle hard disks shown in Table 2. As shown in the table, the RAID controller records the hot spare disk resource pool information, including the hot spare disk resource pool ID, the idle hard disk ID, the hard disk capacity, the hard disk type, the storage node where the hard disk is located, and the area where the hard disk is located.
  • the RAID controller determines at least one hot spare disk resource pool that matches the RAID group according to the information of the idle hard disks in the hot spare disk resource pools, and records the mapping relationship between the RAID group and the at least one matching hot spare disk resource pool.
  • the hot spare disk resource pool that matches the RAID group is determined according to the type and capacity of the free hard disk in the hot spare disk resource pool.
  • a hot spare disk resource pool matches a RAID group when the capacity of the idle hard disks in the resource pool is greater than or equal to the capacity of the member disks in the RAID group, and the type of the idle hard disks in the resource pool is the same as that of the member disks in the RAID group.
  • the mapping relationship between the hot spare disk resource pool and the RAID group can be recorded by using a specified file, or by using a data table in the database.
  • the mapping relationship between the hot spare disk resource pools and the RAID group can be added to the hot spare disk resource pool information shown in Table 3; as shown in Table 4, the hot spare disk resource pool 1 matches RAID 5.
  • when the RAID controller receives the information about a failed hard disk, it can quickly determine, according to the information of the failed hard disk (the type and capacity of the failed hard disk) and the mapping relationship, the hot spare disk resource pool that matches the RAID group where the failed hard disk is located, and then select an idle hard disk as the hot spare disk to complete the data recovery process.
  • the information about the failed hard disk includes the type and capacity of the failed hard disk.
  • when the RAID controller function is implemented by the processor of the service node, the mapping relationship between the hot spare disk resource pools and the RAID group is stored in the memory of the service node; when the RAID controller is implemented by the RAID controller in a RAID card, the mapping relationship between the hot spare disk resource pools and the RAID group is stored in the memory of the RAID card.
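  • The matching rule and the mapping table could look like the sketch below: a pool matches a RAID group when its idle disks have the same type as the group's member disks and at least their capacity, and the RAID controller records the matching pool IDs per RAID group. The data shapes and names are assumptions made for illustration.

```python
def pool_matches_raid(pool_disks, member_type, member_capacity_gb):
    """A pool matches when every idle disk has the member-disk type and a
    capacity no smaller than the member-disk capacity."""
    return all(d["type"] == member_type and d["capacity_gb"] >= member_capacity_gb
               for d in pool_disks)

def build_mapping(raid_groups, pools):
    """Record, for each RAID group, the IDs of all matching hot spare pools."""
    mapping = {}
    for rg in raid_groups:
        mapping[rg["raid_id"]] = [
            pool_id for pool_id, disks in pools.items()
            if pool_matches_raid(disks, rg["member_type"], rg["member_capacity_gb"])
        ]
    return mapping

# Example outcome corresponding to Table 4: {"raid5-group": ["pool-1"]},
# i.e. hot spare disk resource pool 1 matches the RAID 5 group.
```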
  • FIG. 2 is described by taking one storage node and one service node as an example.
  • in general, the storage controller of each storage node obtains the information about the idle hard disks of the storage node where it is located and sends the information of the idle hard disks to the RAID controller of the service node.
  • the RAID controller creates the hot spare disk resource pools according to the obtained information about the idle hard disks of each storage node.
  • the number of storage nodes can be adjusted according to specific service requirements, that is, the number of idle disks can be expanded infinitely according to service requirements, thereby solving the problem that the number of hot spare disks in the prior art is limited.
  • the RAID controller in each service node can obtain the information of the idle hard disks determined by the storage controllers, create hot spare disk resource pools according to the information of the idle hard disks, and create RAID groups.
  • when a hard disk in a RAID group fails, the RAID controller can select an idle hard disk in a matching hot spare disk resource pool to perform data recovery.
  • the present invention forms hot spare disk resource pools by using the idle hard disks of storage nodes across the network; the number of storage nodes can be expanded indefinitely, and correspondingly the number of idle hard disks in the hot spare disk resource pools can also be expanded, which solves the problem that the number of hot spare disks in the prior art is limited and improves the reliability of the entire system.
  • the RAID controller of the service node can use the local hard disks of the service node as data disks or parity disks of the RAID groups, which improves local hard disk utilization.
  • the method includes:
  • the RAID controller acquires information about a faulty hard disk in any RAID group of the service node where the RAID controller is located.
  • the RAID controller can learn all the RAID groups in the service node through the metadata information, and can monitor the hard disks of each RAID group in the service node where the RAID controller is located; when a hard disk failure occurs, the RAID controller can determine the capacity and type of the failed hard disk based on the information of the failed hard disk.
  • the RAID controller selects an idle hard disk in the hot spare disk resource pool that matches the RAID group to recover data of the failed hard disk.
  • the RAID controller selects a hot spare disk resource pool that matches the RAID group where the failed hard disk is located according to the hot spare disk resource pool information.
  • the capacity of the hard disk in the hot spare disk resource pool is greater than or equal to the capacity of the failed hard disk.
  • the type of the hard disk in the hot spare disk resource pool is the same as the type of the failed hard disk.
  • the process of selecting a hot spare disk resource pool and a hot spare disk by the RAID controller is as shown in FIG. 3A, and the method includes:
  • the RAID controller determines whether the current hard disk failure is the first hard disk failure in the RAID group.
  • the metadata information of the RAID controller further includes the member hard disks and fault processing information of each RAID group, where the fault processing information includes the identifier, capacity, and type of the faulty hard disk, and the hot spare disk information used to recover the faulty hard disk.
  • the hot spare disk information includes the capacity and type of the hot spare disk, the area where the hot spare disk is located, and the hot spare disk resource pool to which it belongs.
  • the RAID controller may determine the first hot spare resource pool according to any one of the following manners:
  • Manner 1: Among the one or more hot spare disk resource pools that match the RAID group, the RAID controller selects one hot spare disk resource pool as the first hot spare disk resource pool in order of the pool identifiers.
  • Manner 2: The RAID controller randomly selects one hot spare disk resource pool from the one or more hot spare disk resource pools that match the RAID group as the first hot spare disk resource pool.
  • the capacity of the idle hard disk in the first hot spare disk resource pool is greater than or equal to the capacity of the failed hard disk, and the type of the idle hard disk in the first hot spare disk resource pool is the same as the type of the failed hard disk.
  • the RAID controller may determine the first idle hard disk as the hot spare disk according to any one of the following manners:
  • Manner 1: The RAID controller selects, in the first hot spare disk resource pool, an idle hard disk as the first idle hard disk according to the hard disk identifiers.
  • Manner 2: The RAID controller randomly selects an idle hard disk in the first hot spare disk resource pool as the first idle hard disk.
  • the RAID controller needs to determine whether the remaining idle hard disks in the first hot spare disk resource pool belong to the same fault domain as the hot spare disk already used in the RAID group; if they are in the same fault domain, step S302d is performed, and if they are not in the same fault domain, step S302e is performed.
  • the second hot spare disk resource pool is another hot spare disk resource pool, different from the first hot spare disk resource pool, among the hot spare disk resource pools that match the RAID group.
  • the method for selecting the first idle hard disk in the hot spare disk resource pool is the same as that in step S302b, and details are not described herein again.
  • the type of the first idle hard disk of the second hot spare disk resource pool is the same as the type of the failed hard disk, and the capacity of the first idle hard disk of the second hot spare disk resource pool is greater than or equal to the capacity of the failed hard disk, and the second hot The first idle hard disk of the spare disk resource pool and the first idle hard disk of the first hot spare disk resource pool belong to different fault domains.
  • the RAID controller selects a second idle hard disk in the first hot spare disk resource pool as the hot spare disk.
  • the RAID controller may create a resource pool according to at least one of capacity, type, and fault domain.
  • the same hot spare disk resource pool may include idle hard disks in the same fault domain, and may also include idle hard disks in different fault domains.
  • the RAID controller can select an idle hard disk of a different fault domain within the first hot spare disk resource pool as the hot spare disk, for example, a second idle hard disk in the first hot spare disk resource pool, where the capacity of the second idle hard disk is greater than or equal to the capacity of the faulty hard disk, the type of the second idle hard disk is the same as that of the faulty hard disk, and the first idle hard disk and the second idle hard disk of the first hot spare disk resource pool belong to different fault domains.
  • when the remaining idle hard disks in the first hot spare disk resource pool are not in the same fault domain as the hot spare disks already used in the RAID group, the second idle hard disk of the first hot spare disk resource pool is selected in the same manner as in step S302b, and details are not described here again.
  • the RAID controller may also select an idle hard disk as the hot spare disk in another hot spare disk resource pool that matches the RAID group; the selection method is the same as that of step S302b and is not described here again.
  • the RAID controller can also select the hot spare disk according to the capacity, type, and fault domain of the idle hard disks, so as to avoid the situation in which multiple hot spare disks in the same RAID group belong to the same fault domain and a repeated failure of those hot spare disks would again cause data loss, thereby improving the reliability of the application.
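  • The selection flow of FIG. 3A can be summarized as the sketch below: on the first failure in a RAID group any matching idle disk will do, while on later failures the controller prefers an idle disk whose fault domain differs from the hot spares already in use, falling back to another matching pool if necessary. The structures and helper names are assumptions; only the decision order follows the description above.

```python
def select_hot_spare(raid_group, matching_pools, used_spare_domains):
    """Pick an idle disk for a failed member of raid_group.
    used_spare_domains: fault domains of hot spares already used by this group."""
    for pool in matching_pools:                       # first pool tried first
        candidates = [d for d in pool["disks"]
                      if d["capacity_gb"] >= raid_group["member_capacity_gb"]
                      and d["type"] == raid_group["member_type"]]
        if not used_spare_domains:                    # first failure in the group
            if candidates:
                return candidates[0]
            continue
        # later failures: prefer a disk whose fault domain is not yet in use
        different_domain = [d for d in candidates
                            if d["fault_domain"] not in used_spare_domains]
        if different_domain:
            return different_domain[0]
        # otherwise fall through and try the next matching hot spare pool
    return None
```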
  • the method further includes:
  • the RAID controller sends a first request message to the storage controller.
  • the RAID controller of each service node creates hot spare disk resource pools and establishes the mapping relationships between the RAID groups of that service node and the hot spare disk resource pools; the idle hard disks included in the hot spare disk resource pools created by the RAID controllers of different service nodes may be the same, so when the RAID controller of any service node selects an idle hard disk as the hot spare disk, it needs to send a first request message to the storage controller of the storage node where the selected idle hard disk is located, in order to avoid the selected idle hard disk already being used by another RAID controller; the first request message is used to determine that the selected idle hard disk is in the unused state.
  • when the storage controller of the storage node where the selected idle hard disk is located determines that the state of the idle hard disk is unused, the storage controller sends to the RAID controller a response to the first request message indicating that the state of the idle hard disk is unused.
  • the RAID controller mounts the first idle hard disk to a local directory of the service node where the RAID controller is located, for example, by executing a mount command in a Linux system (such as mount <storage node IP> <idle hard disk drive letter>).
  • after the RAID controller mounts the selected idle hard disk locally, it updates the fault processing information in the locally stored metadata information that records the RAID group relationships, mainly updating the hot spare disk information used to recover the faulty hard disk, where the hot spare disk information includes the capacity and type of the hot spare disk, the area where the hot spare disk is located, and the hot spare disk resource pool to which it belongs.
  • the RAID controller rewrites the data of the failed hard disk onto the hot spare disk according to the data in the other non-faulty data disks of the RAID group and the data in the parity disk, thereby completing the data recovery processing of the failed hard disk.
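  • A sketch of the confirmation-and-mount handshake described above: the RAID controller asks the storage controller whether the chosen idle disk is still unused, and only then mounts it locally, updates its metadata, and starts the rebuild. The message format, the mount invocation, and the helper methods are assumptions made for illustration; the patent only specifies the first request message, the unused-state response, the local mount, and the subsequent data recovery.

```python
import subprocess

def confirm_and_mount(raid_ctrl, storage_ctrl, disk, local_dir):
    # First request message: ask whether the selected idle disk is still unused.
    reply = storage_ctrl.query_state(disk["disk_id"])           # assumed RPC call
    if reply != "unused":
        return False                                            # pick another disk instead
    # Mount the remote idle disk locally (illustrative mount invocation).
    subprocess.run(
        ["mount", f"{disk['node_ip']}:{disk['export_path']}", local_dir],
        check=True,
    )
    # Update the locally stored metadata, then rebuild the failed disk's data.
    raid_ctrl.update_metadata(hot_spare=disk)                    # assumed helper
    raid_ctrl.rebuild_failed_disk(target=local_dir)              # assumed helper
    return True
```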
  • when the RAID controller of any service node in the fault processing system receives the information of a faulty hard disk in any RAID group of that service node, the RAID controller can, according to the information of the faulty hard disk, select an idle hard disk of a storage node from the hot spare disk resource pool that matches the RAID group and use it as the hot spare disk.
  • the number of storage nodes can be increased according to the service requirements.
  • the number of hot spare disks in the hot spare disk resource pool can be continuously expanded.
  • compared with the prior art, this solves the problem that the number of hot spare disks is limited.
  • the RAID controller can select the idle hard disk according to the capacity, type, and fault domain of the idle hard disks, and avoid using idle hard disks of the same fault domain for data recovery in the same RAID group, thereby avoiding data loss caused by hot spare disk failures and improving the reliability of the service applications and the entire system.
  • a method for a fault handling system according to an embodiment of the present invention is described in detail above with reference to FIG. 1 to FIG. 3B.
  • a fault processing apparatus and device according to embodiments of the present invention will be described below with reference to FIG. 4 to FIG. 6.
  • the apparatus 400 includes an obtaining unit 401 and a processing unit 402;
  • the obtaining unit 401 is configured to obtain information about a faulty hard disk in a RAID group, where the information of the faulty hard disk includes a capacity and a type of the faulty hard disk;
  • the processing unit 402 is configured to select an idle hard disk to recover data of the faulty hard disk in a hot spare disk resource pool that is matched with the RAID group, where the hot spare disk resource pool is pre-created by the RAID controller.
  • the hot spare disk resource pool includes one or more idle hard disks in the at least one storage node, the capacity of the idle hard disk selected by the RAID controller is greater than or equal to the capacity of the faulty hard disk, and the type of the idle hard disk selected by the RAID controller is the same as the type of the faulty hard disk.
  • the apparatus 400 of the embodiment of the present invention may be implemented by an application-specific integrated circuit (ASIC) or a programmable logic device (PLD), and the PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), generic array logic (GAL), or any combination thereof.
  • the obtaining unit 401 is further configured to acquire information about the idle hard disk sent by the storage controller, where the information of the idle hard disk includes the type and capacity of the idle hard disk.
  • the processing unit 402 is further configured to create at least one hot spare disk resource pool, where each hot spare disk resource pool includes at least one idle hard disk, of the same capacity and the same type, from at least one storage node;
  • the processing unit 402 is further configured to: when the RAID group is created, determine one or more hot spare disk resource pools that match the RAID group according to the type and capacity of the hard disks in the RAID group, and record the A mapping relationship between a RAID group and one or more hot spare disk resource pools that match the RAID group;
  • the information of the idle hard disk further includes information about a fault domain of the idle hard disk, where the idle hard disk selected by the processing unit 402 is not in the same fault domain as the hot spare disk used in the RAID group.
  • the information of the fault domain is used to identify the relationship between different hard disks. When different hard disks in the same fault domain fail at the same time, data loss occurs. When different hard disks in different fault domains fail at the same time, data loss will not occur.
  • the state of the idle hard disk selected by the processing unit is unused.
  • the obtaining unit 401 is further configured to receive a response result of the first request message indicating that the state of the idle hard disk selected by the controller is unused;
  • the processing unit 402 is further configured to mount the selected idle hard disk to the local area, and perform fault data recovery processing of the RAID group.
  • the processing unit selects the idle hard disk as the hot spare disk to recover the data of the faulty hard disk.
  • the apparatus 400 may correspond to performing the methods described in the embodiments of the present invention, and the above and other operations and/or functions of the respective units in the apparatus 400 are intended to implement the corresponding processes of the methods in FIGS. 2 to 3B; for the sake of brevity, they are not described here again.
  • the apparatus 400 provided by the present invention provides a cross-node hot spare disk implementation, which uses the idle hard disks of storage nodes to create hot spare disk resource pools and establishes the mapping relationship between the hot spare disk resource pools and the RAID groups; when a hard disk in any RAID group fails, an idle hard disk can be selected as the hot spare disk from the hot spare disk resource pool that matches the RAID group where the faulty hard disk resides, and the stored data is restored to it.
  • the number of the available hard disks in the storage node can be expanded according to the service requirements.
  • the number of hard disks in the hot spare disk resource pools is not limited, unlike the prior art, in which the number of local hard disks of the service node that can serve as hot spare disks is limited.
  • all local hard disks of the service node can be used for data disks or parity disks of the RAID group, which improves the utilization of the local hard disk.
  • FIG. 5 is a schematic diagram of a device 500 for fault processing according to an embodiment of the present invention.
  • the device 500 includes a processor 501, a memory 502, a communication interface 503, and a bus 504.
  • the processor 501, the memory 502, and the communication interface 503 communicate via the bus 504, and may also implement communication by other means such as wireless transmission.
  • the memory 502 is configured to store instructions, and the processor 501 is configured to execute the instructions stored in the memory 502.
  • the memory 502 stores program code, and the processor 501 can call the program code stored in the memory 502 to perform the following operations:
  • selecting an idle hard disk to restore the data of the failed hard disk in the hot spare disk resource pool that is matched with the RAID group, where the hot spare disk resource pool is pre-created by the device 500 and includes one or more idle hard disks in at least one storage node; the capacity of the idle hard disk selected by the device 500 is greater than or equal to the capacity of the faulty hard disk, and the type of the idle hard disk selected by the device 500 is the same as the type of the faulty hard disk.
  • the processor 501 may be a CPU, or may be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, and so on.
  • the general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
  • the memory 502 can include read only memory and random access memory and provides instructions and data to the processor 501.
  • a portion of the memory 502 can also include a non-volatile random access memory.
  • the memory 502 can also store information of the device type.
  • the bus 504 may include a power bus, a control bus, a status signal bus, and the like in addition to the data bus. However, for clarity of description, various buses are labeled as bus 504 in the figure.
  • selecting an idle hard disk to restore the data of the failed hard disk in the hot spare disk resource pool that is matched with the RAID group, where the hot spare disk resource pool is pre-created in advance and includes one or more idle hard disks in at least one storage node; the capacity of the idle hard disk selected by the device 600 is greater than or equal to the capacity of the faulty hard disk, and the type of the idle hard disk selected by the device 600 is the same as the type of the faulty hard disk.
  • the device 500 and the device 600 provided by the present application build hot spare disk resource pools by using the idle hard disks of storage nodes across the network, and establish the mapping relationship between the hot spare disk resource pools and each RAID group; when a hard disk in any RAID group fails, an idle hard disk can be selected from a matching hot spare disk resource pool as the hot spare disk for fault data recovery.
  • the number of idle disks in the storage system can be adjusted according to the service requirements. This solves the problem of system reliability caused by the limited number of disks in the hot spare disk resource pool in the prior art.
  • all local hard disks of the service node can be used for data disks and parity disks of the RAID group, which improves the utilization of the local hard disk.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Quality & Reliability (AREA)
  • Human Computer Interaction (AREA)
  • Debugging And Monitoring (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Hardware Redundancy (AREA)

Abstract

A fault handling method, apparatus, and device. The method includes: a redundant array of independent disks (RAID) controller receives information about a faulty hard disk in any RAID group, where the information about the faulty hard disk includes the capacity and type of the faulty hard disk; and the RAID controller selects, from a hot spare disk resource pool that matches the RAID group, an idle hard disk to recover the data of the faulty hard disk, where the capacity of the idle hard disks in the hot spare disk resource pool is greater than or equal to the capacity of the faulty hard disk, the type of the idle hard disks in the hot spare disk resource pool is the same as the type of the faulty hard disk, the hot spare disk resource pool is created in advance by the RAID controller, and the hot spare disk resource pool includes one or more idle hard disks in at least one storage node. Through a cross-node hot spare disk resource pool scheme, the method solves the hot-spare failure problem caused by the limited number of local hot spare disks in the prior art, thereby improving the reliability of the entire system.

Description

Fault handling method, apparatus, and device
Technical field
The present invention relates to the storage field, and in particular, to a fault handling method, apparatus, and device.
Background
A redundant array of independent disks (RAID) is a technology that combines multiple independent hard disks into one hard disk group, also called a RAID group, according to different configuration policies, thereby providing higher storage performance than a single hard disk and providing data backup. Because of its two advantages of high speed and high security, RAID is increasingly widely used in the storage field.
In the prior art, the management of a RAID group is usually implemented by a RAID controller. The configuration policies of RAID groups are mainly divided into RAID0, RAID1, RAID2, RAID3, RAID4, RAID5, RAID6, RAID7, RAID10, and RAID50, where configuration policies of RAID3 and above need to be configured in N+M mode; N and M are both positive integers greater than 1, N indicates the number of data disks, and M indicates the number of parity disks. In addition, a hot spare disk is also configured in the RAID group; when a hard disk in the RAID group fails, the RAID controller can restore the data on the faulty hard disk onto the hot spare disk according to the parity data in the parity disk and the data in the data disks, thereby improving system reliability.
Usually, a local hard disk of the server is used as the hot spare disk. Under normal conditions the hot spare disk does not store data; when another physical hard disk in use in the RAID group is damaged, the hot spare disk automatically takes over the storage function of the damaged hard disk and carries the data of the damaged hard disk, ensuring that data access is not interrupted. However, when a RAID group is created, a local hard disk of the server needs to be designated in advance as the hot spare disk, and the RAID controller in the same server can create multiple RAID groups at the same time, each of which needs to be configured with its own hot spare disk. This leads to the problem that the number of hot spare disks in the same storage device is limited, which affects system reliability.
发明内容
本发明实施例提供了一种故障处理的方法、装置和设备,可以解决现有技术中同一存储设备的热备盘数量受限的问题,以此提高存储系统的可靠性。
第一方面，提供一种故障处理的方法，该方法应用于故障处理系统中，该系统中包括至少一个业务节点和至少一个存储节点，存储节点和业务节点之间通过网络进行通信，每个存储节点包括至少一个空闲硬盘，每个业务节点包括独立硬盘冗余阵列（Redundant Array of Independent Disks,RAID）控制器与RAID组，RAID控制器按照不同配置策略将多个硬盘组成硬盘组，该硬盘组也可以称为RAID组，并对RAID组进行监控管理。RAID控制器获取该RAID控制器所在业务节点中任一RAID组的故障硬盘的信息时，该故障硬盘的信息中包括故障硬盘的容量和类型，RAID控制器在与该RAID组匹配的热备盘资源池中选择空闲硬盘作为热备盘对故障硬盘的数据进行恢复，其中，热备盘资源池是RAID控制器预先创建，热备盘资源池中包括至少一个存储节点的一个或多个空闲硬盘；RAID控制器所选择的空闲硬盘的容量大于或等于故障硬盘的容量，且该空闲硬盘的类型与故障硬盘的类型相同。
可选地,热备盘资源池可以由逻辑硬盘和物理硬盘中的至少一种组成。
具体地，存储节点中也可以包括RAID控制器，该RAID控制器利用存储节点中多个硬盘组成RAID组，并将该RAID组划分为多个逻辑硬盘，将未被使用的逻辑硬盘信息发送给业务节点的RAID控制器，其中，逻辑硬盘信息包括逻辑硬盘的容量、类型、逻辑硬盘标识、逻辑硬盘所归属的RAID组等信息。
RAID控制器可以按照以下方式中的任意一种确定第一热备盘资源池:
方式一:RAID控制器在与RAID组匹配的一个或多个热备盘资源池中,按照热备盘资源池的标识依次选择一个热备盘资源池作为第一热备盘资源池。
方式二:RAID控制器在与RAID组匹配的一个或多个热备盘资源池中随机选择一个热备盘资源池作为第一热备盘资源池。
其中,第一热备盘资源池中空闲硬盘的容量大于或等于故障硬盘的容量,且第一热备盘资源池中空闲硬盘的类型与故障硬盘的类型相同。
进一步地,在确定第一热备盘资源池后,RAID控制器可以按照如下方式中的任意一种确定第一空闲硬盘作为热备盘:
方式一:RAID控制器在第一热备盘资源池中按照硬盘的标识选择一个空闲硬盘作为第一空闲硬盘。
方式二:RAID控制器在第一热备盘资源池中随机选择一个空闲硬盘作为第一空闲硬盘。
在一种可能的实现方式中,存储节点还包括存储控制器,RAID控制器先获取存储控制器发送的空闲硬盘的信息,空闲硬盘的信息包括空闲硬盘的类型和容量,则RAID控制器按照空闲硬盘的信息创建至少一个热备盘资源池,每个热备盘资源池包括具有相同容量和/或相同类型的至少一个空闲硬盘;当RAID控制器创建RAID组时,根据RAID组中硬盘的类型和容量确定与RAID组匹配的一个或多个热备盘资源池,并记录该RAID组与该RAID组匹配的一个或多个热备盘资源池的映射关系,则当RAID控制器获取任一RAID组的故障硬盘的信息时,可以根据映射关系和故障硬盘的信息在与该RAID组匹配的热备盘资源池中选择一个热备盘资源池的空闲硬盘对故障硬盘进行数据恢复。
在一种可能的实现方式中,空闲硬盘的信息还包括硬盘的故障域的信息,所述RAID控制器所选择的空闲硬盘与所述RAID组中已使用的热备盘不在同一故障域,所述故障域的信息用于标识不同硬盘所在区域的关系,同一故障域内的不同硬盘同时故障时会导致数据丢失,不同故障域内的不同硬盘同时故障时不会导致数据丢失。
具体地,空闲硬盘的信息还包括硬盘的故障域的信息,该故障域用于标识不同硬盘所在的区域的关系,该区域可以是根据硬盘所在存储节点的物理位置划分的不同区域,物理位置可以是存储节点所在的机架、机柜、机框中的至少一种,当两个不同区域的存储节点或存储节点的部件同时发生故障时,不会导致数据丢失,则称这个两个区域中的硬盘属于不同故障域;当两个不同区域的存储节点或存储节点的部件同时发生故障时,会导致数据丢失,则称这两个区域中的硬盘属于同一故障域。
可选地,硬盘所在的区域也可以是逻辑区域。具体地,将硬盘所在存储节点按照预置策略划分成不同逻辑区域,以便于不同逻辑区域的存储节点或存储节点的部件(如网卡、硬盘等)故障时不影响应用程序正常运行,同一逻辑区域的存储节点或存储节点的部件故障会影响业务应用,其中,预置策略可以为根据业务需求将存储节点划分为不同逻辑区域。例如,将同一存储节点内的硬盘划分为一个逻辑区域,不同逻辑节点间的硬盘划分为不同逻辑区域,那么,当单个存储节点整体故障或存储节点的部件故障时,不影响其他存储节点的正常运行。
在一种可能的实现方式中,在RAID控制器在与RAID组匹配的热备盘资源池中选 择空闲硬盘之后,RAID控制器需要与该空闲硬盘所对应的存储控制器确定该空闲硬盘的状态为未使用,才能启动故障硬盘的数据恢复过程,具体确认状态的过程如下:RAID控制器向存储控制器发送第一请求消息,第一请求消息用于确定所选择的空闲硬盘的状态;当接收用于指示RAID控制器所选择的空闲硬盘的状态为未使用的第一请求消息的响应结果时,RAID控制器将所选择的空闲硬盘挂载到本地,并执行所述RAID组的故障数据恢复处理。
在一种可能的实现方式中,RAID控制器根据RAID组中非故障的数据盘和校验盘中的数据,将故障硬盘数据重新写入所述RAID控制器所选择的热备盘,以此对故障硬盘的数据进行恢复。
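为便于理解上述数据恢复过程，下面给出一段示意性的 Python 草图：以 RAID5 为例，利用同一条带上其余非故障成员盘（数据盘与校验盘）的数据块按位异或重建故障盘的数据块。其中的函数名与数据划分方式均为本文为说明而作的假设，并非本发明的具体实现。

```python
# 示意：RAID5 中故障盘数据 = 其余成员盘同一条带数据块的按位异或
def xor_blocks(blocks):
    """对多个等长字节块逐字节异或，返回异或结果。"""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            result[i] ^= b
    return bytes(result)

def rebuild_failed_block(surviving_blocks):
    """surviving_blocks：同一条带上其余非故障成员盘的数据块列表，返回重建出的故障盘数据块。"""
    return xor_blocks(surviving_blocks)

if __name__ == "__main__":
    d0 = bytes([0x11, 0x22, 0x33, 0x44])            # 数据盘0 的条带块
    d1 = bytes([0xAA, 0xBB, 0xCC, 0xDD])            # 数据盘1 的条带块
    parity = xor_blocks([d0, d1])                   # 写入时生成的校验块
    recovered = rebuild_failed_block([d1, parity])  # 假设数据盘0 所在硬盘故障
    assert recovered == d0                          # 重建结果与原数据一致
```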
通过上述内容的描述,本发明所提供的一种故障处理方法,业务节点的RAID控制器利用存储节点的空闲硬盘组成热备盘资源池,并建立RAID组和热备盘资源池的映射关系,当RAID组中出现故障硬盘时,即从与其匹配的热备盘资源池中选择热备盘完成故障硬盘的数据恢复,其中,存储节点的数量可以根据业务需求不断增加,以此保证热备盘资源池中硬盘的数量可以无限扩容,解决现有技术中热备盘数量受限的问题,提高系统的可靠性。另一方面,业务节点的本地硬盘均可以用于组建RAID组,提高本地硬盘使用率。
第二方面,本发明提供一种故障处理的装置,所述装置包括用于执行第一方面或第一方面任一种可能实现方式中的故障处理方法的各个模块。
第三方面,本发明提供一种故障处理的设备,所述设备包括处理器、存储器、通信接口、总线,所述处理器、存储器和通信接口之间通过总线连接并完成相互通信,所述处理器中用于存储计算机执行指令,所述设备运行时,所述处理器执行所述存储器中的计算机指令以利用所述设备中的硬件资源执行第一方面或第一方面的任意可能的实现方式中的所述的方法。
第四方面,本发明提供一种计算机可读介质,用于存储计算机程序,该计算机程序包括用于执行第一方面或第一方面的任意可能的实现方式中的方法的指令。
第五方面,本发明提供一种故障处理的设备,所述设备包括RAID卡、存储器、通信接口、总线,所述RAID卡中包括RAID控制器和存储器,所述RAID控制器和RAID卡的存储器通过总线相通信,所述RAID卡、存储器、通信接口通过总线相互通信,所述RAID卡的存储器中用于存储计算机执行指令,所述设备运行时,所述RAID控制器执行所述RAID卡的存储器中的计算机执行指令以利用所述设备中的硬件资源执行第一方面或第一方面的任意可能的实现方式中的所述的方法。
第六方面,提供了一种计算机可读介质,用于存储计算机程序,该计算机程序包括用于执行第一方面或第一方面的任意可能的实现方式中的方法的指令。
综上所述,通过本申请提供的数据处理方法、装置和设备,利用跨网络的存储节点的空闲硬盘实现热备盘资源池,并建立热备盘资源池与每个RAID组之间的映射关系,当任一RAID组出现故障硬盘时,可以在与该RAID组匹配的热备盘资源池中选择一个热备盘资源池中的一个空闲硬盘作为热备盘进行故障数据恢复,热备盘资源池中空闲硬盘的数量可以根据业务需求对存储节点中空闲硬盘的数量进行调整,以此解决现有技术中热备盘资源池中硬盘数量受限所导致的影响系统可靠性的问题。另一方面,业务节点的所有本地硬盘均可以用于RAID组的数据盘和校验盘,提高了本地硬盘的利用率。
本申请在上述各方面提供的实现方式的基础上,还可以进行进一步组合以提供更多实现方式。
附图说明
为了更清楚地说明本发明实施例的技术方案,下面将对实施例描述中所需要使用的附图作简单地介绍。
图1为本发明实施例提供的一种故障处理系统的逻辑框图;
图2为本发明实施例提供的一种故障处理的方法的流程示意图;
图3为本发明实施例提供的另一种故障处理的方法的流程示意图;
图3A为本发明实施例提供的另一种故障处理的方法的流程示意图;
图3B为本发明实施例提供的另一种故障处理的方法的流程示意图;
图4为本发明实施例提供的一种故障处理的装置示意图;
图5为本发明实施例提供的一种故障处理的设备示意图;
图6为本发明实施例提供的另一种故障处理的设备示意图。
具体实施方式
下面将结合附图,对本发明实施例中的技术方案进行清楚、完整地描述。
图1为本发明实施例所提供的一种故障处理系统的示意图,如图所示,在该系统中包括至少一个业务节点和至少一个存储节点,业务节点和存储节点之间通过网络通信。
可选地，业务节点和存储节点之间可以通过以太网进行通信，也可以通过支持远程直接数据存取(Remote Direct Memory Access,RDMA)的无损以太网数据中心桥接(Data Center Bridging,DCB)和无限带宽(InfiniBand,IB)进行通信。
可选地，RAID控制器与热备盘资源池之间通过标准的网络存储协议进行数据交互，例如存储协议可以是基于网络的非易失性存储标准(Non-Volatile Memory Express Over Fabric,NoF)协议，也可以是用于将小型计算机系统接口(Internet Small Computer System Interface,iSCSI)协议的命令和数据通过RDMA的方式传输的iSER(iSCSI Extensions for RDMA,iSER)协议，或用于将SCSI协议的命令和数据通过RDMA的方式传输的小型计算机系统接口RDMA协议(Small Computer System Interface RDMA Protocol,SRP)。
业务节点可以为一个服务器,用于对用户的应用程序提供计算资源(如CPU和内存)、网络资源(如网卡)和存储资源(如硬盘)。每个业务节点中包括一个RAID控制器,RAID控制器可以将多个本地硬盘按照不同的配置策略组成一个或多个硬盘组,配置策略主要划分为RAID0、RAID1、RAID2、RAID3、RAID4、RAID5、RAID6、RAID7、RAID10、RAID50,其中,RAID3以上的配置策略中需要配置为N+M模式,N和M为大于1的正整数,N表示在该RAID组的成员硬盘中存储数据的数据盘的个数,M表示在该RAID组的成员硬盘中存储校验码的校验盘的个数。例如利用业务节点内的5个硬盘按照RAID5的配置策略创建RAID组。其中,本地硬盘是指与RAID控制器在同一服务器内的硬盘,如图1所示的硬盘11、…、硬盘1n可以称为业务节点1的本地硬盘。RAID控制器会将每个RAID组中成员硬盘信息记录到元数据信息中,元数据信息中包括每个RAID组的配置策略、成员硬盘的容量、类型等,且RAID控制器可以根据元数据信息对每个RAID组进行监控。
值得说明的是，RAID控制器可以由专用RAID卡实现，也可以由业务节点的处理器实现。当由RAID卡实现RAID控制器功能时，元数据信息存储在RAID卡的存储器中，当由业务节点的处理器实现RAID控制器功能时，元数据信息存储在业务节点的存储器中。该存储器可以是U盘、移动硬盘、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、磁碟或者光盘等各种可以存储程序代码的介质。处理器可以是CPU，该处理器还可以是其他通用处理器、数字信号处理器(DSP)、专用集成电路(ASIC)、现成可编程门阵列(FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。
还值得说明的是，业务节点的硬盘可以划分为固态硬盘(Solid State Disk,SSD)和机械硬盘(Hard Disk Drive,HDD)两大类，其中HDD按照数据接口不同又可以进一步细分为以下几种类型：高级技术附加装置(Advanced Technology Attachment,ATA)硬盘、小型机系统接口(Small Computer System Interface,SCSI)硬盘、SAS(Serial Attached SCSI,SAS)硬盘、SATA(Serial ATA,SATA)硬盘。每种类型的硬盘的接口、尺寸、硬盘读写速率等属性各有不同。
存储节点可以为服务器或存储阵列,存储节点用于为用户的应用程序提供存储资源。在本申请中,存储节点还用于为业务节点的RAID组提供热备盘资源池,每个存储节点中包括存储控制器和至少一个硬盘,与业务节点相同,存储节点的硬盘类型也可以划分为SSD、ATA、SCSI、SAS和SATA几类。在故障处理系统中可以指定存储节点仅用于提供热备盘资源池的空闲硬盘,即被指定的存储节点中的所有硬盘均可用于提供热备盘资源池中的空闲硬盘。
可选地,同一存储节点的硬盘除了用于提供热备盘资源池的空闲硬盘外,还可以用于为指定应用程序提供存储资源,如存储节点的部分硬盘还用于作为存储ORACLE数据库的存储设备,此时,每个存储控制器可以收集其所在存储节点的空闲硬盘的信息,由业务节点的RAID控制器收集各个存储节点的空闲硬盘的信息,并将空闲硬盘组成热备盘资源池。
示例地，如图1所示，存储节点11中包括硬盘111、硬盘112、…、硬盘11n，存储节点12中包括硬盘121、硬盘122、…、硬盘12n，存储节点1N中包括硬盘1N1、硬盘1N2、…、硬盘1Nn，其中，N和n均为大于1的正整数。假设存储节点11为指定专门用于提供热备盘资源池中空闲硬盘的存储节点，而其他存储节点的硬盘则不仅用于为指定应用程序提供存储资源，同时还用于提供热备盘资源池中的空闲硬盘。具体地，存储节点12中空闲硬盘为硬盘121和硬盘122，存储节点1N中空闲硬盘为硬盘1Nn。此时，故障处理系统中任一业务节点的RAID控制器均可以通过网络获取每个存储节点中空闲硬盘的信息，其中，空闲硬盘包括存储节点11的硬盘111、硬盘112、…、硬盘11n；及存储节点12的硬盘121和硬盘122；存储节点1N的硬盘1Nn。空闲硬盘的信息包括每个硬盘的容量和类型，如硬盘111的类型为SAS盘，容量为300G。
可选地,热备盘资源池也可以由逻辑硬盘组成。具体地,存储节点中也可以包括RAID控制器,该RAID控制器利用存储节点中多个硬盘组成RAID组,并将该RAID组划分为多个逻辑硬盘,将未被使用的逻辑硬盘的信息发送给业务节点的RAID控制器,其中,逻辑硬盘的信息包括逻辑硬盘的容量、类型、逻辑硬盘标识、逻辑硬盘所归属的RAID组等信息。
可选地，热备盘资源池中也可以同时包括物理硬盘和逻辑硬盘，即部分存储节点提供的空闲硬盘为物理硬盘，部分存储节点提供的空闲硬盘为逻辑硬盘，业务节点的RAID控制器可以根据类型区分不同类型的硬盘，以便于创建不同的热备盘资源池。
值得说明的是,图1所示故障处理系统仅为一种示例,其中,故障处理系统中不同业务节点的硬盘数量和类型不构成对本发明的限制;不同存储节点的硬盘数量和类型也不构成对本发明的限制。而且,业务节点和存储节点的数量可以相等,也可以不相等。
可选地,图1所示故障处理系统中,空闲硬盘的信息还包括硬盘的故障域的信息,故障域用于标识不同硬盘所在的区域的关系,同一故障域内的不同硬盘同时故障时会导致数据丢失,不同故障域内的不同硬盘同时故障时不会导致数据丢失。该区域可以是物理区域,即根据硬盘所在存储节点的物理位置划分的不同区域,物理位置可以是存储节点所在的机架、机柜、机框中的至少一种,当两个不同区域的存储节点或存储节点的部件同时发生故障时,不会导致数据丢失,则称这个两个区域中的硬盘属于不同故障域;当两个不同区域的存储节点或存储节点的部件同时发生故障时,会导致数据丢失,则称这两个区域中的硬盘属于同一故障域。
示例地，表1为一种存储节点物理位置标识的示例。如表所示，若同一机柜的存储节点共用一套电源设备，当电源设备故障时，同一机柜的所有存储节点均发生故障，那么物理位置在同一机柜的不同存储节点的硬盘属于同一故障域，不在同一机柜的不同存储节点的硬盘属于不同故障域。表1中，存储节点1和存储节点2位于同一个机架的同一个机柜的不同机框中，当电源设备故障时，存储节点1和存储节点2内的硬盘均无法正常工作，运行在存储节点1和存储节点2上的应用程序会受到影响，那么，存储节点1和存储节点2的硬盘属于同一故障域；而存储节点1和存储节点3分别位于同一机架的不同机柜和机框中，当机架1中的机柜1的电源设备故障时，存储节点1无法正常工作，存储节点3不受影响，那么，存储节点1和存储节点3的硬盘属于不同故障域。表1之后给出一段按物理位置判断故障域的示意性代码。
表1
  机架 机柜 机框
存储节点1 1 1 1
存储节点2 1 1 2
存储节点3 1 2 1
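结合表1，可以按存储节点所在的机架、机柜、机框判断两块硬盘是否属于同一故障域。下面是一段示意性的 Python 草图，假设同一机柜的存储节点共用一套电源设备，因此机架号与机柜号均相同即视为同一故障域；其中的数据结构与函数名均为本文为说明而作的假设。

```python
# 示意：按表1的物理位置信息判断两个存储节点的硬盘是否属于同一故障域
LOCATIONS = {
    "存储节点1": {"rack": 1, "cabinet": 1, "frame": 1},
    "存储节点2": {"rack": 1, "cabinet": 1, "frame": 2},
    "存储节点3": {"rack": 1, "cabinet": 2, "frame": 1},
}

def same_fault_domain(node_a, node_b):
    """机架号与机柜号均相同（即共用电源设备）视为同一故障域。"""
    a, b = LOCATIONS[node_a], LOCATIONS[node_b]
    return a["rack"] == b["rack"] and a["cabinet"] == b["cabinet"]

print(same_fault_domain("存储节点1", "存储节点2"))  # True：同一机柜，属于同一故障域
print(same_fault_domain("存储节点1", "存储节点3"))  # False：不同机柜，属于不同故障域
```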
可选地,图1所示故障处理系统中,硬盘所在的区域也可以是逻辑区域。具体地,将硬盘所在存储节点按照预置策略划分成不同逻辑区域,以便于不同逻辑区域的存储节点或存储节点的部件(如网卡、硬盘等)故障时不影响应用程序正常运行,同一逻辑区域的存储节点或存储节点的部件故障会影响业务应用,其中,预置策略可以为根据业务需求将存储节点划分为不同逻辑区域。例如,将同一存储节点内的硬盘划分为一个逻辑区域,不同逻辑节点的硬盘划分为不同逻辑区域,那么,当单个存储节点整体故障或存储节点的部件故障时,不影响其他存储节点的正常运行。
接下来,结合上面的描述,具体介绍在图1所示的故障处理系统中热备盘资源池的创建方法。每个业务节点中RAID组由各自的RAID控制器管理,因此,每个业务节点的RAID控制器均会预先创建热备盘资源池。为简洁清晰的描述本发明所提供的故障处理方法,以故障处理系统中包括一个业务节点和一个专门用于提供空闲硬盘的存储节点为例, 结合图2进一步解释本发明实施例提供的一种故障处理的方法,如图所示,所述方法包括:
S201、存储控制器获取存储节点中空闲硬盘的信息。
具体地,空闲硬盘的信息中包括存储控制器所在存储节点的空闲硬盘的类型和容量。其中,空闲硬盘的类型用于标识该硬盘的种类,如SAS、SATA等,当空闲硬盘同时包括逻辑硬盘和物理硬盘时,硬盘的种类还可以进一步区分为逻辑硬盘和物理硬盘;容量用于标识该硬盘的大小,如300G、600G。
可选地,空闲硬盘的信息还包括该硬盘的故障域的信息。一个故障域中包括一个或多个硬盘。当同一故障域中不同硬盘同时故障时,会导致业务应用中断或数据丢失;当不同故障域中不同硬盘同时故障时,对业务无影响。
可选地,每个存储节点的存储控制器可以利用指定文件记录其所在存储节点的空闲硬盘的信息,也可以利用数据库中的数据表记录存储控制器所在存储节点的空闲硬盘的信息。进一步地,存储控制器可以周期性查询其所在存储节点空闲硬盘的信息,并更新其保存的内容。
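下面给出存储控制器周期性收集并记录本节点空闲硬盘信息的一段示意性 Python 草图。其中 scan_idle_disks 的枚举方式、各字段名以及记录文件 idle_disks.json 均为本文为说明而作的假设，并非具体实现。

```python
import json
import time

def scan_idle_disks():
    # 示意：枚举本存储节点中未被使用的硬盘，此处用静态数据代替实际扫描
    return [
        {"disk_id": "硬盘111", "type": "SAS", "capacity_gb": 300, "fault_domain": 1},
        {"disk_id": "硬盘112", "type": "SAS", "capacity_gb": 300, "fault_domain": 1},
    ]

def record_idle_disks(path="idle_disks.json", interval_sec=60, rounds=1):
    """周期性查询空闲硬盘的信息并更新到指定文件，rounds 仅用于示意循环次数。"""
    for _ in range(rounds):
        with open(path, "w", encoding="utf-8") as f:
            json.dump(scan_idle_disks(), f, ensure_ascii=False, indent=2)
        time.sleep(interval_sec)

if __name__ == "__main__":
    record_idle_disks(interval_sec=0)
```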
S202、RAID控制器获取空闲硬盘的信息。
具体地,业务节点的RAID控制器向存储控制器发送获取空闲硬盘的信息的请求消息,存储控制器向RAID控制器发送本存储节点的空闲硬盘的信息。
S203、RAID控制器根据空闲硬盘的信息创建至少一个热备盘资源池。
具体地,RAID控制器可以根据空闲硬盘的信息中空闲硬盘的类型和/或容量创建一个或多个热备盘资源池,如,RAID控制器可以按照空闲硬盘的类型创建热备盘资源池、或按照空闲硬盘的容量创建热备盘资源池、或按照空闲硬盘的类型和容量创建热备盘资源池,并记录热备盘资源池信息。
示例地，假设故障处理系统中存储节点1中空闲硬盘包括硬盘111和硬盘112，每个硬盘均为300G SAS盘；存储节点2中空闲硬盘包括硬盘121和硬盘122，每个硬盘均为600G SAS盘；存储节点3中空闲硬盘包括硬盘131和硬盘132，每个硬盘均为500G SATA盘。若按照硬盘的类型创建热备盘资源池，则RAID控制器可以按照空闲硬盘的类型创建2个热备盘资源池：热备盘资源池1包括硬盘111、硬盘112、硬盘121和硬盘122；热备盘资源池2包括硬盘131和硬盘132，其中，每个热备盘资源池中不同空闲硬盘的类型相同。可选地，RAID控制器也可以按照硬盘的容量创建热备盘资源池，则RAID控制器可以创建3个热备盘资源池：热备盘资源池1包括硬盘111、硬盘112；热备盘资源池2包括硬盘121和硬盘122；热备盘资源池3包括硬盘131和硬盘132，其中，每个热备盘资源池中不同空闲硬盘的容量相同。可选地，RAID控制器也可以按照硬盘的类型和容量创建3个热备盘资源池：热备盘资源池1包括硬盘111和硬盘112；热备盘资源池2包括硬盘121和硬盘122；热备盘资源池3包括硬盘131和硬盘132，其中，每个热备盘资源池中不同空闲硬盘的容量和类型均相同。
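结合上述示例，按空闲硬盘的类型和/或容量分组形成热备盘资源池的过程可以用下面一段示意性 Python 草图表示；其中的字段名与分组方式为本文假设，数据沿用上文示例。

```python
from collections import defaultdict

idle_disks = [
    {"disk_id": "硬盘111", "type": "SAS",  "capacity_gb": 300},
    {"disk_id": "硬盘112", "type": "SAS",  "capacity_gb": 300},
    {"disk_id": "硬盘121", "type": "SAS",  "capacity_gb": 600},
    {"disk_id": "硬盘122", "type": "SAS",  "capacity_gb": 600},
    {"disk_id": "硬盘131", "type": "SATA", "capacity_gb": 500},
    {"disk_id": "硬盘132", "type": "SATA", "capacity_gb": 500},
]

def build_pools(disks, by_type=True, by_capacity=True):
    """按类型和/或容量对空闲硬盘分组，每个分组即一个热备盘资源池。"""
    pools = defaultdict(list)
    for d in disks:
        key = ((d["type"],) if by_type else ()) + ((d["capacity_gb"],) if by_capacity else ())
        pools[key].append(d["disk_id"])
    return dict(pools)

print(build_pools(idle_disks))                     # 按类型+容量分为3个资源池
print(build_pools(idle_disks, by_capacity=False))  # 仅按类型分为2个资源池
```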
可选地,当存储节点所提供的空闲硬盘包括物理硬盘和逻辑硬盘时,即硬盘的类型还包括物理硬盘和逻辑硬盘,RAID控制器创建热备盘资源池时,可以先按照物理硬盘和逻辑硬盘对空闲硬盘进行分类,然后再按照硬盘的容量进一步细分,进而组成不同热备盘资源池。
可选地，当空闲硬盘的信息中还包括硬盘的故障域的信息时，RAID控制器还可以按照硬盘的容量、类型和故障域三个因素创建一个或多个热备盘资源池。每个热备盘资源池中空闲硬盘的容量和类型相同，且属于同一故障域；或者，每个热备盘资源池中空闲硬盘的容量和类型相同，且属于不同故障域。
示例地,若按照硬盘的类型、容量和故障域三者创建热备盘资源池,且存储节点1中空闲硬盘的信息如表2所示,将具有相同容量和类型,且在同一个故障域的硬盘创建为一个热备盘资源池,那么如表2所示的空闲硬盘的信息,RAID控制器可以创建3个热备盘资源池:热备盘资源池1包括硬盘11、硬盘12、硬盘21;热备盘资源池2包括硬盘31、硬盘32;热备盘资源池3包括硬盘43、硬盘45。可选地,将具有相同容量和类型,且在不同故障域的硬盘创建为一个热备盘资源池,那么如表2所示的空闲硬盘的信息,RAID控制器可以创建3个热备盘资源池:热备盘资源池1包括硬盘11、硬盘31、硬盘43;热备盘资源池2包括硬盘12、硬盘32、硬盘45;热备盘资源池3包括硬盘21,其中,每个热备盘资源池中空闲硬盘的容量和类型相同,且硬盘的故障域不同。
表2
（表2的内容在原文件中以图像形式给出：PCTCN2017112358-appb-000001）
在RAID控制器创建热备盘资源池后,会利用指定文件或数据库记录该热备盘资源池信息,该热备盘资源池信息中包括热备盘标识、硬盘类型和容量、硬盘所在存储节点。
可选地,热备盘资源池也可以包括空闲硬盘所在区域信息。
示例地,表3为RAID控制器根据表2所示的空闲硬盘的信息创建的热备盘资源池信息的一种示例,如表所示,RAID控制器记录热备盘资源池信息,其中,包括热备盘资源池标识、空闲硬盘标识、硬盘容量、硬盘类型、硬盘所在存储节点、硬盘所在区域。
表3
（表3的内容在原文件中以图像形式给出：PCTCN2017112358-appb-000002）
S204、RAID控制器创建RAID组时,根据热备盘资源池中空闲硬盘的信息确定与该RAID组相匹配的至少一个热备盘资源池,并记录与该RAID组匹配的至少一个热备盘资源池的映射关系。
具体地,RAID控制器创建RAID组时,根据热备盘资源池中空闲硬盘的类型和容量确定与RAID组匹配的热备盘资源池,热备盘资源池与RAID组匹配是指热备盘资源池中空闲硬盘的容量大于或等于RAID组中成员硬盘的容量,且热备盘资源池中空闲硬盘的类型与RAID组中成员硬盘的类型相同。其中,热备盘资源池和RAID组的映射关系可以利用指定文件记录,也可以利用数据库中数据表进行记录。
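创建RAID组时确定匹配的热备盘资源池并记录映射关系的过程，可以用下面一段示意性 Python 草图说明；资源池与映射关系的数据结构均为本文假设，仅为便于理解。

```python
# 示意：筛选出类型相同且容量大于或等于成员盘容量的热备盘资源池，并记录映射关系
pools = {
    "热备盘资源池1": {"type": "SAS",  "capacity_gb": 300, "disks": ["硬盘111", "硬盘112"]},
    "热备盘资源池2": {"type": "SAS",  "capacity_gb": 600, "disks": ["硬盘121", "硬盘122"]},
    "热备盘资源池3": {"type": "SATA", "capacity_gb": 500, "disks": ["硬盘131", "硬盘132"]},
}

raid_pool_mapping = {}   # RAID组标识 -> 匹配的热备盘资源池标识列表

def match_pools(raid_id, member_type, member_capacity_gb):
    matched = [pid for pid, p in pools.items()
               if p["type"] == member_type and p["capacity_gb"] >= member_capacity_gb]
    raid_pool_mapping[raid_id] = matched
    return matched

print(match_pools("RAID5", "SAS", 300))   # ['热备盘资源池1', '热备盘资源池2']
print(raid_pool_mapping)
```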
示例地,可以在表3所示的热备盘资源池信息中添加与RAID组的映射关系,具体如表4所示,热备盘资源池1与RAID5相匹配。
表4
（表4的内容在原文件中以图像形式给出：PCTCN2017112358-appb-000003）
值得说明的是,对于同一个业务节点中存在多个按照相同配置策略组成的RAID组时,例如业务节点1中存在2个RAID5时,还可以对RAID组添加其他标识字段用于区分不同RAID组,如第一RAID5和第二RAID5。
可选地,也可以新建一个如表5所示的映射关系,该映射关系仅用于记录热备盘资源池标识和匹配RAID组的对应关系。
表5
热备盘资源池标识 匹配RAID组
热备盘资源池1 RAID5
当RAID控制器接收到故障硬盘的信息时,RAID控制器可以根据故障硬盘的信息(故障硬盘的类型和容量)和映射关系快速确定与故障硬盘所在RAID组相匹配的热备盘资源池,并选择空闲硬盘作为热备盘完成数据恢复处理,其中,故障硬盘的信息中包括故障硬盘的类型和容量。
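上述“根据映射关系和故障硬盘的信息快速确定匹配的热备盘资源池并选择空闲硬盘”的过程可以示意如下，数据结构沿用上文假设，仅为便于理解的草图。

```python
def find_spare(raid_id, failed_type, failed_capacity_gb, mapping, pools):
    """在与该RAID组匹配的资源池中，选出类型相同且容量满足要求的一块空闲硬盘。"""
    for pool_id in mapping.get(raid_id, []):
        pool = pools[pool_id]
        if pool["type"] == failed_type and pool["capacity_gb"] >= failed_capacity_gb:
            if pool["disks"]:                          # 资源池中仍有空闲硬盘
                return pool_id, pool["disks"][0]
    return None, None

pools = {"热备盘资源池1": {"type": "SAS", "capacity_gb": 300, "disks": ["硬盘111", "硬盘112"]}}
mapping = {"RAID5": ["热备盘资源池1"]}
print(find_spare("RAID5", "SAS", 300, mapping, pools))  # ('热备盘资源池1', '硬盘111')
```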
值得说明的是,当RAID控制器由业务节点的处理器实现时,热备盘资源池和RAID组的映射关系存储在业务节点的存储器中;当RAID控制器由RAID卡中的RAID控制器实现时,热备盘资源池和RAID组的映射关系存储在RAID卡的存储器中。
还值得说明的是,图2所示方法为以一个存储节点和一个业务节点为例进行的说明,在具体实施过程中,当故障处理系统中包括多个存储节点时,每个存储节点的存储控制器均会获取其所在的存储节点的空闲硬盘的信息,并将空闲硬盘的信息发送给业务节点的RAID控制器,RAID控制器会根据所获取的每个存储节点的空闲硬盘的信息创建热备盘资源池。而且,存储节点的个数可以根据具体业务需求进行调整,即空闲硬盘的数量可以根据业务需求进行无限扩容,以此解决了现有技术中热备盘数量受限的问题。
通过上述内容的描述,每个业务节点中的RAID控制器可以获取存储控制器确定的存储资源池中空闲硬盘的信息,根据该空闲硬盘的信息创建热备盘资源池,在创建RAID组 时,将热备盘资源池与RAID组进行匹配,当RAID组中出现故障硬盘时,RAID控制器即可以在匹配的热备盘资源池中选择一个热备盘资源池中的空闲硬盘对故障硬盘进行数据恢复。与现有技术中利用业务节点的本地硬盘作为热备盘的技术方案相比,本发明通过跨网络的存储节点的空闲硬盘组成热备盘资源池,且存储节点可以无限扩充,相应的,热备盘资源池中空闲硬盘也可以作对应扩充,解决了现有技术中热备盘数量受限的问题,提高了整个系统的可靠性。另一方面,业务节点的RAID控制器在创建RAID组时,可以将业务节点的本地硬盘全部用于RAID组的数据盘或校验盘,不用再预留本地硬盘作为热备盘,提高了本地硬盘利用率。
进一步地,结合图3详细介绍本发明所提供的一种热备盘管理的方法,如图所示,所述方法包括:
S301、RAID控制器获取RAID控制器所在的业务节点中任一RAID组的故障硬盘的信息。
具体地,RAID控制器可以通过元数据信息获知该业务节点中所有RAID组,并可以对该RAID控制器所在的业务节点中每个RAID组的硬盘进行监控,当出现硬盘故障时,RAID控制器可以根据故障硬盘的信息确定故障硬盘的容量和类型。
S302、RAID控制器在与该RAID组匹配的热备盘资源池中选择一个空闲硬盘对所述故障硬盘的数据进行恢复。
具体地,RAID控制器根据其记录的热备盘资源池信息,选择与故障硬盘所在的RAID组匹配的热备盘资源池,该热备盘资源池中硬盘的容量大于或等于故障硬盘的容量,且热备盘资源池的硬盘的类型与故障硬盘的类型相同。
其中,RAID控制器选择热备盘资源池和热备盘的过程如图3A所示,所述方法包括:
S302a、RAID控制器判断本次硬盘故障是否为该RAID组中首次硬盘故障。
具体地,RAID控制器的元数据信息中还包括每个RAID组的成员硬盘和故障处理信息,其中,故障处理信息包括故障硬盘的标识、容量和类型,以及恢复该故障硬盘所使用的热备盘信息,热备盘信息包括热备盘的容量、类型、热备盘所在区域和其所归属的热备盘资源池。当业务节点中任一RAID组出现硬盘故障时,RAID控制器可以根据元数据信息确定本次硬盘故障是否为该RAID组中首次硬盘故障,当元数据信息中无该RAID组的故障处理信息时,表示该RAID组为首次硬盘故障,则执行步骤S303;当元数据信息中已记录该RAID组的故障处理信息时,表示该RAID组为非首次硬盘故障,则执行步骤S304。
S302b、当本次硬盘故障为该RAID组中首次硬盘故障时,RAID控制器在与该RAID组中相匹配的热备盘资源池中选择第一热备盘资源池中的第一空闲硬盘作为热备盘。
具体地,RAID控制器可以按照以下方式中的任意一种确定第一热备盘资源池:
方式一:RAID控制器在与RAID组匹配的一个或多个热备盘资源池中,按照热备盘资源池的标识依次选择一个热备盘资源池,作为第一热备盘资源池。
方式二:RAID控制器在与RAID组匹配的一个或多个热备盘资源池中随机选择一个热备盘资源池作为第一热备盘资源池。
其中,第一热备盘资源池中空闲硬盘的容量大于或等于故障硬盘的容量,且第一热备盘资源池中空闲硬盘的类型与故障硬盘的类型相同。
进一步地,在确定第一热备盘资源池后,RAID控制器可以按照如下方式中的任意一种确定第一空闲硬盘作为热备盘:
方式一:RAID控制器在第一热备盘资源池中按照硬盘的标识依次选择一个空闲硬盘作为第一空闲硬盘。
方式二:RAID控制器在第一热备盘资源池中随机选择一个空闲硬盘作为第一空闲硬盘。
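上述方式一与方式二（按标识依次选择或随机选择）可以用下面一段示意性 Python 草图表示，函数名与数据均为本文假设。

```python
import random

def pick_sequential(candidates):
    return sorted(candidates)[0]          # 方式一：按标识依次选择

def pick_random(candidates):
    return random.choice(candidates)      # 方式二：随机选择

matched_pools = ["热备盘资源池2", "热备盘资源池1"]
first_pool = pick_sequential(matched_pools)       # 选出"热备盘资源池1"作为第一热备盘资源池
idle_disks_in_pool = ["硬盘121", "硬盘122"]
first_disk = pick_random(idle_disks_in_pool)      # 随机选出其中一块空闲硬盘作为第一空闲硬盘
print(first_pool, first_disk)
```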
S302c、当本次硬盘故障为该RAID组中非首次硬盘故障时,RAID控制器判断第一热备盘资源池剩余空闲硬盘是否与该RAID组中已使用的热备盘属于同一故障域。
具体地,当故障硬盘为该RAID组中非首次硬盘故障时,RAID控制器需要判断第一热备盘资源池中剩余的空闲硬盘是否与该RAID组中已使用的热备盘属于同一个故障域,若为同一个故障域,则执行步骤S302d;若不为同一个故障域,则执行步骤S302e。
S302d、当第一热备盘资源池剩余空闲硬盘与该RAID组中已使用的热备盘属于同一个故障域时,RAID控制器在与该RAID组匹配的热备盘资源池中选择第二热备盘资源池中的第一空闲硬盘作为热备盘。
具体地,第二热备盘资源池是与该RAID匹配的热备盘资源池中,除第一热备盘资源池外的任一热备盘资源池,第二热备盘资源池和第二热备盘资源池中第一空闲硬盘的选择方法与步骤S302b相同,在此不再赘述。其中,第二热备盘资源池的第一空闲硬盘的类型与故障硬盘的类型相同,且第二热备盘资源池的第一空闲硬盘的容量大于或等于故障硬盘的容量,以及第二热备盘资源池的第一空闲硬盘与第一热备盘资源池的第一空闲硬盘属于不同故障域。
S302e、当第一热备盘资源池剩余空闲硬盘与该RAID组中已使用的热备盘不属于同一个故障域时,RAID控制器在第一热备盘资源池中选择第二空闲硬盘作为热备盘。
具体地,RAID控制器可以按照容量、类型和故障域中的至少一种创建的资源池,当RAID控制器仅考虑容量和/或类型创建热备盘资源池时,同一热备盘资源池可能包括同一故障域的不同空闲硬盘,也可能包括不同故障域的空闲硬盘,为减少同一RAID组中已使用的同一区域的两个或两个以上热备盘再次故障所导致的数据丢失问题,RAID控制器可以在已使用的第一热备盘资源池中选择不同故障域的空闲硬盘作为热备盘,如选择第一热备盘资源池中选择第二空闲硬盘作为热备盘,第一热备盘资源池的第二空闲硬盘的容量大于或等于故障硬盘的容量,且第一热备盘资源池的第二空闲硬盘的类型与故障硬盘相同,及第一热备盘资源池中第一空闲硬盘和第二空闲硬盘属于不同故障域。当第一热备盘资源池剩余空闲硬盘与该RAID组中已使用的热备盘不属于同一个故障域时,第一热备盘资源池的第二空闲硬盘的选法与步骤S302b相同,在此不再赘述。
可选地,当第一热备盘资源池中不存在与第一热备盘资源池的第一空闲硬盘属于同一区域的空闲硬盘时,RAID控制器还可以在与该RAID组匹配的其他热备盘资源池中选择空闲硬盘作为热备盘,选择的热备盘资源池和空闲硬盘的方法与步骤S302b相同,在此不再赘述。
通过步骤S302a至S302e的描述,在同一RAID组中出现多次硬盘故障时,RAID控制器还可以根据空闲硬盘的容量、类型和故障域选择热备盘,以避免当同一RAID组中多次出现硬盘故障,且热备盘属于同一故障域时,两个热备盘再次出现故障所导致的数据丢失问题,提高应用的可靠性。
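与步骤S302a至S302e对应的故障域判断逻辑可以用下面的示意性 Python 草图概括：再次出现硬盘故障时，优先选择与该RAID组已使用的热备盘不在同一故障域的空闲硬盘。其中故障域字段与示例数据均为本文假设。

```python
def pick_spare_avoiding_domain(pool_disks, used_domains):
    """pool_disks：[(硬盘标识, 故障域)]；used_domains：该RAID组已使用热备盘所在故障域的集合。"""
    for disk_id, domain in pool_disks:
        if domain not in used_domains:
            return disk_id
    return None   # 本资源池内没有处于不同故障域的空闲硬盘，需在其他匹配的资源池中继续选择

pool_disks = [("硬盘11", 1), ("硬盘31", 2), ("硬盘43", 3)]
used_domains = {1}                      # 首次故障时已使用了故障域1中的热备盘
print(pick_spare_avoiding_domain(pool_disks, used_domains))  # 输出"硬盘31"
```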
可选地,如图3B所示,RAID控制器在与RAID组匹配的热备盘资源池中选择热备盘之后,所述方法还包括:
S311、RAID控制器向存储控制器发送第一请求消息。
具体地,在如图1所示的故障管理系统中,每个业务节点的RAID控制器均会创建热备盘资源池,并建立其对应的业务节点中RAID组与热备盘资源池的映射关系,不同业务节点的RAID控制器创建的热备盘资源池中所包含的空闲硬盘可能相同,当任一业务节点的RAID控制器选择一个空闲硬盘作为热备盘时,为避免所选择的空闲硬盘已被其他RAID控制器使用,需要向所选择的空闲硬盘所在的存储节点的存储控制器发送第一请求消息,第一请求消息用于确定所选择的空闲硬盘的状态为未使用。
S312、当RAID控制器接收用于指示RAID控制器所选择的空闲硬盘的状态为未使用的第一请求消息的响应结果时,将所选择的空闲硬盘挂载到该RAID控制器所在业务节点的本地目录中,并执行故障硬盘的数据恢复处理。
具体地，当RAID控制器所选择的空闲硬盘所在的存储控制器确定该空闲硬盘的状态为“未使用”时，存储控制器向RAID控制器发送第一请求消息的响应结果指示该空闲硬盘的状态为未使用。相应的，RAID控制器在接收到第一请求消息的响应结果后，将第一空闲硬盘挂载到该RAID控制器所在业务节点的本地目录中，如在Linux系统中执行mount命令（如mount存储节点IP:空闲硬盘盘符）将存储节点的目录挂载在本地目录中，并执行故障硬盘的数据恢复处理。
其中,RAID控制器将所选择的空闲硬盘挂载到本地后,会更新本地保存的记录RAID组关系的元数据信息中故障处理信息,主要更新故障处理信息中用于恢复该故障硬盘所使用的热备盘信息,其中,热备盘信息包括热备盘的容量、类型、热备盘所在区域和其所归属的热备盘资源池。RAID控制器根据元数据信息中其他非故障的数据盘中的数据和校验盘中的数据,将故障硬盘的数据重新写入热备盘中,以此完成故障硬盘的数据恢复处理。
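上述“确认空闲硬盘状态、挂载到本地、再执行数据恢复”的流程可以用如下示意性 Python 草图概括。其中确认状态的接口、挂载参数等均为本文假设，mount 命令仅示意 Linux 下的一种挂载方式，并非本发明限定的实现。

```python
import subprocess

def confirm_unused(disk_status, disk_id):
    # 第一请求消息的示意：disk_status 模拟存储控制器维护的空闲硬盘状态表
    return disk_status.get(disk_id) == "未使用"

def mount_remote_disk(node_ip, export_path, local_dir):
    # 示意：将存储节点上的空闲硬盘目录挂载到业务节点的本地目录（需要相应权限与协议支持）
    subprocess.run(["mount", f"{node_ip}:{export_path}", local_dir], check=True)

def recover_with_spare(disk_status, disk_id, node_ip, export_path, local_dir):
    if not confirm_unused(disk_status, disk_id):
        return False                      # 该空闲硬盘已被其他RAID控制器使用，需重新选择
    mount_remote_disk(node_ip, export_path, local_dir)
    # 此处省略：根据非故障数据盘和校验盘中的数据重建故障盘数据并写入该热备盘
    return True
```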
通过上述内容的描述,当故障处理系统中任一业务节点的RAID控制器接收到该业务节点中任一RAID组的故障硬盘的信息时,可以根据故障硬盘的信息在与该RAID组相匹配的热备盘资源池中选择一个热备盘资源池,并在该热备盘资源池中选择一个空闲硬盘作为热备盘进行数据恢复,而且,热备盘可以由存储节点的空闲硬盘以热备盘资源池形式提供,存储节点的数量可以根据业务需求不断增加,相应的,热备盘资源池中硬盘也可以不断扩充,与现有技术相比热备盘的数量不受限制,解决了现有技术中热备盘受限的问题。进一步地,考虑空闲硬盘的故障域,RAID控制器可以根据空闲硬盘的容量、类型和故障域选择空闲硬盘,避免在同一RAID组中利用同一个故障域的空闲硬盘进行数据恢复后,再次出现热备盘故障所导致的数据丢失,以此提高业务应用和整个系统的可靠性。
值得说明的是,对于上述方法实施例,为了简单描述,故将其都表述为一系列的动作组合,但是本领域技术人员应该知悉,本发明并不受所描述的动作顺序的限制。本领域的技术人员根据以上描述的内容,能够想到的其他合理的步骤组合,也属于本发明的保护范围内。
上文中结合图1至图3B,详细描述了根据本发明实施例所提供的一种故障处理系统的方法,下面将结合图4至图6,描述根据本发明实施例所提供的故障处理的装置和设备。
图4为本发明提供的一种故障处理的装置示意图，如图所示，所述装置400包括获取单元401、处理单元402；
所述获取单元401,用于获取RAID组中故障硬盘的信息,所述故障硬盘的信息包括所述故障硬盘的容量和类型;
所述处理单元402,用于在与所述RAID组匹配的热备盘资源池中选择空闲硬盘对所述故障硬盘的数据进行恢复,所述热备盘资源池是所述RAID控制器预先创建,所述热备盘资源池中包括所述至少一个存储节点中的一个或多个空闲硬盘,所述RAID控制器所选择的空闲硬盘的容量大于或等于所述故障硬盘的容量,且所述RAID控制器所选择的空闲硬盘的类型与所述故障硬盘的类型相同。
应理解的是，本发明实施例的装置400可以通过专用集成电路（Application Specific Integrated Circuit，ASIC）实现，或可编程逻辑器件（Programmable Logic Device，PLD）实现，上述PLD可以是复杂可编程逻辑器件（Complex Programmable Logic Device，CPLD），现场可编程门阵列（Field-Programmable Gate Array，FPGA），通用阵列逻辑（Generic Array Logic，GAL）或其任意组合。当通过软件实现图2至图3B所示的故障处理方法时，装置400及其各个模块也可以为软件模块。
可选地,获取单元401,还用于获取所述存储控制器发送的空闲硬盘的信息,所述空闲硬盘的信息包括所述空闲硬盘的类型和容量;
所述处理单元402,还用于创建至少一个热备盘资源池,每个热备盘资源池包括具有相同容量和相同类型的至少一个存储节点的至少一个空闲硬盘;
所述处理单元402,还用于在创建所述RAID组时,根据所述RAID组中硬盘的类型和容量确定与所述RAID组匹配的一个或多个热备盘资源池,并记录所述RAID组与所述RAID组匹配的一个或多个热备盘资源池的映射关系;
则所述处理单元402在与所述RAID组匹配的热备盘资源池中选择空闲硬盘对所述故障硬盘的数据进行恢复具体为:
根据所述映射关系和所述获取单元401获取的故障硬盘的信息,在与所述RAID组匹配的热备盘资源池中选择空闲硬盘对所述故障硬盘的数据进行恢复。
可选地,所述空闲硬盘的信息中还包括所述空闲硬盘的故障域的信息,所述处理单元402所选择的空闲硬盘与所述RAID组中已使用的热备盘不在同一故障域,所述故障域的信息用于标识不同硬盘所在区域的关系,同一故障域内的不同硬盘同时故障时会导致数据丢失,不同故障域内的不同硬盘同时故障时不会导致数据丢失。
可选地,所述处理单元所选择的空闲硬盘的状态为未使用。
具体地,所述装置400中处理单元402,还用于向所述存储控制器发送第一请求消息,所述第一请求消息用于确定所述控制器所选择的空闲硬盘的状态;
所述获取单元401,还用于接收用于指示所述控制器所选择的空闲硬盘的状态为未使用的所述第一请求消息的响应结果;
所述处理单元402,还用于将所选择的空闲硬盘挂载到本地,并执行所述RAID组的故障数据恢复处理。
可选地,所述处理单元选择空闲硬盘作为热备盘对所述故障硬盘的数据进行恢复具体为:
根据所述RAID组中非故障的数据盘和校验盘的数据,将所述故障硬盘数据重新写入所述RAID控制器所选择的热备盘。
根据本发明实施例的装置400可对应于执行本发明实施例中描述的方法,并且装置400中的各个单元的上述和其它操作和/或功能分别为了实现图2至图3B中的各个方法的相应流程,为了简洁,在此不再赘述。
通过以上内容的描述,本发明提供的一种装置400提供一种跨节点的热备盘实现方式,利用存储节点的空闲硬盘创建热备盘资源池,并建立热备盘资源池和RAID组的映射关系,当任一RAID组出现故障硬盘时,可以在与故障硬盘所在RAID组匹配的热备盘资源池中选择一个空闲硬盘作为热备盘,对故障硬盘数据进行恢复,其中,存储节点及存储节点中空闲硬盘的数量可以根据业务需求扩容,相应的,热备盘资源池的数量也可以不受限制,解决现有技术中利用业务节点的本地硬盘作热备盘数量受限问题,而且,对于同一RAID组中多次出现故障硬盘的情况,可以通过热备盘资源池提供多个热备盘,提高了整个系统的可靠性。另一方面,业务节点的所有本地硬盘均可以用于RAID组的数据盘或校验盘,提高了本地硬盘的利用率。
图5为本发明实施例提供的一种故障处理的设备500的示意图,如图所示,所述设备500包括处理器501、存储器502、通信接口503和总线504。其中,处理器501、存储器502、通信接口503通过总线504进行通信,也可以通过无线传输等其他手段实现通信。该存储器502用于存储指令,该处理器501用于执行该存储器502存储的指令。该存储器502存储程序代码,且处理器501可以调用存储器502中存储的程序代码执行以下操作:
获取RAID组中故障硬盘的信息,所述故障硬盘的信息包括所述故障硬盘的容量和类型;
在与所述RAID组匹配的热备盘资源池中选择空闲硬盘对所述故障硬盘的数据进行恢复,所述热备盘资源池是所述设备500预先创建,所述热备盘资源池中包括所述至少一个存储节点中的一个或多个空闲硬盘,所述设备500所选择的空闲硬盘的容量大于或等于所述故障硬盘的容量,且所述设备500所选择的空闲硬盘的类型与所述故障硬盘的类型相同。
应理解,在本发明实施例中,该处理器501可以是CPU,该处理器501还可以是其他通用处理器、数字信号处理器(DSP)、专用集成电路(ASIC)、现成可编程门阵列(FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。
该存储器502可以包括只读存储器和随机存取存储器,并向处理器501提供指令和数据。存储器502的一部分还可以包括非易失性随机存取存储器。例如,存储器502还可以存储设备类型的信息。
该总线504除包括数据总线之外,还可以包括电源总线、控制总线和状态信号总线等。但是为了清楚说明起见,在图中将各种总线都标为总线504。
应理解,根据本发明实施例的存储设备500对应于本发明实施例图1中所述的业务节点。根据本发明实施例的故障处理的设备500可对应于本发明实施例中的故障处理的装置400,并可以对应于执行根据本发明实施例图2至图3A中的相应主体,并且设备500中的各个模块的上述和其它操作和/或功能分别为了实现图2至图3B中的各个方法的相应流程,为了简洁,在此不再赘述。
图6为本发明实施例提供的另一种故障处理的设备600的示意图,如图所示,所述设备600包括处理器601、存储器602、通信接口603、RAID卡604和总线607,处理器601、存储器602、通信接口603和RAID卡604通过总线607进行通信,也可以通过无线传输等其他手段实现通信。其中,RAID卡604中包括处理器605、存储器606、总线608,处理器605和存储器606通过总线608进行通信。该存储器606用于存储指令,该处理器605用于执行该存储器606存储的指令。该存储器606存储程序代码,且处理器605可以调用存储器606中存储的程序代码执行以下操作:
获取RAID组中故障硬盘的信息,所述故障硬盘的信息包括所述故障硬盘的容量和类型;
在与所述RAID组匹配的热备盘资源池中选择空闲硬盘对所述故障硬盘的数据进行恢复，所述热备盘资源池是所述设备600预先创建，所述热备盘资源池中包括所述至少一个存储节点中的一个或多个空闲硬盘，所述设备600所选择的空闲硬盘的容量大于或等于所述故障硬盘的容量，且所述设备600所选择的空闲硬盘的类型与所述故障硬盘的类型相同。
应理解,在本发明实施例中,该处理器605可以是CPU,该处理器605还可以是其他通用处理器、数字信号处理器(DSP)、专用集成电路(ASIC)、现成可编程门阵列(FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。
该存储器606可以包括只读存储器和随机存取存储器，并向处理器605提供指令和数据。存储器606的一部分还可以包括非易失性随机存取存储器。例如，存储器606还可以存储设备类型的信息。
该总线608和总线607除包括数据总线之外,还可以包括电源总线、控制总线和状态信号总线等。但是为了清楚说明起见,在图中将各种总线都标为总线608和总线607。
应理解,根据本发明实施例的存储设备600对应于本发明实施例图1中所述的业务节点。根据本发明实施例的故障处理的设备600可对应于本发明实施例中的故障处理的装置400,并可以对应于执行根据本发明实施例图2至图3A中的相应主体,并且设备600中的各个模块的上述和其它操作和/或功能分别为了实现图2至图3B中的各个方法的相应流程,为了简洁,在此不再赘述。
可选地,设备600还可以是图6所示的RAID卡604。
综上所述,通过本申请提供的设备500和设备600,利用跨网络的存储节点的空闲硬盘实现热备盘资源池,并建立热备盘资源池与每个RAID组之间的映射关系,当任一RAID组出现故障硬盘时,可以在与该RAID组匹配的热备盘资源池中选择一个热备盘资源池中的一个空闲硬盘作为热备盘进行故障数据恢复,热备盘资源池中空闲硬盘的数量可以根据业务需求对存储节点中空闲硬盘的数量进行调整,以此解决现有技术中热备盘资源池中硬盘数量受限所导致的影响系统可靠性的问题。另一方面,业务节点的所有本地硬盘均可以用于RAID组的数据盘和校验盘,提高了本地硬盘的利用率。
本领域普通技术人员可以意识到，结合本文中所公开的实施例描述的各示例的单元及算法步骤，能够以电子硬件、或者计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行，取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能，但是这种实现不应认为超出本发明的范围。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统、装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
在本申请所提供的几个实施例中,应该理解到,所揭露的系统、装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本发明各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。
所述功能如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本发明的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本发明各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(Read-Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述，仅为本发明的具体实施方式，但本发明的保护范围并不局限于此，任何熟悉本技术领域的技术人员在本发明揭露的技术范围内，可轻易想到变化或替换，都应涵盖在本发明的保护范围之内。因此，本发明的保护范围应以所述权利要求的保护范围为准。

Claims (10)

  1. 一种故障处理的方法,其特征在于,所述方法应用于故障处理系统中,所述故障处理系统中包括至少一个业务节点和至少一个存储节点,所述至少一个业务节点和所述至少一个存储节点之间通过网络进行通信,每个存储节点包括至少一个空闲硬盘,每个业务节点包括独立硬盘冗余阵列RAID控制器与RAID组,所述RAID控制器管理所述RAID组,所述方法包括:
    所述RAID控制器获取RAID组中故障硬盘的信息,所述故障硬盘的信息包括所述故障硬盘的容量和类型;
    所述RAID控制器在与所述RAID组匹配的热备盘资源池中选择空闲硬盘对所述故障硬盘的数据进行恢复,所述热备盘资源池是所述RAID控制器预先创建,所述热备盘资源池中包括所述至少一个存储节点中的一个或多个空闲硬盘,所述RAID控制器所选择的空闲硬盘的容量大于或等于所述故障硬盘的容量,且所述RAID控制器所选择的空闲硬盘的类型与所述故障硬盘的类型相同。
  2. 根据权利要求1所述方法,其特征在于,所述存储节点还包括存储控制器,所述方法还包括:
    所述RAID控制器获取所述存储控制器发送的空闲硬盘的信息,所述空闲硬盘的信息包括所述空闲硬盘的类型和容量;
    所述RAID控制器创建至少一个热备盘资源池,每个热备盘资源池包括具有相同容量和/或相同类型的至少一个空闲硬盘;
    所述RAID控制器创建所述RAID组时,根据所述RAID组中硬盘的类型和容量确定与所述RAID组匹配的一个或多个热备盘资源池,并记录所述RAID组与所述RAID组匹配的一个或多个热备盘资源池的映射关系;
    则所述RAID控制器在与所述RAID组匹配的热备盘资源池中选择空闲硬盘作为热备盘对所述故障硬盘的数据进行恢复具体为:
    所述RAID控制器根据所述映射关系和所述故障硬盘的信息,在与所述RAID组匹配的热备盘资源池中选择空闲硬盘对所述故障硬盘的数据进行恢复。
  3. 根据权利要求1至2中任一所述方法,其特征在于,所述空闲硬盘的信息中还包括硬盘的故障域的信息,所述RAID控制器所选择的空闲硬盘与所述RAID组中已使用的热备盘不在同一故障域,所述故障域的信息用于标识不同硬盘所在的区域的关系,同一故障域内的不同硬盘同时故障时会导致数据丢失,不同故障域内的不同硬盘同时故障时不会导致数据丢失。
  4. 根据权利要求1至3中任一所述方法,其特征在于,所述RAID控制器所选择的空闲硬盘的状态为未使用。
  5. 一种故障处理的装置,其特征在于,所述装置包括获取单元和处理单元;
    所述获取单元,用于获取RAID组中故障硬盘的信息,所述故障硬盘的信息包括所述故障硬盘的容量和类型;
    所述处理单元,用于在与所述RAID组匹配的热备盘资源池中选择空闲硬盘对所述故障硬盘的数据进行恢复,所述热备盘资源池是所述RAID控制器预先创建,所述热备盘资源池中包括所述至少一个存储节点中的一个或多个空闲硬盘,所述RAID控制器所选择的空闲硬盘的容量大于或等于所述故障硬盘的容量,且所述RAID控制器所选择的空闲硬盘的类型与所述故障硬盘的类型相同。
  6. 根据权利要求5所述装置,其特征在于,
    所述获取单元,还用于获取所述存储控制器发送的空闲硬盘的信息,所述空闲硬盘的信息包括所述空闲硬盘的类型和容量;
    所述处理单元,还用于创建至少一个热备盘资源池,每个热备盘资源池包括具有相同容量和相同类型的至少一个存储节点的至少一个空闲硬盘;创建所述RAID组时,根据所述RAID组中硬盘的类型和容量确定与所述RAID组匹配的一个或多个热备盘资源池,并记录所述RAID组与所述RAID组匹配的一个或多个热备盘资源池的映射关系;
    则所述处理单元在与所述RAID组匹配的热备盘资源池中选择空闲硬盘对所述故障硬盘的数据进行恢复具体为:
    根据所述映射关系和所述获取单元获取的故障硬盘的信息,在与所述RAID组匹配的热备盘资源池中选择空闲硬盘对所述故障硬盘的数据进行恢复。
  7. 根据权利要求5至6中任一所述装置,其特征在于,所述空闲硬盘的信息中还包括所述空闲硬盘的故障域的信息,所述RAID控制器所选择的空闲硬盘与所述RAID组中已使用的热备盘不在同一故障域,所述故障域的信息用于标识不同硬盘所在区域的关系,同一故障域内的不同硬盘同时故障时会导致数据丢失,不同故障域内的不同硬盘同时故障时不会导致数据丢失。
  8. 根据权利要求6至7中任一所述装置,其特征在于,所述处理单元所选择的空闲硬盘的状态为未使用。
  9. 一种故障处理的设备,其特征在于,所述设备包括处理器、存储器、通信接口、总线,所述处理器、存储器和通信接口之间通过总线连接并完成相互间的通信,所述存储器中用于存储计算机执行指令,所述设备运行时,所述处理器执行所述存储器中的计算机执行指令以利用所述设备中的硬件资源执行权利要求1至4中任一所述的方法。
  10. 一种故障处理的设备,其特征在于,所述设备包括RAID卡、存储器、通信接口、总线,所述RAID卡中包括处理器和存储器,所述RAID卡的处理器和RAID卡的存储器通过总线相通信,所述RAID卡、存储器、通信接口通过所述总线相互通信,所述RAID卡的存储器中用于存储计算机执行指令,所述设备运行时,所述RAID卡的处理器执行所述RAID卡的存储器中的计算机执行指令以利用所述设备中的硬件资源执行权利要求1至4中任一所述的方法。
PCT/CN2017/112358 2016-12-06 2017-11-22 一种故障处理的方法、装置和设备 WO2018103533A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/362,196 US20190220379A1 (en) 2016-12-06 2019-03-22 Troubleshooting Method, Apparatus, and Device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201611110928.0A CN108153622B (zh) 2016-12-06 2016-12-06 一种故障处理的方法、装置和设备
CN201611110928.0 2016-12-06

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/362,196 Continuation US20190220379A1 (en) 2016-12-06 2019-03-22 Troubleshooting Method, Apparatus, and Device

Publications (1)

Publication Number Publication Date
WO2018103533A1 true WO2018103533A1 (zh) 2018-06-14

Family

ID=62468352

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/112358 WO2018103533A1 (zh) 2016-12-06 2017-11-22 一种故障处理的方法、装置和设备

Country Status (3)

Country Link
US (1) US20190220379A1 (zh)
CN (1) CN108153622B (zh)
WO (1) WO2018103533A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113764025A (zh) * 2020-06-30 2021-12-07 北京沃东天骏信息技术有限公司 一种故障磁盘的处理方法和装置

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110737924B (zh) * 2018-07-20 2021-07-27 中移(苏州)软件技术有限公司 一种数据保护的方法和设备
CN109189338B (zh) * 2018-08-27 2021-06-18 郑州云海信息技术有限公司 一种热备盘添加的方法、系统及设备
CN111381770B (zh) * 2018-12-30 2021-07-06 浙江宇视科技有限公司 一种数据存储切换方法、装置、设备及存储介质
US11138042B2 (en) * 2019-04-05 2021-10-05 Grass Valley Canada System and method of identifying equivalents for task completion
CN110989923A (zh) * 2019-10-30 2020-04-10 烽火通信科技股份有限公司 一种分布式存储系统的部署方法及装置
CN110928724B (zh) * 2019-11-29 2023-04-28 重庆紫光华山智安科技有限公司 一种全局热备盘管理方法、装置、存储介质及电子设备
CN113297015A (zh) * 2020-04-07 2021-08-24 阿里巴巴集团控股有限公司 磁盘恢复方法以及装置
CN113259474B (zh) * 2021-06-10 2021-10-08 苏州浪潮智能科技有限公司 一种存储管理方法、系统、存储介质及设备
CN113254276A (zh) * 2021-06-10 2021-08-13 苏州浪潮智能科技有限公司 消除独立磁盘冗余阵列异常的方法、系统、设备及介质
US11604611B2 (en) * 2021-06-14 2023-03-14 EMC IP Holding Company LLC Variable sparing of disk drives in storage array
CN113656208B (zh) * 2021-08-17 2023-06-16 北京神州新桥科技有限公司 分布式存储系统数据处理方法、装置、设备及存储介质

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101625627A (zh) * 2009-08-05 2010-01-13 成都市华为赛门铁克科技有限公司 写入数据的方法、磁盘冗余阵列的控制器及磁盘冗余阵列
US20100161898A1 (en) * 2008-12-19 2010-06-24 Sunny Koul Method for preserving data integrity by breaking the redundant array of independent disks level 1(raid1)
CN102053801A (zh) * 2010-12-29 2011-05-11 成都市华为赛门铁克科技有限公司 一种磁盘热备方法及装置、存储系统
CN103019618A (zh) * 2012-11-29 2013-04-03 浪潮电子信息产业股份有限公司 一种多控器间的全局热备方法

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5666512A (en) * 1995-02-10 1997-09-09 Hewlett-Packard Company Disk array having hot spare resources and methods for using hot spare resources to store user data
JP4842334B2 (ja) * 2009-02-12 2011-12-21 富士通株式会社 ディスクアレイ制御装置
US8086893B1 (en) * 2009-07-31 2011-12-27 Netapp, Inc. High performance pooled hot spares
CN105843557B (zh) * 2016-03-24 2019-03-08 天津书生云科技有限公司 冗余存储系统、冗余存储方法和冗余存储装置
US8959389B2 (en) * 2011-11-23 2015-02-17 International Business Machines Corporation Use of a virtual drive as a hot spare for a raid group
CN103246478B (zh) * 2012-02-08 2015-11-25 北京同有飞骥科技股份有限公司 一种基于软raid支持无分组式全局热备盘的磁盘阵列系统
US20140115579A1 (en) * 2012-10-19 2014-04-24 Jonathan Kong Datacenter storage system
US9372752B2 (en) * 2013-12-27 2016-06-21 Intel Corporation Assisted coherent shared memory
CN105335256B (zh) * 2014-08-15 2019-01-15 中国电信股份有限公司 在整机柜服务器中切换备份磁盘的方法、装置和系统

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100161898A1 (en) * 2008-12-19 2010-06-24 Sunny Koul Method for preserving data integrity by breaking the redundant array of independent disks level 1(raid1)
CN101625627A (zh) * 2009-08-05 2010-01-13 成都市华为赛门铁克科技有限公司 写入数据的方法、磁盘冗余阵列的控制器及磁盘冗余阵列
CN102053801A (zh) * 2010-12-29 2011-05-11 成都市华为赛门铁克科技有限公司 一种磁盘热备方法及装置、存储系统
CN103019618A (zh) * 2012-11-29 2013-04-03 浪潮电子信息产业股份有限公司 一种多控器间的全局热备方法

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113764025A (zh) * 2020-06-30 2021-12-07 北京沃东天骏信息技术有限公司 一种故障磁盘的处理方法和装置

Also Published As

Publication number Publication date
CN108153622A (zh) 2018-06-12
US20190220379A1 (en) 2019-07-18
CN108153622B (zh) 2021-08-31

Similar Documents

Publication Publication Date Title
WO2018103533A1 (zh) 一种故障处理的方法、装置和设备
US9733853B2 (en) Using foster slice strategies for increased power efficiency
US9690663B2 (en) Allocation of replica-sets in a storage cluster
JP5523468B2 (ja) 直接接続ストレージ・システムのためのアクティブ−アクティブ・フェイルオーバー
US9430011B2 (en) Systems and methods for determining the state of health of a capacitor module
US9223658B2 (en) Managing errors in a raid
US11221935B2 (en) Information processing system, information processing system management method, and program thereof
CN106557143B (zh) 用于数据存储设备的装置和方法
US20090265510A1 (en) Systems and Methods for Distributing Hot Spare Disks In Storage Arrays
US9529674B2 (en) Storage device management of unrecoverable logical block addresses for RAID data regeneration
US11237929B2 (en) Method and apparatus, and readable storage medium
US20070050544A1 (en) System and method for storage rebuild management
WO2015058542A1 (zh) 独立冗余磁盘阵列的重构方法及装置
TWI773152B (zh) 伺服器與應用於伺服器的控制方法
US9047247B2 (en) Storage system and data processing method
TW201939506A (zh) 儲存系統
WO2016112824A1 (zh) 存储的处理方法、装置和存储设备
US10915405B2 (en) Methods for handling storage element failures to reduce storage device failure rates and devices thereof
US20230325285A1 (en) Storage host retirement and rollback
CN111290702B (zh) 一种控制设备的切换方法、控制设备、及存储系统
US9336102B2 (en) Systems and methods for preventing input/output performance decrease after disk failure in a distributed file system
WO2020113875A1 (zh) 一种控制设备的切换方法、控制设备、及存储系统
US11853163B2 (en) Selective rebuild of interrupted devices in data storage device arrays
US11366618B2 (en) All flash array server and control method thereof
US11467930B2 (en) Distributed failover of a back-end storage director

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17879415

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17879415

Country of ref document: EP

Kind code of ref document: A1