CN111857577A - Method and device for managing physical hard disk in distributed storage system - Google Patents


Info

Publication number
CN111857577A
CN111857577A (application CN202010604671.4A)
Authority
CN
China
Prior art keywords
osd
hard disk
event
equipment
disk
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010604671.4A
Other languages
Chinese (zh)
Other versions
CN111857577B (en)
Inventor
李玉冰
李庆林
蓝海
张书东
Current Assignee
Fiberhome Telecommunication Technologies Co Ltd
Original Assignee
Fiberhome Telecommunication Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Fiberhome Telecommunication Technologies Co Ltd filed Critical Fiberhome Telecommunication Technologies Co Ltd
Priority to CN202010604671.4A
Publication of CN111857577A
Application granted
Publication of CN111857577B
Legal status: Active

Classifications

    • G06F 3/0614: Improving the reliability of storage systems
    • G06F 11/1458: Management of the backup or restore process
    • G06F 11/2094: Redundant storage or storage space
    • G06F 3/0619: Improving the reliability of storage systems in relation to data integrity, e.g. data losses, bit errors
    • G06F 3/0631: Configuration or reconfiguration of storage systems by allocating resources to storage systems
    • G06F 3/0644: Management of space entities, e.g. partitions, extents, pools
    • G06F 3/0665: Virtualisation aspects at area level, e.g. provisioning of virtual or logical volumes
    • G06F 3/067: Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
    • G06F 3/0685: Hybrid storage combining heterogeneous device types, e.g. hierarchical storage, hybrid arrays

Abstract

The invention discloses a method for managing a physical hard disk in a distributed storage system, comprising the following steps: creating an OSD through the distributed storage management platform, and writing locating information such as the OSD ID and OSD cluster identifier of the OSD device in the distributed storage system, together with the logical information of the LVM logical volume group and physical volume group, into the physical hard disk through the LVM log; acquiring disk-pull and disk-insertion events by monitoring UDEV events of hardware devices in the kernel, obtaining the ESN, WWN and disk identifier of the pulled or inserted physical hard disk, finding the OSD ID and OSD cluster identifier of the corresponding OSD device in the distributed storage system, and modifying the running state of the OSD device in the distributed storage cluster through its OSD ID and OSD cluster identifier. By monitoring Linux kernel UDEV events, the invention can trigger state updates of the OSD devices of the distributed storage system in real time, and realizes automatic isolation and recovery of a faulty hard disk. The invention also provides a corresponding apparatus for managing a physical hard disk in a distributed storage system.

Description

Method and device for managing physical hard disk in distributed storage system
Technical Field
The invention belongs to the technical field of distributed storage hard disk management, and particularly relates to a method and a device for managing a physical hard disk in a distributed storage system.
Background
With the continuous development of social informatization, huge volumes of data are generated in many application fields, particularly in the video industry, and the data volume is still increasing sharply. To increase the speed of data access and storage scalability, large amounts of data are typically stored distributed across multiple nodes in different data centers to support parallel access. In a cloud computing environment, the organization and management of mass data storage imposes higher requirements in terms of scalability, fault tolerance and cost control, and the ease of use and scalability of storage are also becoming more and more important. Distributed storage is therefore increasingly widely used, and the stability of a distributed storage system has become an important factor in evaluating a storage system. Since the storage devices in a distributed storage system are composed of a large number of physical hard disks in servers, the state of the physical hard disks becomes an important factor determining the stability of the distributed storage system.
In a distributed storage system, data is generally stored on multiple nodes and multiple mechanical hard disks in order to improve the reliability of storage. Since the storage performance of a group of mechanical hard disks is limited by the worst-performing disk, a solid-state drive is commonly combined with a mechanical hard disk to create an OSD, either directly or after the two are first aggregated into a DM device. When an OSD is created on a DM device such as an LVM or FLCD volume, a physical hard disk failure greatly affects the upper-layer services using the distributed storage system, and may even cause service interruption, seriously affecting service stability. To limit the impact of hard disk failures, the OSD state in the distributed storage system must be acquired and updated in a timely manner, and the DM and LVM information left behind by an offline hard disk must be cleared; when a physical hard disk carrying a created OSD is inserted into the system, the storage system must promptly recover the lost DM and LVM information and add the OSD back into the distributed storage system, so that the distributed storage system keeps running healthily and stably.
Disclosure of Invention
This patent provides a method for managing a physical hard disk in a distributed storage system which, by acquiring hardware device events in the Linux system, realizes automatic clearing and recovery of LVM and DM device information in the distributed storage system, automatic isolation of a faulty hard disk, and automatic recovery of a lost hard disk.
To achieve the above object, according to an aspect of the present invention, there is provided a method for managing a physical hard disk in a distributed storage system, including:
S1, creating an OSD through the distributed storage management platform, and writing locating information such as the OSD ID and OSD cluster identifier of the OSD device in the distributed storage system, together with the logical information of the LVM logical volume group and physical volume group, into the physical hard disk through the LVM log;
S2, acquiring disk-pull or disk-insertion events by monitoring UDEV events of hardware devices in the kernel, obtaining the ESN, hard disk WWN and disk identifier of the pulled or inserted physical hard disk, finding the OSD ID and OSD cluster identifier of the corresponding OSD device in the distributed storage system, and then modifying the running state of the OSD device in the distributed storage cluster through its OSD ID and OSD cluster identifier.
In an embodiment of the present invention, the step S1 includes:
S11, creating Device-mapper devices from different device combinations according to service requirements, and writing the WWN and ESN of the physical hard disks into the corresponding device configuration when creating a Device-mapper device;
S12, adding the created Device-mapper device or the physical hard disk into a physical volume group in the LVM;
S13, adding different physical volume groups into different LVM logical volume groups according to service requirements;
S14, creating the logical volume devices corresponding to the service from the LVM logical volume group, including an OSD data disk, an OSD metadata disk and an OSD log disk, and writing the created OSD ID, the unique OSD cluster identifier and the volume group information of the logical volume into the physical hard disk or DM device through the LVM log information;
S15, formatting and mounting the created logical volume device; when the logical volume device is formatted, the OSD ID, OSD cluster identifier, OSD data disk path, OSD metadata disk path and OSD log disk path of the OSD device are written into the formatted logical volume device;
S16, adding the created OSD device into the distributed storage system, starting it, and modifying the running state of the OSD device in the cluster to the started state.
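Step S14 hinges on persisting the OSD's locating information on the disk itself so it survives a pull and re-insertion. A minimal sketch of how such information could be encoded as LVM tag strings (the tag names `osd.id`, `osd.cluster_fsid` and so on are hypothetical, not from the patent, which only says the data is written through the LVM log):

```python
# Hypothetical sketch: encode OSD locating information as LVM tag strings
# that could be attached to a logical volume (e.g. with `lvchange --addtag`)
# so the OSD can be re-identified after the disk is pulled and re-inserted.
# All tag names here are illustrative, not taken from the patent.

def build_osd_tags(osd_id, cluster_fsid, vg_name, lv_role):
    """Return tag strings recording OSD ID, cluster identifier and volume
    group for one of the three OSD logical volumes (S14)."""
    if lv_role not in ("data", "metadata", "journal"):
        raise ValueError("unknown OSD disk role: %s" % lv_role)
    return [
        "osd.id=%d" % osd_id,                  # unique OSD ID in the cluster
        "osd.cluster_fsid=%s" % cluster_fsid,  # OSD cluster identifier
        "osd.vg=%s" % vg_name,                 # owning LVM volume group
        "osd.role=%s" % lv_role,               # data / metadata / journal disk
    ]

tags = build_osd_tags(7, "0f5c0c1e-demo", "vg_osd7", "data")
```

Because the tags live in the LVM metadata area on the physical disk, they travel with the disk rather than with the (unstable) drive letter.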
In an embodiment of the present invention, the disk-pull event in step S2 is processed as follows:
monitoring disk-pull events in the Linux system by using the Linux UDEV event interface;
acquiring the disk identifier of the pulled hard disk through the disk-pull event, and finding the relevant OSD device in the distributed storage cluster through the disk identifier of the pulled physical hard disk;
processing and isolating the OSD device information of the found OSD device corresponding to the pulled hard disk.
In an embodiment of the present invention, acquiring the disk identifier of the pulled hard disk through the disk-pull event and finding the relevant OSD device in the distributed storage cluster through that identifier includes:
S216, acquiring the disk identifier, ESN and WWN of the pulled hard disk through the disk-pull event;
S217, searching, through the mount points of the OSD devices in the Linux system, the absolute paths of the storage devices used by all OSD devices in the current system and the OSD IDs of the corresponding OSD devices;
S218, searching the major and minor device numbers of the OSD device in Linux according to the obtained absolute path of the OSD device;
S219, judging from the major and minor device numbers whether the device has sub-devices; if so, going to S218 to continue searching the major and minor device numbers of the sub-devices, and if not, going to S220;
S220, acquiring the disk identifier of the sub-device in Linux;
S221, comparing the disk identifier of the sub-device acquired in S220 with the disk identifier of the pulled hard disk acquired in S216; if they match, the OSD device corresponding to the pulled hard disk is determined; otherwise, going to S217 to continue searching the next OSD device mount point.
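The descent of S217 to S221 amounts to a recursion over the block-device tree. On a real Linux system the components of a stacked device are listed under /sys/dev/block/&lt;major:minor&gt;/slaves/; in this self-contained sketch a plain dict stands in for that sysfs hierarchy:

```python
# Sketch of the search loop S217-S221: starting from the block device an OSD
# mounts, descend through device-mapper layers to the leaf disks and compare
# them with the pulled disk. A dict simulates the sysfs "slaves" hierarchy.

def leaf_disks(device, slaves):
    """Recursively resolve a possibly stacked device to its physical disks."""
    children = slaves.get(device, [])
    if not children:              # S219: no sub-device, so this is a leaf disk
        return {device}
    found = set()
    for child in children:        # S218: keep descending through sub-devices
        found |= leaf_disks(child, slaves)
    return found

# dm-0 is an LVM volume on dm-1 (an aggregation device over sda and sdb)
sysfs = {"dm-0": ["dm-1"], "dm-1": ["sda", "sdb"]}
assert leaf_disks("dm-0", sysfs) == {"sda", "sdb"}
# S221: the pulled disk "sda" appears among the leaves, so dm-0's OSD matches
assert "sda" in leaf_disks("dm-0", sysfs)
```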
In an embodiment of the present invention, the processing and isolating OSD device information according to the found OSD device corresponding to the pulled hard disk includes:
S222, updating the state of the OSD device corresponding to the pulled hard disk in the distributed storage system to the fault state, stopping all services of that OSD device, modifying its state to the unhealthy state, and isolating it;
S223, unmounting the mount point of the OSD device corresponding to the pulled hard disk in Linux, and releasing that OSD device;
S224, deleting the LVM information left behind in the Linux system by the pulled hard disk according to the corresponding OSD device, updating the LVM configuration, and deleting the logical volume, logical volume group and physical volume information of the pulled hard disk;
S225, deleting the mapping-device path information left behind in the Linux system by the matched physical hard disk and DM device, updating the FLCD configuration, and deleting the path information as well as the ESN and WWN information of the pulled hard disk from the FLCD configuration.
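As a rough illustration only, the isolation sequence S222 to S225 can be thought of as an ordered command plan. Every name below is an assumption: the patent does not name a specific storage platform, so Ceph-style CLI calls stand in for "stop and isolate the OSD", the mount point and volume-group names are invented, and the FLCD configuration update is left abstract because that tooling is not public:

```python
# Illustrative command plan for S222-S225. Every name below (OSD id, volume
# group, mount point, DM name) is hypothetical; a real implementation would
# invoke its own platform's management commands and also update FLCD config.

def isolation_plan(osd_id, vg, mountpoint, dm_name):
    return [
        "ceph osd down osd.%d" % osd_id,   # S222: mark the OSD faulty
        "ceph osd out osd.%d" % osd_id,    # S222: isolate it from the cluster
        "umount %s" % mountpoint,          # S223: release the mount point
        "vgremove -f %s" % vg,             # S224: clear leftover LVM metadata
        "dmsetup remove %s" % dm_name,     # S225: clear the stale DM mapping
    ]

plan = isolation_plan(3, "vg_osd3", "/var/lib/osd/osd-3", "vg_osd3-data")
```

The ordering matters: the mount point must be released (S223) before the LVM metadata underneath it can be removed (S224).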
In an embodiment of the present invention, the disk-insertion event in step S2 is processed as follows:
monitoring disk-insertion events in the Linux system by using the Linux UDEV event interface;
acquiring the ESN and WWN of the inserted physical hard disk through the disk-insertion event, and finding the relevant OSD device in the distributed storage cluster through them to obtain the OSD device information;
processing the OSD device using the obtained OSD device information.
In an embodiment of the present invention, acquiring the ESN and WWN of the inserted physical hard disk through the disk-insertion event, and finding the relevant OSD device in the distributed storage cluster through them to obtain the OSD device information, includes:
S236, acquiring the ESN and WWN of the inserted hard disk through the disk-insertion event;
S237, acquiring the ESN and WWN of a hard disk already present in the Linux system;
S238, reading the aggregation-device configuration, and judging whether the ESN and WWN of the existing hard disk obtained in S237 match those of the inserted hard disk obtained in S236; if they match, the aggregation device exists, so going to S240; otherwise going to S239;
S239, traversing all hard disks of the Linux system; if a further hard disk exists, going to S237 to continue examining it; otherwise going to S241;
S240, building the hard disk aggregation device according to the matched ESN and WWN of the aggregation device, and updating the information of the LVM physical volumes, logical volume groups and logical volumes;
S241, reading the LVM configuration of the aggregation device built in S240, or of an inserted physical hard disk that matched no aggregation device in S238-S239, loading the information of the LVM physical volumes, logical volume groups and logical volumes, and updating the LVM log information;
S242, acquiring the log information of the LVM device loaded in S241;
S243, judging from the loaded LVM log information whether OSD information of the distributed storage cluster exists, and going to S244 if it does;
S244, acquiring the OSD ID and OSD cluster identifier of the OSD device from the LVM log information obtained in S242.
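If the locating information was written to the disk as key=value entries, steps S242 to S244 reduce to parsing them back out. The `osd.id` and `osd.cluster_fsid` key names are hypothetical; the patent only states that the data is stored in the LVM log information:

```python
# Sketch of S242-S244: recover the OSD ID and cluster identifier from
# key=value entries read back out of the LVM log/tag information. The
# "osd.id" / "osd.cluster_fsid" key names are illustrative assumptions.

def parse_osd_info(entries):
    """Return (osd_id, cluster_fsid), or None if the volume carries no
    OSD information of the distributed cluster (the S243 check)."""
    info = dict(e.split("=", 1) for e in entries if "=" in e)
    if "osd.id" in info and "osd.cluster_fsid" in info:
        return int(info["osd.id"]), info["osd.cluster_fsid"]
    return None

assert parse_osd_info(["osd.id=7", "osd.cluster_fsid=abc"]) == (7, "abc")
assert parse_osd_info(["unrelated=tag"]) is None   # not an OSD of this cluster
```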
In an embodiment of the present invention, processing the OSD device using the obtained OSD device information includes:
S245, mounting the OSD device through the obtained OSD ID and OSD cluster identifier;
S246, starting the OSD device service, modifying the state of the OSD device in the distributed storage system to the running state, adding the OSD device into the distributed storage cluster, and modifying its state to the healthy state.
According to another aspect of the present invention, there is also provided a management apparatus for a physical hard disk in a distributed storage system, including an OSD device creating module and a plug-in/pull-out event processing module, wherein:
The OSD device creating module is used for creating an OSD through the distributed storage management platform, and writing locating information such as the OSD ID and OSD cluster identifier of the OSD device in the distributed storage system, together with the logical information of the LVM logical volume group and physical volume group, into the physical hard disk through the LVM log;
the plug-in/pull-out event processing module is used for acquiring disk-pull or disk-insertion events by monitoring UDEV events of hardware devices in the kernel, obtaining the ESN, hard disk WWN and disk identifier of the pulled or inserted physical hard disk, finding the OSD ID and OSD cluster identifier of the corresponding OSD device in the distributed storage system, and then modifying the running state of the OSD device in the distributed storage cluster through its OSD ID and OSD cluster identifier.
In an embodiment of the present invention, the plug-in/pull-out event processing module includes a disk-pull event processing submodule and a disk-insertion event processing submodule, where:
the disk-pull event processing submodule comprises a disk-pull event monitoring subunit, a disk-pull event OSD device searching subunit and a disk-pull event OSD device processing subunit, wherein:
the disk-pull event monitoring subunit is used for monitoring disk-pull events in the Linux system by using the Linux UDEV event interface;
the disk-pull event OSD device searching subunit is used for acquiring the disk identifier of the pulled hard disk through the disk-pull event, and finding the relevant OSD device in the distributed storage cluster through the disk identifier of the pulled physical hard disk;
the disk-pull event OSD device processing subunit is used for processing and isolating the OSD device information of the found OSD device corresponding to the pulled hard disk;
the disk-insertion event processing submodule comprises a disk-insertion event monitoring subunit, a disk-insertion event OSD device searching subunit and a disk-insertion event OSD device processing subunit, wherein:
the disk-insertion event monitoring subunit is used for monitoring disk-insertion events in the Linux system by using the Linux UDEV event interface;
the disk-insertion event OSD device searching subunit is used for acquiring the ESN and WWN of the inserted physical hard disk through the disk-insertion event, and finding the relevant OSD device in the distributed storage cluster through them to obtain the OSD device information;
the disk-insertion event OSD device processing subunit is used for processing the OSD device using the obtained OSD device information.
Generally, compared with the prior art, the technical scheme of the invention has the following beneficial effects:
(1) by the technical scheme of the invention, monitoring of Linux kernel UDEV events can trigger state updates of the OSD devices of the distributed storage system in real time and realize automatic isolation and recovery of a faulty hard disk, reducing the influence of hard disk faults on the distributed storage system and in particular solving the problem of read-write interruption of upper-layer services caused by a hard disk fault;
(2) by the technical scheme of the invention, processing the sub-devices of a hard disk device in the Linux system yields the correspondence between the faulty hard disk and the Device-mapper device, so that the fault residue can be cleared, solving the problems of system resources being occupied by leftover hard disk information and of DM information confusion caused by the loss of a hard disk;
(3) by the technical scheme of the invention, through FLCD configuration processing and maintenance of the LVM log information, the building of the aggregation device and the recovery of the OSD device can be completed automatically after a faulty hard disk recovers, ensuring healthy and stable operation of the distributed cluster;
(4) by the technical scheme of the invention, maintaining the ESN and WWN information of the hard disks allows the correspondence between a physical hard disk and an OSD to be found, avoiding the OSD device information confusion in the distributed storage system caused by drive-letter drift of hard disks.
Drawings
FIG. 1 is a flowchart illustrating a method for managing physical hard disks in a distributed storage system according to an embodiment of the present invention;
FIG. 2 is a schematic diagram illustrating a principle of creating an OSD device according to an embodiment of the invention;
FIG. 3 is a schematic flowchart illustrating a process of creating an OSD device according to an embodiment of the invention;
FIG. 4 is a schematic diagram illustrating a mapping relationship between a DM device and a physical hard disk according to an embodiment of the present invention;
FIG. 5 is a flowchart illustrating a method for handling a unplug event according to an embodiment of the present invention;
FIG. 6 is a flowchart illustrating a method for processing a plug-in event according to an embodiment of the present invention;
FIG. 7 is a schematic structural diagram of a management apparatus for physical hard disks in a distributed storage system according to an embodiment of the present invention;
FIG. 8 is a schematic structural diagram of the plug-in/pull-out event processing module according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. In addition, the technical features involved in the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
The terms of art involved in this invention are first defined:
OSD (Object Storage Device): the storage device of the distributed storage system, the basic unit that handles storage, replication, recovery, backfill and rebalancing of data;
OSD ID (Object Storage Device Identification): the device number of an OSD, unique within the distributed storage system;
hard disk ESN (Electronic Serial Number): the globally unique identification code of a physical hard disk;
DM (Device Mapper): the mapping framework for logical volumes in the Linux system;
LVM (Logical Volume Manager): the virtual device driver for hard disk management in the Linux system; it is free from physical device limitations and allows volume sizes to be adjusted dynamically;
FLCD (Flashcache Device): software based on the Device-Mapper framework used to aggregate multiple hard disks of different media into one device;
UDEV (Userspace Device): the device manager in the Linux system; when a hardware device changes, the kernel event can be captured;
hard disk WWN (World Wide Name): the unique identification of a physical hard disk in the Linux system;
RAID (Redundant Array of Independent Disks).
In this invention, "physical hard disk" is a general term covering both mechanical hard disks and solid-state drives.
Since the LVM information on a hard disk is lost when the physical hard disk is pulled out, it is necessary, through UDEV event monitoring of hardware devices in the kernel and the ESN and disk identifier obtained when a hard disk is pulled out or inserted, to find the locating information (OSD ID and OSD cluster identifier) of the corresponding OSD device in the distributed storage system, and then to update the running state (running, failed) and health state (healthy, error) of the OSD device in time through that locating information. The disk identifier (drive letter) is generated when the hard disk is inserted into the system and changes with repeated insertion and removal of the hard disk, so it is unreliable; keying on the ESN and WWN avoids the influence of drive-letter drift. From the hard disk information in the Linux system, the following facts hold:
(1) the ESN of a physical hard disk is globally unique, and the WWN of a hard disk is unique within a Linux system and does not change when the drive letter changes;
(2) DM devices are generated from physical hard disks, and the combined-device information of a hard disk is not lost when the hard disk is pulled out;
(3) the hard disk mount point in the Linux system is not lost when the hard disk is pulled out;
(4) the hard disk devices used by the OSD data disk, log disk and metadata disk can be found under the mount directory of the OSD.
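Fact (1) above relies on reading the per-disk WWN and serial number (ESN) from the Linux system. A minimal sketch of that lookup is below, assuming output in the three-column form of `lsblk -o NAME,WWN,SERIAL -dn`; the sample values are hypothetical, and a production tool would prefer `lsblk --json` for robust parsing.

```python
# Sketch: recover per-disk WWN and serial (ESN) by parsing lsblk-style
# output. Column layout and sample values are illustrative assumptions.

def parse_lsblk(output: str) -> dict:
    """Map device name -> (wwn, serial)."""
    disks = {}
    for line in output.strip().splitlines():
        parts = line.split()
        if len(parts) == 3:          # NAME WWN SERIAL
            name, wwn, serial = parts
            disks[name] = (wwn, serial)
    return disks

sample = """\
sda 0x5000c500a1b2c3d4 ZA1234AB
sdb 0x5000c500d4c3b2a1 ZB5678CD
"""
disks = parse_lsblk(sample)
```

Unlike the drive letters `sda`/`sdb`, the WWN and serial columns remain stable across removal and re-insertion, which is what makes them usable as lookup keys.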
Example 1
As shown in fig. 1, the present invention provides a method for managing a physical hard disk in a distributed storage system, including:
S1, creating an OSD through the distributed storage management platform, and writing the positioning information of the OSD device in the distributed storage system, such as the OSD ID and OSD cluster identifier, together with the logical information of the LVM logical volume group and physical volume group, into the physical hard disk through the LVM log;
S2, acquiring the ESN, hard disk WWN and drive letter of the pulled or inserted physical hard disk by monitoring UDEV events of hardware devices in the kernel, finding the OSD ID and OSD cluster identifier of the corresponding OSD device in the distributed storage system, and then modifying the running state of the OSD device in the distributed storage cluster through the OSD ID and OSD cluster identifier.
Example 2
Fig. 2 is a schematic diagram illustrating the principle of creating an OSD device in an embodiment of the present invention. When an OSD device in the distributed storage system is created, different device types may be selected, for example a mechanical hard disk, a solid state drive, or combined devices of the two such as DM and FLCD. Fig. 3 is a flowchart of creating an OSD device according to an embodiment of the present invention; with reference to figs. 2 and 3, the process of creating an OSD device includes:
S11, creating a Device-mapper device from different device combinations according to service requirements; when the Device-mapper device is created, the WWN and ESN of the physical hard disks are written into the corresponding device configuration, such as the FLCD configuration;
S12, adding the created Device-mapper device or the physical hard disk into a physical volume group in the LVM;
S13, adding different physical volume groups into different LVM logical volume groups according to service requirements;
S14, creating, according to the LVM logical volume group, the logical volume devices corresponding to the service, including an OSD data disk, an OSD metadata disk and an OSD log disk, and writing the created OSD ID, the unique OSD cluster identifier and the volume group information of the logical volumes into the physical hard disk or DM device through the LVM log information;
S15, formatting and mounting the created logical volume devices; when a logical volume device is formatted, the OSD ID, OSD cluster identifier, OSD data disk path, OSD metadata disk path and OSD log disk path of the OSD device are written into the formatted logical volume device;
S16, adding the created OSD device into the distributed storage system, starting the OSD device, and modifying the running state of the OSD device in the cluster to the started state.
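The creation flow above can be sketched as an ordered command plan using the standard LVM tools (`pvcreate`, `vgcreate`, `lvcreate`). The device path, volume-group name, filesystem choice, mount directory and metadata record below are hypothetical illustrations of steps S12-S15, not the patent's actual implementation, which writes the positioning information through the LVM log.

```python
# Sketch: build the ordered command list for creating one OSD, plus the
# positioning record that would accompany it. All names are hypothetical.

def plan_osd_create(pv_dev, vg, osd_id, cluster_id):
    lv_data, lv_meta, lv_log = "osd_data", "osd_meta", "osd_log"
    cmds = [
        ["pvcreate", pv_dev],          # S12: device becomes a physical volume
        ["vgcreate", vg, pv_dev],      # S13: physical volume joins a volume group
    ]
    for lv in (lv_data, lv_meta, lv_log):
        cmds.append(["lvcreate", "-n", lv, vg])   # S14: data/metadata/log volumes
    cmds.append(["mkfs.xfs", f"/dev/{vg}/{lv_data}"])                     # S15: format
    cmds.append(["mount", f"/dev/{vg}/{lv_data}", f"/var/osd/{osd_id}"])  # S15: mount
    # S14/S15: positioning information kept with the device
    meta = {"osd_id": osd_id, "cluster": cluster_id,
            "data": f"/dev/{vg}/{lv_data}", "meta": f"/dev/{vg}/{lv_meta}",
            "log": f"/dev/{vg}/{lv_log}"}
    return cmds, meta

cmds, meta = plan_osd_create("/dev/sdb", "vg_osd0", 0, "cluster-a")
```

Because the OSD ID and cluster identifier travel with the volume-group metadata rather than with the drive letter, the record survives a pull-and-reinsert cycle.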
Example 3
Fig. 4 shows the mapping relationship between a DM device and physical hard disks in an embodiment of the present invention. In the Linux system, a Device-mapper device filters or redirects service read/write requests through modular Target Driver plug-ins in the kernel; currently implemented Target Driver plug-ins include soft RAID, logical volume striping, multipath, fast cache, mirroring, snapshot and the like. A Device-mapper device is composed of one or more block devices, which may be physical hard disks, hard disk partitions, or further Device-mapper devices; that is, Device-mapper devices can be nested iteratively. As shown in fig. 4, all the physical hard disks related to Device-mapper device 1 may be acquired as follows:
(1) searching the Linux system for the major and minor device numbers of Device-mapper 1;
(2) through the major and minor device numbers, the child devices of Device-mapper 1, namely Device-mapper 2, physical hard disk sdb and physical hard disk sdc, can be obtained at the Target Driver layer;
(3) searching the Linux system for the major and minor device numbers of Device-mapper 2;
(4) through the major and minor device numbers, the child devices of Device-mapper 2, namely Device-mapper 3 and physical hard disk sda, can be obtained at the Target Driver layer;
(5) by analogy, the child devices of Device-mapper 3 are acquired in the same way, until only physical hard disks remain.
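The iterative descent above is a straightforward recursion. The sketch below models the fig. 4 topology as a parent-to-children dict (in the real system each lookup goes through the major:minor device numbers at the Target Driver layer); Device-mapper 3's child `sdd` is a hypothetical placeholder, since fig. 4 leaves it unspecified.

```python
# Sketch: resolve a nested Device-mapper device to its set of physical
# hard disks. Leaves of the tree (no children recorded) are physical disks.

def physical_disks(dev, children):
    kids = children.get(dev)
    if not kids:                      # leaf: a physical hard disk
        return {dev}
    found = set()
    for child in kids:                # DM devices may nest, so recurse
        found |= physical_disks(child, children)
    return found

# Topology from fig. 4 (sdd under dm3 is an assumed example):
tree = {"dm1": ["dm2", "sdb", "sdc"],
        "dm2": ["dm3", "sda"],
        "dm3": ["sdd"]}
```

Termination is guaranteed as long as the device graph is acyclic, which holds for Device-mapper stacks since a device cannot contain itself.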
Example 4
Fig. 5 is a flowchart of a method for handling a disk-pull event in an embodiment of the present invention. The handling of a disk-pull event includes:
(1) monitoring disk-pull events in the Linux system by using the Linux UDEV event interface, with the following steps:
S211, monitoring hardware device events in the Linux system using the Linux UDEV event interface; if the kernel triggers an event, go to S212;
S212, filtering the kernel event; if it is a disk-pull event, go to S213; otherwise go to S211 and continue to monitor kernel events;
S213, putting the disk-pull event into the event queue, and returning to S211 to continue monitoring kernel events;
S214, acquiring a queue element from the event queue;
S215, judging whether the event queue is empty; if the queue is not empty, go to S216; otherwise return to S214 and continue to acquire queue elements;
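The monitor-filter-queue loop of steps S211-S215 can be sketched as a pure function over event records. The dicts below mimic udev properties (`ACTION`, `SUBSYSTEM`, `DEVNAME`); in a real implementation these would be read from a netlink udev monitor (e.g. via the pyudev library), which is omitted here so the filtering logic stays self-contained and testable.

```python
# Sketch of S212-S213: keep only block-device removal ("disk-pull")
# events and append them to the event queue.
from collections import deque

def enqueue_removals(events, queue):
    for ev in events:
        if ev.get("ACTION") == "remove" and ev.get("SUBSYSTEM") == "block":
            queue.append(ev)          # S213: queue the disk-pull event

q = deque()
enqueue_removals(
    [{"ACTION": "add",    "SUBSYSTEM": "block", "DEVNAME": "/dev/sdb"},
     {"ACTION": "remove", "SUBSYSTEM": "block", "DEVNAME": "/dev/sdc"},
     {"ACTION": "remove", "SUBSYSTEM": "usb"}],
    q)
```

Queuing decouples the fast kernel-event stream from the slower OSD lookup and isolation work that follows in S216 onward.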
(2) acquiring the drive letter of the pulled hard disk through the disk-pull event, and finding the related OSD device in the distributed storage cluster through the drive letter of the pulled physical hard disk, with the following steps:
S216, acquiring the drive letter, ESN and WWN of the pulled hard disk from the disk-pull event;
S217, searching, through the mount points of the OSD devices in the Linux system, the absolute paths of the storage devices (data disk, log disk and metadata disk) used by all OSD devices in the current system and the OSD IDs of the corresponding OSD devices;
S218, searching the major and minor device numbers of the OSD device in Linux according to the obtained absolute path of the OSD device;
S219, judging, according to the major and minor device numbers, whether the device has child devices; if so, go to S218 and continue to search the major and minor device numbers of the child devices; if not, go to S220;
S220, acquiring the drive letter of the child device in Linux;
S221, comparing the drive letter of the child device obtained in S220 with the drive letter of the pulled hard disk obtained in S216; if they match, the OSD device corresponding to the pulled hard disk is determined, go to S222; otherwise go to S217 and continue to search the next OSD device mount point;
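Steps S216-S221 amount to: for each OSD's backing device, descend through child devices until drive letters are reached, and report the OSD whose leaves include the pulled drive letter. The sketch below uses a plain dict where the real lookup walks major:minor numbers; all device and OSD names are hypothetical.

```python
# Sketch of S217-S221: map a pulled drive letter back to its OSD.

def leaves(dev, children):
    kids = children.get(dev)
    if not kids:
        return {dev}                  # S220: a leaf is a drive letter
    out = set()
    for c in kids:                    # S218-S219: descend into child devices
        out |= leaves(c, children)
    return out

def osd_for_pulled_disk(pulled, osd_devices, children):
    for osd_id, dev in osd_devices.items():   # S217: iterate OSD mount points
        if pulled in leaves(dev, children):   # S221: compare drive letters
            return osd_id
    return None                               # no OSD uses the pulled disk

tree = {"dm-osd0": ["sda", "sdb"], "dm-osd1": ["sdc"]}
osds = {0: "dm-osd0", 1: "dm-osd1"}
```

A `None` result means the pulled disk was not backing any OSD, so no isolation is needed.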
(3) processing and isolating the OSD device information through the found OSD device corresponding to the pulled hard disk, with the following steps:
S222, updating the state of the OSD device corresponding to the pulled hard disk in the distributed storage system to the failure state, stopping all services of that OSD device, modifying its health state to unhealthy, and isolating it;
The state of an OSD device in the distributed storage system includes the running state (running, failure) of the OSD device's service in the distributed storage cluster, and the health state (health, error) of the OSD device in the distributed storage cluster;
S223, unmounting the mount point of the OSD device corresponding to the pulled hard disk in Linux, and releasing that OSD device;
S224, deleting, according to the OSD devices (data device, metadata device and log device) corresponding to the pulled hard disk, the LVM information left in the Linux system by the pulled hard disk, updating the LVM configuration, and deleting the logical volumes and constituent physical volume groups of the pulled hard disk;
S225, deleting the path information of the mapped logical device left in the Linux system by the physical hard disk and the DM device matched in S221, updating the FLCD configuration, deleting the path information and the ESN and WWN information of the pulled hard disk from the FLCD configuration, and going to S214 to acquire the next event queue element.
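The state transition in S222 uses the two state pairs defined above (running/failure and health/error). A minimal sketch, with a hypothetical record layout for the OSD entry:

```python
# Sketch of S222: mark an OSD as failed, unhealthy and isolated once its
# backing disk has been pulled. The dict layout is an assumption.

def isolate_osd(osd):
    osd["running_state"] = "failure"   # running -> failure: service stopped
    osd["health_state"] = "error"      # health -> error: marked unhealthy
    osd["isolated"] = True             # excluded from the cluster
    return osd

osd = {"osd_id": 2, "running_state": "running", "health_state": "health"}
isolate_osd(osd)
```

The symmetric transition back to running/health happens in S245-S246 when the disk is re-inserted and recognized.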
Example 5
Fig. 6 is a flowchart of a method for handling a disk-insertion event in an embodiment of the present invention. The handling of a disk-insertion event includes:
(1) monitoring disk-insertion events in the Linux system by using the Linux UDEV event interface, with the following steps:
S231, monitoring hardware device events in the Linux system using the Linux UDEV event interface; if the kernel triggers an event, go to S232;
S232, filtering the kernel event; if it is a disk-insertion event, go to S233; otherwise go to S231 and continue to monitor kernel events;
S233, putting the disk-insertion event from the Linux kernel into the event queue, and returning to S231 to continue monitoring kernel events;
S234, acquiring a queue element from the event queue;
S235, judging whether the event queue is empty; if the queue is not empty, go to S236; otherwise return to S234 and continue to acquire queue elements;
(2) acquiring the ESN and WWN of the inserted physical hard disk through the disk-insertion event, finding the related OSD device in the distributed storage cluster through the ESN and WWN of the inserted physical hard disk, and obtaining the OSD device information, with the following steps:
S236, acquiring the ESN and WWN of the inserted hard disk from the Linux kernel disk-insertion event;
S237, acquiring the ESN and WWN of a hard disk already existing in the Linux system;
S238, reading the aggregation device configuration (the combined ESN and WWN information), and judging whether the ESN and WWN of the inserted hard disk obtained in S236 and of the existing hard disk obtained in S237 match that configuration; if they match, it is determined that an aggregation device exists, go to S240; otherwise go to S239;
S239, traversing all hard disks of the Linux system; if a further hard disk exists, go to S237 and continue to acquire hard disks; otherwise go to S241;
S240, building the hard disk aggregation device according to the ESN and WWN of the matched hard disks of the aggregation device, and updating the information of the LVM physical volumes, logical volume groups and logical volumes;
S241, reading the LVM configuration of the aggregation device built in S240, or the LVM configuration of the inserted physical hard disk that did not match any aggregation device in S238-S239, loading the information of the LVM physical volumes, logical volume groups and logical volumes, and updating the LVM log information;
S242, acquiring the log information of the LVM device loaded in S241;
S243, judging, according to the log information of the loaded LVM device, whether OSD information of the distributed storage cluster exists; if so, go to S244; otherwise go to S234 and acquire a new disk-insertion event;
S244, obtaining the OSD ID and OSD cluster identifier of the OSD device from the LVM log information obtained in S242;
(3) processing the OSD device through the obtained OSD device information, with the following steps:
S245, mounting the OSD device through the obtained OSD ID and OSD cluster identifier;
S246, starting the OSD device service, modifying the state of the OSD device in the distributed storage system to the running state, adding the OSD device into the distributed storage cluster, and modifying its health state to healthy.
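The decision in S236-S244 can be sketched as: match the inserted disk's (ESN, WWN) pair against the saved aggregation (FLCD-style) configuration to decide whether to rebuild an aggregate device or load the LVM metadata directly, then recover the OSD identity from the persisted LVM-log-style records. The configuration set and metadata map below are hypothetical stand-ins for the on-disk information described above.

```python
# Sketch of S236-S244: classify an inserted disk and recover its OSD
# identity from saved metadata keyed by the stable (ESN, WWN) pair.

def handle_insert(disk, flcd_config, lvm_log):
    key = (disk["esn"], disk["wwn"])
    # S238-S240: a configured aggregation device must be rebuilt first
    action = "rebuild_aggregate" if key in flcd_config else "load_lvm_only"
    # S242-S244: OSD ID and cluster identifier from the LVM log records
    osd = lvm_log.get(key)
    return action, osd

config = {("ZA1234AB", "0x5000c500a1b2c3d4")}
log = {("ZA1234AB", "0x5000c500a1b2c3d4"): {"osd_id": 3, "cluster": "cluster-a"}}
action, osd = handle_insert(
    {"esn": "ZA1234AB", "wwn": "0x5000c500a1b2c3d4"}, config, log)
```

An `osd` of `None` corresponds to the S243 "no OSD information" branch, where the handler simply returns to the event queue.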
Example 6
As shown in fig. 7, an embodiment of the present invention provides a management apparatus for a physical hard disk in a distributed storage system, including an OSD device creation module and a disk plug/pull event processing module, wherein:
the OSD device creation module is used for creating an OSD through the distributed storage management platform and writing the positioning information of the OSD device in the distributed storage system, such as the OSD ID and OSD cluster identifier, together with the logical information of the LVM logical volume group and physical volume group, into the physical hard disk through the LVM log;
the disk plug/pull event processing module is used for acquiring a disk-pull event or disk-insertion event by monitoring UDEV events of hardware devices in the kernel, acquiring the ESN, hard disk WWN and drive letter of the pulled or inserted physical hard disk, finding the OSD ID and OSD cluster identifier of the corresponding OSD device in the distributed storage system, and then modifying the running state of the OSD device in the distributed storage cluster through the OSD ID and OSD cluster identifier.
Further, as shown in fig. 8, an embodiment of the present invention provides a schematic structural diagram of the disk plug/pull event processing module, which includes a disk-pull event processing submodule and a disk-insertion event processing submodule, wherein:
the disk-pull event processing submodule includes a disk-pull event monitoring subunit, a disk-pull event OSD device searching subunit and a disk-pull event OSD device processing subunit, wherein:
the disk-pull event monitoring subunit is used for monitoring disk-pull events in the Linux system by using the Linux UDEV event interface;
the disk-pull event OSD device searching subunit is used for acquiring the drive letter of the pulled hard disk through the disk-pull event and finding the related OSD device in the distributed storage cluster through the drive letter of the pulled physical hard disk;
the disk-pull event OSD device processing subunit is used for processing and isolating the OSD device information through the found OSD device corresponding to the pulled hard disk;
the disk-insertion event processing submodule includes a disk-insertion event monitoring subunit, a disk-insertion event OSD device searching subunit and a disk-insertion event OSD device processing subunit, wherein:
the disk-insertion event monitoring subunit is used for monitoring disk-insertion events in the Linux system by using the Linux UDEV event interface;
the disk-insertion event OSD device searching subunit is used for obtaining the ESN and WWN of the inserted physical hard disk through the disk-insertion event, finding the related OSD device in the distributed storage cluster through the ESN and WWN, and obtaining the OSD device information;
the disk-insertion event OSD device processing subunit is used for processing the OSD device through the obtained OSD device information.
It will be understood by those skilled in the art that the foregoing is only a preferred embodiment of the present invention, and is not intended to limit the invention, and that any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (10)

1. A management method of a physical hard disk under a distributed storage system is characterized by comprising the following steps:
S1, creating an OSD through the distributed storage management platform, and writing the positioning information of the OSD device in the distributed storage system, such as the OSD ID and OSD cluster identifier, together with the logical information of the LVM logical volume group and physical volume group, into the physical hard disk through the LVM log;
S2, acquiring a disk-pull event or disk-insertion event by monitoring UDEV events of hardware devices in the kernel, acquiring the ESN, hard disk WWN and drive letter of the pulled or inserted physical hard disk, finding the OSD ID and OSD cluster identifier of the corresponding OSD device in the distributed storage system, and then modifying the running state of the OSD device in the distributed storage cluster through the OSD ID and OSD cluster identifier.
2. The method for managing physical hard disks under a distributed storage system according to claim 1, wherein said step S1 includes:
S11, creating a Device-mapper device from different device combinations according to service requirements, and writing the WWN and ESN of the physical hard disks into the corresponding device configuration when the Device-mapper device is created;
S12, adding the created Device-mapper device or the physical hard disk into a physical volume group in the LVM;
S13, adding different physical volume groups into different LVM logical volume groups according to service requirements;
S14, creating, according to the LVM logical volume group, the logical volume devices corresponding to the service, including an OSD data disk, an OSD metadata disk and an OSD log disk, and writing the created OSD ID, the unique OSD cluster identifier and the volume group information of the logical volumes into the physical hard disk or DM device through the LVM log information;
S15, formatting and mounting the created logical volume devices; when a logical volume device is formatted, the OSD ID, OSD cluster identifier, OSD data disk path, OSD metadata disk path and OSD log disk path of the OSD device are written into the formatted logical volume device;
S16, adding the created OSD device into the distributed storage system, starting the OSD device, and modifying the running state of the OSD device in the cluster to the started state.
3. The method for managing physical hard disks under a distributed storage system according to claim 1 or 2, wherein the disk-pull event in step S2 is handled as follows:
monitoring disk-pull events in the Linux system by using the Linux UDEV event interface;
acquiring the drive letter of the pulled hard disk through the disk-pull event, and finding the related OSD device in the distributed storage cluster through the drive letter of the pulled physical hard disk;
processing and isolating the OSD device information through the found OSD device corresponding to the pulled hard disk.
4. The method for managing physical hard disks in a distributed storage system according to claim 3, wherein the acquiring the drive letter of the pulled hard disk through the disk-pull event and finding the related OSD device in the distributed storage cluster through the drive letter of the pulled physical hard disk comprises:
S216, acquiring the drive letter, ESN and WWN of the pulled hard disk from the disk-pull event;
S217, searching, through the mount points of the OSD devices in the Linux system, the absolute paths of the storage devices used by all OSD devices in the current system and the OSD IDs of the corresponding OSD devices;
S218, searching the major and minor device numbers of the OSD device in Linux according to the obtained absolute path of the OSD device;
S219, judging, according to the major and minor device numbers, whether the device has child devices; if so, go to S218 and continue to search the major and minor device numbers of the child devices; if not, go to S220;
S220, acquiring the drive letter of the child device in Linux;
S221, comparing the drive letter of the child device obtained in S220 with the drive letter of the pulled hard disk obtained in S216; if they match, the OSD device corresponding to the pulled hard disk is determined; otherwise go to S217 and continue to search the next OSD device mount point.
5. The method for managing physical hard disks under a distributed storage system according to claim 3, wherein the processing and isolating the OSD device information through the found OSD device corresponding to the pulled hard disk comprises:
S222, updating the state of the OSD device corresponding to the pulled hard disk in the distributed storage system to the failure state, stopping all services of that OSD device, modifying its health state to unhealthy, and isolating it;
S223, unmounting the mount point of the OSD device corresponding to the pulled hard disk in Linux, and releasing that OSD device;
S224, deleting, according to the OSD device corresponding to the pulled hard disk, the LVM information left in the Linux system by the pulled hard disk, updating the LVM configuration, and deleting the logical volumes and constituent physical volume groups of the pulled hard disk;
S225, deleting the path information of the mapped logical device left in the Linux system by the matched physical hard disk and DM device, updating the FLCD configuration, and deleting the path information and the ESN and WWN information of the pulled hard disk from the FLCD configuration.
6. The method for managing physical hard disks under a distributed storage system according to claim 1 or 2, wherein the disk-insertion event in step S2 is handled as follows:
monitoring disk-insertion events in the Linux system by using the Linux UDEV event interface;
acquiring the ESN and WWN of the inserted physical hard disk through the disk-insertion event, finding the related OSD device in the distributed storage cluster through the ESN and WWN of the inserted physical hard disk, and obtaining the OSD device information;
processing the OSD device through the obtained OSD device information.
7. The method for managing physical hard disks in a distributed storage system according to claim 1 or 2, wherein the acquiring the ESN and WWN of the inserted physical hard disk through the disk-insertion event, finding the related OSD device in the distributed storage cluster through the ESN and WWN of the inserted physical hard disk, and obtaining the OSD device information comprises:
S236, acquiring the ESN and WWN of the inserted hard disk through the disk-insertion event;
S237, acquiring the ESN and WWN of a hard disk already existing in the Linux system;
S238, reading the aggregation device configuration, and judging whether the ESN and WWN of the inserted hard disk obtained in S236 and of the existing hard disk obtained in S237 match that configuration; if they match, it is determined that an aggregation device exists, go to S240; otherwise go to S239;
S239, traversing all hard disks of the Linux system; if a further hard disk exists, go to S237 and continue to acquire hard disks; otherwise go to S241;
S240, building the hard disk aggregation device according to the ESN and WWN of the matched hard disks of the aggregation device, and updating the information of the LVM physical volumes, logical volume groups and logical volumes;
S241, reading the LVM configuration of the aggregation device built in S240, or the LVM configuration of the inserted physical hard disk that did not match any aggregation device in S238-S239, loading the information of the LVM physical volumes, logical volume groups and logical volumes, and updating the LVM log information;
S242, acquiring the log information of the LVM device loaded in S241;
S243, judging, according to the log information of the loaded LVM device, whether OSD information of the distributed storage cluster exists; if so, go to S244;
S244, obtaining the OSD ID and OSD cluster identifier of the OSD device from the LVM log information obtained in S242.
8. The method for managing physical hard disks under a distributed storage system according to claim 1 or 2, wherein the processing the OSD device through the obtained OSD device information comprises:
S245, mounting the OSD device through the obtained OSD ID and OSD cluster identifier;
S246, starting the OSD device service, modifying the state of the OSD device in the distributed storage system to the running state, adding the OSD device into the distributed storage cluster, and modifying its health state to healthy.
9. A management apparatus for a physical hard disk under a distributed storage system, characterized by comprising an OSD device creation module and a disk plug/pull event processing module, wherein:
the OSD device creation module is used for creating an OSD through the distributed storage management platform and writing the positioning information of the OSD device in the distributed storage system, such as the OSD ID and OSD cluster identifier, together with the logical information of the LVM logical volume group and physical volume group, into the physical hard disk through the LVM log;
the disk plug/pull event processing module is used for acquiring a disk-pull event or disk-insertion event by monitoring UDEV events of hardware devices in the kernel, acquiring the ESN, hard disk WWN and drive letter of the pulled or inserted physical hard disk, finding the OSD ID and OSD cluster identifier of the corresponding OSD device in the distributed storage system, and then modifying the running state of the OSD device in the distributed storage cluster through the OSD ID and OSD cluster identifier.
10. The apparatus for managing physical hard disks under a distributed storage system according to claim 9, wherein the disk plug/pull event processing module comprises a disk-pull event processing submodule and a disk-insertion event processing submodule, wherein:
the disk-pull event processing submodule comprises a disk-pull event monitoring subunit, a disk-pull event OSD device searching subunit and a disk-pull event OSD device processing subunit, wherein:
the disk-pull event monitoring subunit is used for monitoring disk-pull events in the Linux system by using the Linux UDEV event interface;
the disk-pull event OSD device searching subunit is used for acquiring the drive letter of the pulled hard disk through the disk-pull event and finding the related OSD device in the distributed storage cluster through the drive letter of the pulled physical hard disk;
the disk-pull event OSD device processing subunit is used for processing and isolating the OSD device information through the found OSD device corresponding to the pulled hard disk;
the disk-insertion event processing submodule comprises a disk-insertion event monitoring subunit, a disk-insertion event OSD device searching subunit and a disk-insertion event OSD device processing subunit, wherein:
the disk-insertion event monitoring subunit is used for monitoring disk-insertion events in the Linux system by using the Linux UDEV event interface;
the disk-insertion event OSD device searching subunit is used for obtaining the ESN and WWN of the inserted physical hard disk through the disk-insertion event, finding the related OSD device in the distributed storage cluster through the ESN and WWN, and obtaining the OSD device information;
the disk-insertion event OSD device processing subunit is used for processing the OSD device through the obtained OSD device information.
CN202010604671.4A 2020-06-29 2020-06-29 Method and device for managing physical hard disk in distributed storage system Active CN111857577B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010604671.4A CN111857577B (en) 2020-06-29 2020-06-29 Method and device for managing physical hard disk in distributed storage system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010604671.4A CN111857577B (en) 2020-06-29 2020-06-29 Method and device for managing physical hard disk in distributed storage system

Publications (2)

Publication Number Publication Date
CN111857577A true CN111857577A (en) 2020-10-30
CN111857577B CN111857577B (en) 2022-04-26

Family

ID=72988708

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010604671.4A Active CN111857577B (en) 2020-06-29 2020-06-29 Method and device for managing physical hard disk in distributed storage system

Country Status (1)

Country Link
CN (1) CN111857577B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113031879A (en) * 2021-05-24 2021-06-25 广东睿江云计算股份有限公司 Cluster storage method based on LVM logic
WO2022257338A1 (en) * 2021-06-10 2022-12-15 苏州浪潮智能科技有限公司 Storage management method and system, storage medium and device

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7529742B1 (en) * 2001-07-30 2009-05-05 Ods-Petrodata, Inc. Computer implemented system for managing and processing supply
CN102664923A (en) * 2012-03-30 2012-09-12 浪潮电子信息产业股份有限公司 Method for realizing shared storage pool by utilizing Linux global file system
CN108287669A (en) * 2018-01-26 2018-07-17 平安科技(深圳)有限公司 Date storage method, device and storage medium
CN108595119A (en) * 2018-03-30 2018-09-28 浙江大华技术股份有限公司 A kind of method of data synchronization and distributed system
US20190222648A1 (en) * 2017-04-14 2019-07-18 Huawei Technologies Co., Ltd. Data Processing Method, Storage System, and Switching Device
CN110399171A (en) * 2019-07-23 2019-11-01 苏州浪潮智能科技有限公司 A kind of hard disk management method, system and associated component
CN110502496A (en) * 2019-07-19 2019-11-26 苏州浪潮智能科技有限公司 A kind of distributed file system restorative procedure, system, terminal and storage medium
CN110515899A (en) * 2019-07-31 2019-11-29 济南浪潮数据技术有限公司 File location method and device
CN110569112A (en) * 2019-09-12 2019-12-13 华云超融合科技有限公司 Log data writing method and object storage daemon device

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113031879A (en) * 2021-05-24 2021-06-25 广东睿江云计算股份有限公司 Cluster storage method based on LVM logic
WO2022257338A1 (en) * 2021-06-10 2022-12-15 苏州浪潮智能科技有限公司 Storage management method and system, storage medium and device
US11907591B1 (en) 2021-06-10 2024-02-20 Inspur Suzhou Intelligent Technology Co., Ltd. Method and system for storage management, storage medium and device

Similar Documents

Publication Publication Date Title
JP4837445B2 (en) Storage system and management apparatus and method
US8443231B2 (en) Updating a list of quorum disks
US8429369B2 (en) Storage management program, storage management method, and storage management apparatus
US8250033B1 (en) Replication of a data set using differential snapshots
US7133964B2 (en) Raid assimilation method and apparatus
US8510526B2 (en) Storage apparatus and snapshot control method of the same
US7007144B2 (en) Method, apparatus, and computer readable medium for managing back-up
US20180260123A1 (en) SEPARATION OF DATA STORAGE MANAGEMENT ON STORAGE devices FROM LOCAL CONNECTIONS OF STORAGE DEVICES
CN111356996B (en) System and computer-implemented method for version verification
US20170083535A1 (en) Managing sequential data store
CN111857577B (en) Method and device for managing physical hard disk in distributed storage system
KR20110050452A (en) Recovery of a computer that includes virtual disks
US11221785B2 (en) Managing replication state for deleted objects
US11977532B2 (en) Log record identification using aggregated log indexes
US10394491B2 (en) Efficient asynchronous mirror copy of thin-provisioned volumes
US8332497B1 (en) Generic resynchronization between persistent management store and dynamic configuration
US20120311227A1 (en) Information storage system, snapshot acquisition method, and data storage medium
US7565568B1 (en) Method and system for virtualization switch failover
JP2009129289A (en) Information processor, information processing method, and program
US7882086B1 (en) Method and system for portset data management
US20090024768A1 (en) Connection management program, connection management method and information processing apparatus
US8024519B2 (en) Catalog recovery through system management facilities reverse transversal
CN113946276A (en) Disk management method and device in cluster and server
US20120089776A1 (en) Systems and methods for raid metadata storage
US20170139787A1 (en) Method and system for tracking information transferred between storage systems

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant