CN116774907A - Method for controlling mis-formatting of a logical unit number and related device - Google Patents

Method for controlling mis-formatting of a logical unit number and related device

Info

Publication number
CN116774907A
Authority
CN
China
Prior art keywords
unit number
storage node
data
address
memory
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210228587.6A
Other languages
Chinese (zh)
Inventor
罗镇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Huawei Technology Co Ltd
Original Assignee
Chengdu Huawei Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Huawei Technology Co Ltd filed Critical Chengdu Huawei Technology Co Ltd
Priority to CN202210228587.6A
Publication of CN116774907A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 - Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 - Interfaces specially adapted for storage systems
    • G06F 3/0602 - Interfaces specifically adapted to achieve a particular effect
    • G06F 3/062 - Securing storage systems
    • G06F 3/0622 - Securing storage systems in relation to access
    • G06F 3/0628 - Interfaces making use of a particular technique
    • G06F 3/0638 - Organizing or formatting or addressing of data
    • G06F 3/064 - Management of blocks
    • G06F 3/0646 - Horizontal data movement in storage systems, i.e. moving data in between storage devices or systems
    • G06F 3/0647 - Migration mechanisms
    • G06F 3/0668 - Interfaces adopting a particular infrastructure
    • G06F 3/067 - Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The embodiment of the application provides a method for controlling mis-formatting of a logical unit number and a related device, wherein the method comprises the following steps: receiving a write data operation on the logical unit number; identifying that the write data operation is a formatting operation on the logical unit number; and protecting the logical unit number. By adopting the embodiment of the application, a write operation on the logical unit number can be identified as a formatting operation on the logical unit number, so that the logical unit number is protected and the data on the logical unit number is prevented from being destroyed by a mistaken formatting operation.

Description

Method for controlling mis-formatting of a logical unit number and related device
Technical Field
The present application relates to the field of storage technologies, and in particular, to a method for controlling mis-formatting of a logical unit number and a related device.
Background
The storage device provides a logical unit number (logical unit number, LUN) for the compute node to store data. However, a LUN on the storage device may be mis-formatted; for example, certain commands may write all-zero data, which amounts to a formatting operation on a LUN that already stores data and clears all or part of the data on the LUN.
To prevent the data on a LUN from being overwritten or corrupted by mis-formatting, existing approaches typically configure periodic snapshots for protection. Periodic snapshots are generated automatically at fixed intervals, and old snapshots are deleted so that the total number of snapshots stays within a limit. If the source LUN data is found to be corrupted, the latest data prior to the corruption can be located among the snapshots and recovered.
Since the number and period of snapshots are limited, it cannot be guaranteed that a snapshot taken before the mis-formatting still exists once the problem is found, so the data on the LUN cannot be protected at the first moment.
Disclosure of Invention
The embodiment of the application provides a method for controlling mis-formatting of a logical unit number and a related device, which can identify that a write operation on the logical unit number is a formatting operation on the logical unit number, thereby protecting the logical unit number and preventing the data on the logical unit number from being destroyed by a mistaken formatting operation.
In a first aspect, an embodiment of the present application provides a method for controlling mis-formatting of a logical unit number, including:
receiving a write data operation on the logical unit number;
identifying that the write data operation is a formatting operation on the logical unit number;
and protecting the logical unit number.
The above-described method may be applied to a storage node and performed by the storage node or by a component (e.g., a chip, software module, or integrated circuit) inside the storage node.
In an embodiment of the present application, the storage node may provide a logical unit number for storing data. The storage node may receive a write data operation on the logical unit number and may identify that the write data operation is a formatting operation on the logical unit number. In order to guard against a mistaken formatting operation, the storage node may protect the logical unit number so that the data stored on it is not corrupted.
In a possible implementation manner of the first aspect, identifying that the write data operation is a formatting operation on the logical unit number includes: in a case where the write data operation is determined to be a write-zero operation, identifying that the write data operation is a formatting operation on the logical unit number.
It can be seen that performing a write-zero operation on a logical unit number may zero out the data on the logical unit number, so the write-zero operation needs to be identified as a formatting operation on the logical unit number, thereby preventing the data on the logical unit number from being mis-formatted by the write-zero operation.
In a possible implementation manner of the first aspect, identifying that the write data operation is a formatting operation on the logical unit number includes: in a case where the write data operation is determined to be a non-first write data operation, identifying that the non-first write data operation is a formatting operation on the logical unit number.
It can be seen that a non-first write data operation indicates that the logical unit number has already been written once, and writing it again may operate on the data already stored on the logical unit number; the non-first write data operation therefore needs to be identified as a formatting operation on the logical unit number, so that the data on the logical unit number is not destroyed by it.
In a possible implementation manner of the first aspect, identifying that the write data operation is a formatting operation on the logical unit number includes: determining that the first address carried by the write data operation falls within a protection area address preset for the logical unit number.
It can be understood that the protection area address preset for the logical unit number indicates that the data at that address is protected; when the first address carried by the write data operation falls within that address range, the write data operation may mis-format the protected data on the logical unit number. A mistaken formatting operation on the logical unit number can thus be discovered in time, and the data on the logical unit number protected with appropriate measures.
In a possible implementation manner of the first aspect, the protection area address includes one or more of the following: a system default address, a user-set address, and a system-identified address.
It can be seen that the protection area address can be set in different ways to meet the requirements of different users.
In a possible implementation manner of the first aspect, protecting the logical unit number includes: triggering activation of an internal snapshot of the logical unit number; or, in a case where a dual-write feature service exists on the logical unit number, disconnecting the remote device involved in the dual-write feature service.
It can be seen that the scheme can provide different protection measures for different services, so that the data can be protected at the first moment and the latest data prior to the mis-formatting preserved.
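By way of illustration only, the following minimal Python sketch shows what these two protection measures might look like on a storage node; every name in it (protect_lun, has_dual_write_service, and so on) is hypothetical and is not prescribed by the application:

```python
# Illustrative sketch only: all names below are hypothetical and are not
# prescribed by the application.

def protect_lun(lun) -> None:
    """Protect a LUN once a write has been identified as a formatting
    operation on it."""
    if lun.has_dual_write_service:
        # A dual-write feature service (e.g. active-active or synchronous
        # remote replication) exists: disconnect the remote device so the
        # remote copy is not destroyed as well.
        lun.disconnect_remote_device()
    else:
        # Otherwise, trigger activation of an internal snapshot so the
        # latest data before the suspected mis-formatting is preserved.
        lun.activate_internal_snapshot()
```

Disconnecting the remote device first matters because, as discussed below, mis-formatting one end of a dual-write service can otherwise destroy the data at the other end as well.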
In a second aspect, an embodiment of the present application further provides a storage node, including:
a communication unit, configured to receive a write data operation on a logical unit number;
a processing unit, configured to identify that the write data operation is a formatting operation on the logical unit number;
the processing unit being further configured to protect the logical unit number.
In a possible implementation manner of the second aspect, the processing unit is specifically configured to:
in a case where the write data operation is determined to be a write-zero operation, identify that the write data operation is a formatting operation on the logical unit number.
In a possible implementation manner of the second aspect, the processing unit is specifically configured to:
in a case where the write data operation is determined to be a non-first write data operation, identify that the non-first write data operation is a formatting operation on the logical unit number.
In a possible implementation manner of the second aspect, the processing unit is specifically configured to:
determine that the first address carried by the write data operation falls within a protection area address preset for the logical unit number.
In a possible implementation manner of the second aspect, the protection area address includes one or more of the following: a system default address, a user-set address, and a system-identified address.
In a possible implementation manner of the second aspect, the processing unit is specifically configured to:
trigger activation of an internal snapshot of the logical unit number; or,
in a case where a dual-write feature service exists on the logical unit number, disconnect the remote device involved in the dual-write feature service.
In a third aspect, embodiments of the present application provide a storage node comprising at least one processor and at least one memory;
the memory stores a computer program;
the storage node performs the method described in the first aspect above when the processor executes the computer program.
The storage node described in the third aspect may include a processor dedicated to performing the method (referred to as a special-purpose processor for convenience), or may include a processor that performs the method by calling a computer program, such as a general-purpose processor. Alternatively, the at least one processor may include both special-purpose and general-purpose processors.
Alternatively, the above-mentioned computer program may be stored in a memory. The memory may be a non-transitory (non-transitory) memory, such as a read-only memory (read only memory, ROM); it may be integrated on the same device as the processor or provided separately on different devices. The embodiments of the present application limit neither the type of the memory nor the manner in which the memory and the processor are arranged.
In a possible embodiment, the at least one memory is located outside the storage node.
In yet another possible embodiment, the at least one memory is located within the storage node.
In yet another possible embodiment, a portion of the at least one memory is located within the storage node and another portion of the at least one memory is located outside the storage node.
In the present application, the processor and the memory may also be integrated in one device, i.e. the processor and the memory may also be integrated.
In a fourth aspect, embodiments of the present application provide a computer readable storage medium having instructions stored therein which, when executed on at least one processor, implement the method described in the first aspect.
In a fifth aspect, the present application provides a computer program product comprising computer instructions which, when run on at least one processor, implement the method described in the first aspect. The computer program product may be a software installation package, which may be downloaded and executed on a storage node in case the aforementioned method is required.
For the advantages of the technical solutions provided in the second to fifth aspects of the present application, reference may be made to the advantages of the technical solutions of the first aspect; they are not described here again.
Drawings
The drawings used in the embodiments of the present application are described below.
FIG. 1 is a schematic diagram of a scenario in which data is protected from mis-formatting by means of snapshot LUNs according to an embodiment of the application;
FIG. 2A is a schematic diagram of a storage system according to an embodiment of the present application;
FIG. 2B is a schematic diagram illustrating an internal structure of a storage node according to an embodiment of the present application;
fig. 2C is a schematic structural diagram of a control unit according to an embodiment of the present application;
FIG. 3A is a schematic diagram of a memory pool according to an embodiment of the present application;
FIG. 3B is a schematic diagram of a memory of each level included in a memory pool according to an embodiment of the present application;
FIG. 4A is a schematic diagram of a memory pool including a portion of a type of memory according to an embodiment of the present application;
FIG. 4B is a schematic diagram of another network architecture of a memory pool according to an embodiment of the present application;
FIG. 4C is a schematic diagram of another network architecture of a storage pool according to an embodiment of the present application;
FIG. 5 is a flowchart of a method for controlling mis-formatting of a logical unit number according to an embodiment of the present application;
FIG. 6 is a schematic flow chart of setting a protection area address according to an embodiment of the present application;
FIG. 7 is a schematic flow chart of a protection strategy for logical unit numbers according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of a storage node according to an embodiment of the present application.
Detailed Description
Embodiments of the present application will be described below with reference to the accompanying drawings in the embodiments of the present application.
For ease of understanding, some concepts related to the embodiments of the application are first described below by way of example for reference.
1. logical unit number (logical unit number, LUN)
It will be appreciated that the number of devices that can be attached to a small computer system interface (small computer system interface, SCSI) bus is limited, typically 8 or 16, and these devices are described by target (target) identification numbers (ID). Each device on the SCSI bus is assigned a unique target ID: any number from 0 to 7 on an 8-bit (narrow) bus, or from 0 to 15 on a 16-bit (wide) bus. However, the number of objects that actually need to be described far exceeds this. The concept of a LUN is therefore introduced; that is, the LUN serves to extend the target ID. There may be multiple LUN devices (devices) under each target, each typically abbreviated as a LUN.
The LUN ID does not equate to a particular device; it is just a number and does not represent any entity attribute. In many cases a LUN is not a visible entity but a virtual object. For example, for a disk array cabinet, the computing node regards the cabinet as one target device. When, for certain special needs, the disk space of the disk array cabinet must be divided into several smaller units for the computing nodes to use, these smaller disk resources are referred to as LUN0, LUN1, LUN2, and so on, and the smallest storage object level identified by the operating system is the LUN device.
That is, a LUN is a partition of a storage unit, and its identifier may specifically include a number, a symbol, a character string, and the like.
2. Mis-formatting
A formatting operation performed on a LUN that has data stored on it, clearing the data on the LUN in whole or in part. The source of the mis-formatting may be an intentional or unintentional command operation, the behavior of virus software, and the like.
In order to facilitate understanding of the embodiments of the present application, the technical problems to be solved by the present application are first analyzed and set out below.
In general, in order to prevent the data stored on a LUN from being overwritten or destroyed by mis-formatting, certain protection measures are usually adopted to prevent or stop the destruction of the data. Referring to FIG. 1, FIG. 1 is a schematic diagram of a scenario in which data is protected from mis-formatting by means of snapshot LUNs according to an embodiment of the present application. For the problem of a mis-formatted LUN, FIG. 1 provides protection by configuring periodic snapshots. Periodic snapshots are generated automatically at fixed intervals, and old snapshots are deleted so that the total number of snapshots stays within a limit. If the data in the source LUN is found to be corrupted, the latest data prior to the corruption can be located among the snapshots and recovered.
However, because the number of snapshots and their period are limited, it cannot be guaranteed that a snapshot taken before the mis-formatting still exists when the problem is found. Referring to FIG. 1, when the mis-formatting occurs at time T3, the data on the source LUN is destroyed; the data at time T1 is on periodic snapshot 1, and the data at time T3 is on periodic snapshot 2. That is, because of the snapshot interval there is no periodic snapshot at time T2, so the data at time T2, just before the mis-formatting, cannot be retrieved.
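The gap can be made concrete with a small sketch. Assuming a hypothetical fixed-size ring that keeps at most max_count periodic snapshots (the names are illustrative, not part of the application), recovery can only return the newest snapshot taken before the corruption, which may be older than the last good state at T2:

```python
from bisect import bisect_left

class PeriodicSnapshotRing:
    """Hypothetical fixed-size ring of periodic snapshots (illustration only)."""

    def __init__(self, max_count: int):
        self.max_count = max_count
        self.snapshots = []  # (timestamp, snapshot_id), oldest first

    def take(self, timestamp: int, snapshot_id: str) -> None:
        self.snapshots.append((timestamp, snapshot_id))
        if len(self.snapshots) > self.max_count:
            self.snapshots.pop(0)  # the oldest snapshot is deleted

    def latest_before(self, corruption_time: int):
        """Newest snapshot taken strictly before the corruption, if any."""
        idx = bisect_left(self.snapshots, (corruption_time,))
        return self.snapshots[idx - 1] if idx > 0 else None

# If periodic snapshots exist only at T1 and T3 and the LUN is mis-formatted
# at T3, recovery falls back to the T1 snapshot: the state at T2 is lost.
```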
In addition, for dual-write services such as synchronous remote replication or active-active services, mis-formatting a LUN may destroy the remote data as well. For example, when one end of the dual-write service is mis-formatted, the data at the other end is destroyed too.
Based on the above description, the technical problems to be solved by the present application may include the following:
1. The prior art cannot protect the data at the first moment.
2. The prior art lacks protection for dual-write services.
In order to solve the above technical problems, the present application first provides a system. Referring to FIG. 2A, FIG. 2A is a schematic diagram of a storage system according to an embodiment of the application. As can be seen from FIG. 2A, the storage system comprises a computing node cluster, a storage node cluster and a user device 300.
The computing node cluster includes one or more computing nodes 100 (two computing nodes 100 are shown in fig. 2A, but are not limited to two computing nodes 100), and the computing nodes 100 may communicate with each other. Computing node 100 is a computing device on the user side, such as a server, desktop computer, or the like.
At the hardware level, a processor and a memory (not shown in FIG. 2A) are provided in the computing node 100. At the software level, an application (application) 101 and a client (client) 102 run on the computing node 100. The application 101 is a generic term for the various application programs presented to the user. The client 102 is configured to receive input/output (IO) requests triggered by the application and to interact with the storage node 20, sending the IO requests to the storage node 20. The client 102 is also configured to receive data from the storage node 20 and forward it to the application 101. The client 102 may also be implemented by a hardware component located within the computing node 100. It will be appreciated that when the client 102 is a software program, the functionality of the client 102 is implemented by a processor of the computing node 100 running the program.
Illustratively, the computing node 100 may be connected to a network (e.g., a network switch in the network) through an initiator, and the storage node 20 may also be connected to the network (e.g., a network switch in the network). Common networking manners are fibre channel (FC) networking and networking based on the internet small computer system interface (internet small computer system interface, iSCSI); the protocol is not limited here. In each networking mode, the initiator on any computing node 100 has a globally unique world wide name (world wide name, WWN).
Thus, any computing node 100 may access any storage node 20 in the storage node cluster. The storage node cluster includes a plurality of storage nodes 20 (three storage nodes 20 are shown in FIG. 2A, but the cluster is not limited to three), and the storage nodes 20 may be interconnected. A storage node 20 is, for example, a server, a controller of a desktop computer or of a storage array, a hard disk frame, and so on. Functionally, the storage node 20 is mainly used for computation or processing of data and the like. In addition, the storage node cluster also includes a management node (not shown in FIG. 2A). The management node is used for creating and managing the memory pool. The storage nodes 20 elect one of them to assume the role of the management node, and the management node may communicate with any storage node 20.
In hardware, as shown in FIG. 2A, the storage node 20 includes at least a processor, a memory, and a control unit 201. The processor 202 is a central processing unit (central processing unit, CPU) for processing data from outside the storage node 20 or data generated inside the storage node 20. The memory is a device for storing data, and may be an internal memory or a hard disk. An internal memory exchanges data directly with the processor; it can be read and written at any time, is fast, and serves as temporary data storage for the operating system or other running programs. The memory includes at least two types; for example, it may be a random access memory (random access memory, RAM) or a read-only memory (read only memory, ROM). For example, the random access memory may be a dynamic random access memory (dynamic random access memory, DRAM) or a storage class memory (storage class memory, SCM). DRAM is a semiconductor memory and, like most random access memory (RAM), is a volatile memory (volatile memory) device. SCM is a composite storage technology that combines the characteristics of both traditional storage devices and memory; storage class memory provides faster read and write speeds than a hard disk, but is slower to access than DRAM and cheaper than DRAM.
However, the DRAM and the SCM are only exemplary in this embodiment; the memory may also include other random access memories, such as static random access memory (static random access memory, SRAM). The read-only memory may be, for example, a programmable read-only memory (programmable read only memory, PROM) or an erasable programmable read-only memory (erasable programmable read only memory, EPROM). Alternatively, the memory may be a dual in-line memory module (dual in-line memory module, DIMM), i.e., a module composed of dynamic random access memory (DRAM). In the following description, DRAM and SCM are used as examples, which does not mean that the storage node 20 contains no other types of memory.
The memory in this embodiment may also be a hard disk. Unlike the internal memory 203, the hard disk reads and writes data more slowly and is generally used to store data persistently. Taking storage node 20a as an example, one or more hard disks may be disposed inside it; alternatively, a hard disk frame (as shown in FIG. 2B) may be attached outside the storage node 20a, with a plurality of hard disks disposed in the hard disk frame. In either deployment, these hard disks may be regarded as hard disks contained in storage node 20a. A hard disk may be a solid-state drive, a mechanical hard disk, or another type of hard disk. Similarly, the other storage nodes in the storage node cluster, such as storage node 20b and storage node 20c, may also contain various types of hard disks. A storage node 20 may contain one or more memories of the same type.
The hard disk included in the memory pool in this embodiment may also have a memory interface, and the processor may directly access the hard disk.
In one possible implementation, the user device 300 is an electronic device with data processing and data transceiving capabilities. For example, it may be a handheld terminal, a wearable device, a vehicle-mounted device, a robot, or the like, or a component (e.g., a chip or an integrated circuit) included in such a device. For example, when the user device is a handheld terminal, it may be a mobile phone (mobile phone), a tablet (pad), a computer (such as a notebook or a palmtop computer), and so on. It is understood that the user device 300 may be a device operated by an administrator. Optionally, the administrator may send an address range to the storage node 20 through the user device 300, and the storage node 20 may set the address range as the protection area address of a logical unit number (LUN). The address range is the protection area address of the logical unit number and covers a start logical block address to an end logical block address.
It should be noted that in one possible implementation, the computing node 100 and the user equipment 300 may be integrated into one device.
In one possible implementation, the computing node 100 may issue an input/output (I/O) request to the storage node 20 via an initiator, and the storage node 20 may identify that the I/O comes from a particular computing node 100 via the initiator WWN carried in the received I/O.
In the storage system of the present embodiment, when the computing node 100 receives write data operations sent by a user, the computing node 100 may send the data in these write data operations to the storage node 20. Thus, the storage node 20 may receive a write data operation on a logical unit number from the computing node. Then, in a case where the write data operation is a write-zero operation or a non-first write data operation, the storage node 20 may identify the write data operation as a formatting operation on the logical unit number. Further, the storage node 20 may identify the write data operation as a formatting operation on the logical unit number by determining that the first address carried by the write data operation falls within a protection area address preset for the logical unit number. The storage node 20 then enforces a protection policy on the logical unit number: for example, an alarm is triggered, internal data protection (a snapshot or another form) is activated, and the communication link of a dual-write feature service (say, active-active or synchronous remote replication) is proactively disconnected.
Referring to FIG. 2B, FIG. 2B is a schematic diagram of the internal structure of the storage node 20. In practical applications, the storage node 20 may be a server or a storage array. As shown in FIG. 2B, the storage node 20 includes a control unit in addition to the processor and the memory. Because the access latency of the memory is low, the overhead of operating system scheduling and of the software itself may become the bottleneck of data processing. To reduce software overhead, this embodiment introduces a hardware component, the control unit, which offloads IO access to hardware and reduces the impact of CPU scheduling and the software stack. First, the storage node 20 has its own control unit 22 for communicating with the computing node 100 and also with other storage nodes. Specifically, the storage node 20 may receive requests from the computing node 100 via the control unit 22, send requests to the computing node 100 via the control unit 22, send requests to another storage node 20 via the control unit 22, or receive requests from another storage node 20 via the control unit 22. Second, the various memories within the storage node 20 may communicate with each other through the control unit 22, and may also communicate with the computing node 100 through the control unit 22. Finally, if the hard disks contained in the storage node 20 are located inside it, these hard disks may communicate with each other via the control unit 22 or with the computing node 100 via the control unit 22. If the hard disks are located in a hard disk frame attached to the storage node 20, a control unit 24 is disposed in the hard disk frame; the control unit 24 communicates with the control unit 22, and a hard disk may send data or instructions to the control unit 22 through the control unit 24, or receive data or instructions from the control unit 22 through the control unit 24. In addition, the storage node 20 may also include a bus (not shown in FIG. 2B) for communication among the components within the storage node 20.
Referring to FIG. 2C, FIG. 2C is a schematic structural diagram of a control unit. Taking the control unit 22 as an example, it includes a communication unit 220 and a computing unit 221. The communication unit provides efficient network transmission capability for external or internal communication; a network interface controller (network interface controller, NIC) is taken as an example. The computing unit 221 is a programmable electronic component for performing computation and other processing on data; this embodiment takes a data processing unit (data processing unit, DPU) as an example. The DPU has the generality and programmability of a CPU but is more specialized, and can run efficiently on network packets, storage requests or analysis requests. The DPU is distinguished from the CPU by a greater degree of parallelism (it needs to handle a large number of requests). Alternatively, the DPU may be replaced by a graphics processing unit (graphics processing unit, GPU), an embedded neural network processor (NPU), or the like. The DPU provides data offload services for the memory pool, such as address indexing or address querying functions, partitioning functions, and filtering, scanning and similar operations on data. After an IO request reaches the storage node 20 through the NIC, it is processed directly on the computing unit 221, bypassing the CPU and the operating system in the storage node 20, thinning the software stack and reducing the impact of CPU scheduling. Taking a read IO request as an example, after the NIC receives the read IO request sent by the computing node 100, the DPU may directly query the index table for the information corresponding to the request. The control unit 22 further includes a DRAM 222, which is physically identical to the DRAM described in FIG. 2B, except that DRAM 222 is the control unit 22's own memory, used to temporarily store data or instructions passing through the control unit 22, and it does not form part of the memory pool. In addition, the control unit 22 may map DRAM 222 to the computing node 100 so that the space of DRAM 222 is visible to the computing node 100, thereby converting IO access into access based on memory semantics. The control unit 24 is similar in structure and function to the control unit 22 and is not described in detail.
Next, the memory pool provided in this embodiment is described. FIG. 3A is a schematic diagram of the architecture of the memory pool. The memory pool includes a plurality of different types of memory, and each type of memory may be regarded as one level. The performance of the memory at each level differs from that of the memory at the other levels. Performance in this application is considered mainly in terms of operation speed and/or access latency. FIG. 3B is a schematic diagram of the memory at each level of the memory pool according to this embodiment. As shown in FIG. 3B, the memory pool is made up of the memory in each storage node 20. The DRAM in each storage node forms the first level of the memory pool, because DRAM has the highest performance among the various types of memory. The performance of SCM is lower than that of DRAM, so the SCM in each storage node forms the second level. The performance of the hard disk is lower still, so the hard disks in each storage node form the third level. Although only three types of memory are shown in FIGS. 3A and 3B, a variety of different types of memory may be deployed within the storage node 20 in practice; that is, various types of memory or hard disks may all be part of the memory pool, and memory of the same type located on different storage nodes belongs to the same level of the memory pool. The application limits neither the types of memory contained in the memory pool nor the number of levels. The levels of the memory pool are an internal division that is not perceived by upper-layer applications. It should be noted that although memory of the same type on each storage node is at the same level, for a given storage node the performance of using its local DRAM is higher than that of using the DRAM of other storage nodes; similarly, the performance of using its local SCM is higher than that of using the SCM of other storage nodes, and so on. Therefore, when memory space of a certain level needs to be allocated, the storage node preferentially allocates space belonging to that level locally, and allocates space from the same level of other storage nodes when the local space is insufficient. In addition, memory space may be allocated according to policies such as load balancing or capacity balancing.
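By way of illustration, a minimal sketch (hypothetical names, not the application's API) of the local-first, same-level allocation policy described above:

```python
# Illustrative sketch only; the allocation API below is hypothetical.

def allocate_from_pool(local_node, peers, level: str, size: int):
    """Allocate `size` bytes at a given level (e.g. "DRAM", "SCM", "hard disk"),
    preferring the local storage node as described above."""
    # Prefer space that belongs to the requested level locally.
    if local_node.free_space(level) >= size:
        return local_node.reserve(level, size)
    # Local space is insufficient: allocate from the same level of other
    # storage nodes, here ordered by a simple load-balancing policy.
    for peer in sorted(peers, key=lambda node: node.load):
        if peer.free_space(level) >= size:
            return peer.reserve(level, size)
    raise MemoryError(f"no free {level} space of {size} bytes in the pool")
```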
The memory pool shown in FIG. 3A or 3B includes all types of memory in the storage nodes. However, in other embodiments, as shown in FIG. 4A or 4B, the memory pool may contain only some types of memory, e.g., only higher-performance memory such as DRAM and SCM, while excluding relatively lower-performance memory such as hard disks.
As shown in FIG. 4B, in another network architecture of the memory pool provided in this embodiment, the storage node and the computing node are integrated in the same physical device; in this embodiment these integrated devices are collectively referred to as storage nodes. The application is deployed inside a storage node 20, so the application can trigger a write data operation or a read data request directly through the client in the storage node 20, to be processed by that storage node 20 or sent to another storage node 20 for processing. In this case, a read or write data operation sent by the client to the local storage node 20 specifically means that the client sends a data access request to the processor. Apart from this, the components and functions included in the storage node 20 are similar to those of the storage node 20 in FIG. 4A and are not described again here. As with the memory pool shown in any of FIGS. 3A to 4A, the memory pool in this network architecture may include all types of memory in the storage nodes, or only some types, for example only higher-performance memory such as DRAM and SCM, while excluding relatively lower-performance memory such as hard disks (as shown in FIG. 4B).
In addition, in the memory pools shown in FIGS. 3A-4B, not every storage node in the storage node cluster must contribute storage space to the memory pool; the memory pool may cover only some of the storage nodes in the cluster. In some application scenarios, two or more memory pools may also be created in the storage node cluster, each covering multiple storage nodes that provide storage space for it. The storage nodes occupied by different memory pools may or may not overlap. In summary, the memory pool in this embodiment is built on at least two storage nodes, and the storage space it contains comes from at least two different types of memory.
When the memory pool contains only the higher-performance memory (e.g., DRAM and SCM) in the storage cluster, the management node may additionally construct the lower-performance memory (e.g., hard disks) in the storage cluster into a storage pool. FIG. 4C illustrates the storage pool using the network architecture shown in FIG. 4A as an example. Like the memory pool, the storage pool shown in FIG. 4C spans at least two storage nodes, and its storage space is composed of one or more types of hard disks in those storage nodes. When the storage cluster contains both a memory pool and a storage pool, the storage pool is used to store data persistently, particularly data with a lower access frequency, while the memory pool is used to store data temporarily, particularly data with a higher access frequency. Specifically, when the amount of data stored in the memory pool reaches a set threshold, part of the data in the memory pool is written into the storage pool for storage. It will be appreciated that the storage pool may also be built in the network architecture shown in FIG. 4B, with principles similar to those described above.
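The demotion rule at the end of the paragraph above could look roughly as follows; the threshold and the cold-data criterion are illustrative assumptions, not specified by the application:

```python
# Illustrative sketch only; threshold and cold-data criterion are assumptions.

def demote_if_needed(memory_pool, storage_pool, threshold_bytes: int) -> None:
    """When the memory pool fills past the set threshold, write part of its
    data (here: the least frequently accessed pages) into the storage pool."""
    while memory_pool.used_bytes > threshold_bytes:
        page = memory_pool.coldest_page()  # lowest access frequency
        storage_pool.write(page.global_address, page.data)
        memory_pool.evict(page)            # free space in the memory pool
```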
With respect to the creation of the memory pool: each storage node 20 periodically reports the status information of its memory to the management node through a heartbeat channel. There may be one management node or several. It may be deployed as an independent node in the storage node cluster or jointly with a storage node 20; in other words, the role of the management node is assumed by one or more storage nodes 20. Status information of the memory includes, but is not limited to: the types of the various memories contained by the storage node, their health status, the total capacity of each memory, its available capacity, and so on. The management node creates the memory pool according to the collected information; creation means that the storage spaces provided by the storage nodes 20 are gathered and managed collectively as the memory pool, so the physical space of the memory pool comes from the various memories contained in the storage nodes. However, in some scenarios, a storage node 20 may selectively provide memory to the memory pool based on its own condition, such as the health status of the memory. In other words, some memory in some storage nodes may not be part of the memory pool.
After the information is collected, the management node needs to address the storage space in the memory pool uniformly. Through unified addressing, each segment of space in the memory pool has a unique global address. The space indicated by a global address is unique in the memory pool, and each storage node 20 knows the meaning of the address. After physical space is allocated to a segment of the memory pool, the global address of that space has a corresponding physical address indicating on which memory of which storage node the space represented by the global address is actually located, along with the offset in that memory, i.e., the location of the physical space. Each such space is referred to here as a "page", as will be described in more detail below. In practical applications, in order to ensure reliability of data, an erasure coding (erasure coding, EC) check mechanism or a multi-copy mechanism is often adopted to implement data redundancy. The EC check mechanism divides data into at least two data fragments and computes check fragments of the at least two data fragments according to a certain check algorithm; when one data fragment is lost, the data can be recovered from the other data fragments and the check fragments. For such data, the global address is a set of finer-grained global addresses, each corresponding to the physical address of a data fragment or check fragment. The multi-copy mechanism stores at least two identical copies of the data at two different physical addresses. When one of the copies is lost, another copy can be used for recovery. For such data too, the global address is a set of finer-grained global addresses, each corresponding to the physical address of one copy of the data.
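As a toy illustration of the EC check mechanism, the following sketch splits data into two fragments and computes a single XOR check fragment; real systems use more general erasure codes and more fragments:

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def ec_encode(data: bytes):
    """Split data into two equal fragments and compute an XOR check fragment."""
    half = len(data) // 2          # assume an even length for simplicity
    d0, d1 = data[:half], data[half:]
    return d0, d1, xor_bytes(d0, d1)

def ec_recover(surviving: bytes, check: bytes) -> bytes:
    """Recover a lost data fragment from the other fragment and the check."""
    return xor_bytes(surviving, check)

d0, d1, check = ec_encode(b"ABCDEFGH")
assert ec_recover(d1, check) == d0  # the lost fragment d0 is reconstructed
```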
The management node may allocate a physical space for each global address after creating the memory pool, or may allocate a physical space for a global address corresponding to a write data operation when receiving the write data operation. The correspondence between each global address and its physical address is recorded in an index table, which the management node synchronizes to each storage node 20. Each storage node 20 stores the index table so as to query the physical address corresponding to the global address according to the index table when the data is read and written later.
In some application scenarios, the memory pool does not directly expose its storage space to the computing node 100; instead, the storage space is virtualized into logical units (logical unit, LU) for use by the computing node 100. Each logical unit has a unique logical unit number (logical unit number, LUN). Since the computing node 100 directly perceives the logical unit number, those skilled in the art usually refer to the logical unit directly as a LUN. Each LUN has a LUN ID that identifies it. The memory pool provides storage space for LUNs at page granularity; in other words, when a storage node 20 applies to the memory pool for space, the memory pool allocates space one page, or an integer multiple of a page, at a time. The size of a page may be 4 KB, 8 KB, or the like; the present application does not limit the page size. The specific location of data within a LUN may be determined by the start address and the length (length) of the data. Those skilled in the art usually call the start address a logical block address (logical block address, LBA). It will be appreciated that the three factors LUN ID, LBA and length identify a certain address segment, which can be indexed to a global address. To ensure that data is stored evenly across the storage nodes 20, the computing node 100 is typically routed using a distributed hash table (distributed hash table, DHT) scheme, in which the hash ring is divided evenly into parts, each part called a partition, with one partition corresponding to one address segment as described above. A data access request sent by the computing node 100 to the storage node 20 is located on an address segment; for example, data is read from or written to that address segment.
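A rough sketch of how the three factors may be mapped onto a partition of the hash ring; the hash function and partition count are illustrative assumptions, not values from the application:

```python
import hashlib

PARTITION_COUNT = 4096  # illustrative; the hash ring is divided evenly

def partition_of(lun_id: int, lba: int, length: int) -> int:
    """Map the address segment identified by the three factors
    (LUN ID, LBA, length) onto one partition of the hash ring."""
    key = f"{lun_id}:{lba}:{length}".encode()
    h = int.from_bytes(hashlib.sha1(key).digest()[:8], "big")
    return h % PARTITION_COUNT
```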
In the application scenario described above, LUN semantics are used for communication between the computing node 100 and the storage node 20. In another application scenario, memory semantics are used for this communication. At this time, the control unit 22 maps the space of its DRAM to the computing node 100 so that the computing node 100 can perceive the space of the DRAM (referred to as virtual space in this embodiment) and access the virtual space. In this scenario, the read/write data operations sent by the computing node 100 to the storage node 20 no longer carry a LUN ID, LBA and length, but other logical addresses, such as a virtual space ID and the start address and length within the virtual space. In yet another application scenario, the control unit 22 may map the space in the memory pool that it manages to the computing node 100, so that the computing node 100 can perceive that portion of space and obtain the global address corresponding to it. For example, the control unit 22 in storage node 20a manages the storage space provided to the memory pool by storage node 20a, the control unit 22 in storage node 20b manages the storage space provided by storage node 20b, the control unit 22 in storage node 20c manages the storage space provided by storage node 20c, and so on. The entire memory pool is thus visible to the computing node 100, and the computing node 100 may directly specify the global address of the data to be written when sending data to a storage node.
The following describes the space allocation procedure, taking as an example an application applying to the memory pool for memory space. In this case, the application refers to an internal service of the storage node; for example, storage node 20a internally generates a memory application instruction, which includes the size of the requested space and the type of memory. For ease of understanding, assume here that the requested space is 16 KB and the memory type is SCM. In short, the size of the requested space is determined by the size of the data to be stored, and the type of memory requested is determined by the hot/cold information of the data. Storage node 20a obtains a free global address from the stored index table, e.g., the address interval [000001-000004], where the space of address 000001 is one page. A free global address is a global address not yet occupied by any data. Then, storage node 20a queries whether the local SCM has 16 KB of free space; if so, it allocates local space for the global address; if not, it continues to query whether the SCM of other storage nodes 20 has 16 KB of free space, which can be done by sending query instructions to the other storage nodes 20. Since some storage nodes 20 are farther away from storage node 20a, in order to reduce latency, storage node 20a may preferentially query nearby storage nodes 20 when the local node cannot supply 16 KB of free space. After obtaining the physical address, storage node 20a records the correspondence between the global address and the physical address in the index table and synchronizes it to the other storage nodes. After the physical address is determined, storage node 20a may use the space corresponding to it to store data. Alternatively, the application refers to the application 101 in the computing node 100, in which case the memory application instruction is generated by the computing node 100 and then sent to storage node 20a. The user may then specify the size of the requested space and the type of memory through the computing node 100.
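Putting the steps of this example together, a hedged sketch (hypothetical names standing in for internal interfaces) of the allocation flow for the 16 KB SCM request might look as follows:

```python
# Illustrative sketch only; index_table, allocate_local, peers, etc. are
# hypothetical names standing in for internal interfaces.

def apply_for_space(node, size: int = 16 * 1024, mem_type: str = "SCM"):
    """Sketch of storage node 20a serving a memory application instruction."""
    global_addr = node.index_table.find_free_global_address(size)

    # Try the local memory of the requested type first.
    phys = node.allocate_local(mem_type, size)
    if phys is None:
        # Query other storage nodes, nearer nodes first, to reduce latency.
        for peer in sorted(node.peers, key=lambda p: p.distance):
            phys = peer.allocate(mem_type, size)
            if phys is not None:
                break
    if phys is None:
        raise MemoryError(f"no free {mem_type} space in the memory pool")

    # Record the global-to-physical correspondence in the index table and
    # synchronize it to the other storage nodes.
    node.index_table.record(global_addr, phys)
    node.sync_index_table_to_peers()
    return global_addr
```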
The above-mentioned index table records the correspondence between global addresses and partition IDs and between global addresses and physical addresses, and can also record attribute information of the data, for example the hot/cold information or the residency policy of the data at global address 000001. Migration of data between the various memories, or the setting of attributes, can subsequently be carried out based on this attribute information. It should be understood that the attribute information of the data is only an option of the index table and is not necessarily recorded.
When a new storage node joins the storage node cluster, the management node collects node update information, brings the new storage node into the memory pool, and addresses the storage space contained in that storage node, thereby generating new global addresses and refreshing the correspondence between partitions and global addresses (the total number of partitions remains unchanged regardless of expansion or reduction). The expansion also applies to the situation where memories or hard disks are added to some storage nodes: the management node periodically collects the status information of the memories contained in each storage node, and if a new memory has been added, brings it into the memory pool, addresses the new memory space, thereby generating new global addresses, and then refreshes the correspondence between partitions and global addresses. Similarly, the memory pool provided in this embodiment also supports capacity reduction, as long as the correspondence between global addresses and partitions is updated.
Further optionally, the storage node 20 may also include a communication interface (not illustrated in fig. 2A-4C). Still further optionally, a bus (not shown in fig. 2A-4C) may also be included. Wherein the processor, the communication interface and the memory are connected by a bus.
At least one processor in the storage node 20 is configured to perform the method for controlling mis-formatting of a logical unit number.
In one possible implementation, at least one processor in the storage node 20 is configured to call computer instructions to perform the following:
receiving, through a communication interface, a write data operation on a logical unit number;
identifying that the write data operation is a formatting operation on the logical unit number;
and protecting the logical unit number.
In yet another possible implementation, the processor in the storage node 20 is specifically configured to:
in a case where the write data operation is determined to be a write-zero operation, identifying that the write data operation is a formatting operation on the logical unit number.
In yet another possible implementation, the processor in the storage node 20 is specifically configured to:
in a case where the write data operation is determined to be a non-first write data operation, identifying that the non-first write data operation is a formatting operation on the logical unit number.
In yet another possible implementation, the processor in the storage node 20 is specifically configured to:
determining that the first address carried by the write data operation falls within a protection area address preset for the logical unit number.
In yet another possible embodiment, the protection area address includes one or more of the following: a system default address, a user-set address, and a system-identified address.
In yet another possible implementation, the processor in the storage node 20 is specifically configured to:
triggering activation of an internal snapshot of the logical unit number; or,
in a case where a dual-write feature service exists on the logical unit number, disconnecting the remote device involved in the dual-write feature service.
Each memory in the memory pool provided in this embodiment provides a memory interface for the processor, so that the processor sees a continuous space, and can directly perform data reading and writing operations on the memory in the memory pool.
In the storage system of this embodiment, the memory pool is created from memories of various performance levels located on different storage nodes, realizing a cross-node memory pool that integrates memories of different performance. Various types of memory (whether internal memory or hard disk) can thus serve as storage resources providing storage services to the upper layer, better exploiting the performance advantages of the storage system. Because the memory pool contains memories of different performance, data migration between them can be controlled based on access frequency: when the access frequency of data is high, the data can be migrated to high-performance memory to improve read efficiency; when it is low, the data can be migrated to low-performance memory to save space in the high-performance memory. In addition, the memory pool in the present application provides storage space for computing nodes or LUNs, which changes the processor-centric architecture of memory resources.
Referring to FIG. 5, FIG. 5 is a flowchart of a method for controlling mis-formatting of a logical unit number according to an embodiment of the present application. The method may be implemented based on the systems shown in FIGS. 2A to 4C and includes at least the following steps.
In step S501, a write data operation on a logical unit number is received.
Specifically, the compute node sends a write data operation on the logical unit number to the storage node. Accordingly, the storage node receives the write data operation on the logical unit number from the compute node.
The write data operation is used to write the carried data to the first address of the logical unit number LUN of the storage node. In the application scenario of LUN semantics, the logical address of the data includes a LUN ID, an LBA, and a length. In the application scenario of memory semantics, the logical address includes a virtual space ID, a start address within the space, and a length.
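For illustration only, the two logical address forms can be sketched as simple data structures; the field names below are assumptions for illustration, not the embodiment's actual layout:

```python
# Hypothetical sketch of the two logical address forms; field names are
# illustrative assumptions, not the embodiment's actual on-wire format.
from dataclasses import dataclass

@dataclass
class LunAddress:      # LUN-semantics scenario
    lun_id: int        # identifies the logical unit number (LUN)
    lba: int           # logical block address of the first block written
    length: int        # length of the data to be written

@dataclass
class MemoryAddress:   # memory-semantics scenario
    space_id: int      # ID of the virtual space
    start: int         # start address within the space
    length: int        # length of the data to be written
```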
In step S502, it is identified that the write data operation is a formatting operation for a logical unit number.
In one possible implementation, the write data operation includes a write-zero operation, i.e., the data carried by the write data operation is all-zero data. The storage node identifies the write data operation as a formatting operation on the logical unit number when it determines that the write data operation on the logical unit number is a write-zero operation.
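For illustration only, a minimal sketch of this check follows; the helper name and the bytes-based payload representation are assumptions, not the embodiment's actual interface:

```python
# Minimal sketch: recognize a write-zero operation by checking that the
# carried payload is all-zero data. The bytes-based interface is assumed.
def is_write_zero(data: bytes) -> bool:
    """Return True if the write carries data and every byte is zero."""
    return len(data) > 0 and not any(data)
```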
In one possible implementation, if the storage node receives multiple write data operations on the logical unit number from the compute node within a preset period, this indicates that a write data operation was received again within the preset period after the storage node first received a write data operation on the logical unit number. Receiving a write data operation again means that the previous data will be overwritten, which may cause the data on the LUN to be incorrectly formatted. Thus, in the above case, the storage node may prevent the data on the LUN from being incorrectly formatted in a read-only manner. That is, the storage node identifies the non-first write data operation as a formatting operation on the logical unit number when it determines that the write data operation on the logical unit number is a non-first write data operation.
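For illustration only, the non-first-write check can be sketched as follows; the per-LUN timestamp table, the clock source, and the 60-second preset period are all assumptions for illustration:

```python
# Minimal sketch: decide whether a write to a LUN is a non-first write,
# i.e., whether another write to the same LUN arrived within a preset
# period. The 60-second period is a hypothetical value.
import time

PRESET_PERIOD_S = 60.0
_last_write: dict[int, float] = {}   # lun_id -> time of the previous write

def is_non_first_write(lun_id: int) -> bool:
    now = time.monotonic()
    prev = _last_write.get(lun_id)
    _last_write[lun_id] = now
    return prev is not None and (now - prev) <= PRESET_PERIOD_S
```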
In another possible implementation, if the storage node determines that the first address carried by the write data operation on the logical unit number falls within the protection area address preset for the logical unit number, the storage node identifies the write data operation as a formatting operation on the logical unit number.
For the setting of the protection area address, please refer to fig. 6, which is a flowchart illustrating the process of setting the protection area address according to an embodiment of the present application. As can be seen from fig. 6, the embodiment of the present application provides three ways of setting the protection area address:
Mode one: a system default address, i.e., the protection area address is preset in the storage node. The protection area address contains logical block addresses of the logical unit number, say 0-100 MB. If an administrator does not set the protection area address of the storage node through the user device, the storage node adopts the system default protection area address.
Mode two: a user-set address, i.e., a user (say, an administrator) manually sets the protection area address. That is, before the storage node receives a write data operation on the logical unit number from the compute node, the administrator may send the manually entered protection area address to the storage node through the user device. Accordingly, the storage node receives an address range from the user device and sets that address range as the protection area address of the logical unit number. The address range runs from a start logical block address to an end logical block address.
Mode three: a system-identified address, i.e., the storage node is configured through the user device to automatically identify the protection area address. That is, after data is written to the logical unit number of the storage node, the storage node may acquire the data written on the logical unit number LUN and identify it in the background. The storage node may then set the protection area address of the logical unit number LUN according to the written data. Further, the storage node can match the written data against the metadata partition information of common databases and virtual machines, and set the logical block addresses obtained from the match as the protection area address of the logical unit number LUN.
Further, the storage node may save the protection area address into a logical unit number LUN object.
It can be appreciated that if the administrator does not set the protection area address of the storage node through the user device, the storage node adopts the system default protection area address. If the administrator does set the protection area address of the storage node through the user device, the most recently set protection area address prevails. For example, at time T1 the administrator sets the protection area address of the storage node in mode two, and at a later time T2 sets it in mode three; at time T2, the mode-three protection area address overwrites the mode-two protection area address. Here T2 is greater than T1.
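For illustration only, this precedence (system default until a later user-set or system-identified address overwrites it) can be sketched as follows; the class, the (start, end) logical-block-address pair, and the timestamp-based overwrite rule are assumptions for illustration:

```python
# Minimal sketch: resolve the effective protection area address from the
# three setting modes, with the most recently set address prevailing.
SYSTEM_DEFAULT = (0, 100 * 2**20)     # mode one: e.g. LBAs covering 0-100 MB

class ProtectionZone:
    def __init__(self):
        self._range = SYSTEM_DEFAULT  # used until an explicit setting arrives
        self._set_at = float("-inf")  # time of the most recent setting

    def set_range(self, start: int, end: int, ts: float) -> None:
        # Modes two/three: a user-set or system-identified address; the
        # most recent setting overwrites any earlier one (T2 > T1 above).
        if ts > self._set_at:
            self._range, self._set_at = (start, end), ts

    @property
    def range(self) -> tuple[int, int]:
        return self._range
```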
In one possible implementation, when the storage node determines that the write data operation is a write-zero operation, the write-zero operation carries all-zero data, so the data on the LUN may be incorrectly formatted. The storage node therefore needs to determine whether the data range of the all-zero data carried by the write-zero operation overlaps the protection area address on the LUN. If the storage node determines that the data range and the protection area address overlap, indicating that the protected data area on the LUN would be formatted, step S503 is executed; if the storage node determines that they do not overlap, it continues to issue the write-zero operation as normal.
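For illustration only, the overlap test is a standard interval-intersection check; a minimal sketch, assuming half-open byte ranges, is:

```python
# Minimal sketch: does the all-zero write's data range intersect the
# protection area address? Ranges are treated as half-open intervals.
def overlaps(write_start: int, write_len: int,
             zone_start: int, zone_end: int) -> bool:
    return write_start < zone_end and zone_start < write_start + write_len
```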
In step S503, the logical unit number is protected.
Specifically, when the storage node determines that the first address carried by the write data operation falls within the protection area address preset for the logical unit number, the data in the LUN may be incorrectly formatted. Therefore, the storage node needs to execute a protection policy for the logical unit number LUN.
In one possible implementation, the storage node triggers internal protection snapshot activation for the logical unit number, so as to back up in advance the data that may be incorrectly formatted. Alternatively, when the logical unit number carries a dual-write feature service, the storage node disconnects the remote device associated with the dual-write feature service. In this way, even if the device at one end of the dual-write service is incorrectly formatted, the data on the device at the other end will not be destroyed.
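For illustration only, one possible reading of the "or" above is a dispatch on whether the logical unit number carries a dual-write feature service; the following sketch expresses that reading, with hypothetical callback hooks standing in for the storage node's internal snapshot and replication machinery, which the embodiment does not expose as an API:

```python
# Minimal sketch: execute the protection policy on a LUN. The callables
# are hypothetical hooks for snapshot activation and for disconnecting
# the dual-write remote device.
from typing import Callable

def protect_lun(lun_id: int, has_dual_write: bool,
                activate_snapshot: Callable[[int], None],
                disconnect_remote: Callable[[int], None]) -> None:
    if has_dual_write:
        # Disconnect the remote device so an erroneous format at the
        # local end cannot destroy the data at the other end.
        disconnect_remote(lun_id)
    else:
        # Back up the data that may be incorrectly formatted by
        # activating the LUN's internal protection snapshot.
        activate_snapshot(lun_id)
```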
After the storage node determines that the data range and the protection area address overlap, and before it executes the protection policy on the logical unit number, the storage node may also generate alarm information including an alarm time and the initiator port identification of the compute node. It will be appreciated that the alarm information is used to prompt a user (say, an administrator) to perform alarm handling.
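For illustration only, the alarm information can be sketched as a minimal record carrying the two fields named above; the structure, field types, and helper are assumptions, since the embodiment only names the fields:

```python
# Minimal sketch: alarm information carrying the two fields named above,
# the alarm time and the initiator port identification of the compute node.
from dataclasses import dataclass
import time

@dataclass
class AlarmInfo:
    alarm_time: float          # when the overlap was detected
    initiator_port_id: str     # identifies the compute node's initiator port

def make_alarm(initiator_port_id: str) -> AlarmInfo:
    return AlarmInfo(alarm_time=time.time(),
                     initiator_port_id=initiator_port_id)
```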
Illustratively, alarm handling includes expected alarm handling and unexpected alarm handling. The administrator may determine the corresponding compute node based on the initiator port identification contained in the alarm information and judge whether the behavior of the compute node is expected or unexpected. An expected alarm indicates that the compute node is performing a normal write-zero operation on the storage node, for example when the source LUN data need not be preserved and the LUN needs to be formatted. Expected alarm handling includes one or more of the following operations: the administrator sends an instruction to delete the alarm information to the storage node, and the storage node deletes the alarm; the administrator sends an instruction to delete the internal snapshot to the storage node, and the storage node deletes the snapshot; the administrator sends an instruction to resume the dual-write feature service to the storage node, and the storage node resumes the service.
An unexpected alarm indicates that the compute node should not be performing a formatting operation on the storage node, suggesting an intentional or unintentional command misoperation, virus software, or the like. Unexpected alarm handling includes one or more of the following operations: the administrator isolates the abnormal compute node and restores the source LUN data by internal snapshot rollback; if a dual-write feature service exists, the administrator can recover the data on the logical unit number by overwriting the local device data with the data of the remote device.
Referring to fig. 7, fig. 7 is a flowchart illustrating the process of executing a protection policy on a logical unit number according to an embodiment of the present application. As can be seen from fig. 7, the compute node issues a write data operation on the logical unit number to the storage node. After receiving the write data operation, the storage node determines whether the data carried by the operation is all-zero data; if not, the storage node continues with the subsequent handling of the write data operation. If the data is all-zero data, the storage node determines whether the first address of the carried data overlaps the protection area address preset for the logical unit number. If there is no overlap, the storage node continues with the subsequent handling of the write data operation. If there is an overlap, the storage node generates alarm information including the alarm time and the initiator port identification of the compute node, and the administrator can perform alarm handling according to the alarm information.
The alarm information generated by the storage node indicates that the logical unit number LUN may be incorrectly formatted, so the storage node triggers activation of the internal protection snapshot; alternatively, if the logical unit number LUN carries a dual-write feature service (say an active-active service, synchronous replication, etc.), the storage node disconnects the remote device associated with the dual-write feature service and sets it to manual recovery mode. After performing this processing, the storage node can continue with the subsequent handling of the write data operation.
Next, the administrator's handling after discovering the alarm is described. The administrator can find the corresponding compute node according to the initiator port identification contained in the alarm information and determine whether the behavior of the compute node is expected or unexpected. If it is an expected, normal write-zero operation (e.g., the data in the source logical unit number LUN need not be preserved and the LUN needs to be formatted), the administrator sends an instruction to delete the alarm information to the storage node, and the storage node deletes the alarm; the administrator sends an instruction to delete the internal snapshot to the storage node, and the storage node deletes the snapshot; the administrator sends an instruction to resume the dual-write feature service to the storage node, and the storage node resumes the service.
If it is an unexpected data write, the administrator may isolate the abnormal compute node, and the storage node may restore the data in the logical unit number by internal snapshot rollback. If a dual-write feature service exists, the storage node may recover the data in the logical unit number by overwriting the local device data with the remote device data.
The foregoing has described the method of the embodiments of the present application in detail; the apparatus of the embodiments of the present application is provided below.
Referring to fig. 8, fig. 8 is a schematic structural diagram of a storage node 80 according to an embodiment of the present application. The storage node 80 is configured to implement the aforementioned method for controlling incorrect formatting of a logical unit number, for example the method in the embodiment shown in fig. 5.
In a possible implementation, the storage node 80 may include a communication unit 801 and a processing unit 802.
In one possible implementation, the communication unit 801 is configured to receive a write data operation on a logical unit number;
the processing unit 802 is configured to identify that the write data operation is a formatting operation for the logical unit number;
the processing unit 802 is further configured to protect the logical unit number.
In yet another possible implementation, the processing unit 802 is specifically configured to:
identify the write data operation as a formatting operation on the logical unit number when the write data operation is determined to be a write-zero operation.
In yet another possible implementation, the processing unit 802 is specifically configured to: identify the non-first write data operation as a formatting operation on the logical unit number when the write data operation is determined to be a non-first write data operation.
In yet another possible implementation, the processing unit 802 is specifically configured to: determine that the first address carried by the write data operation falls within the protection area address preset for the logical unit number.
In yet another possible embodiment, the protection area address comprises one or more of the following: a system default address, a user-set address, and a system-identified address.
In yet another possible implementation, the processing unit 802 is specifically configured to: trigger internal snapshot activation for the logical unit number; or
disconnect the remote device associated with the dual-write feature service when the logical unit number carries a dual-write feature service.
The present application also provides a computer-readable storage medium having instructions stored therein that, when executed on at least one processor, implement the aforementioned method for controlling incorrect formatting of a logical unit number, such as the method shown in fig. 5.
The present application also provides a computer program product comprising computer instructions that, when executed by a computing device, implement the aforementioned method for controlling incorrect formatting of a logical unit number, such as the method shown in fig. 5.
In embodiments of the application, words such as "exemplary" or "such as" are used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "for example" should not be construed as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary" or "such as" is intended to present related concepts in a concrete fashion.
Reference to "at least one" in embodiments of the application means one or more, and "a plurality" means two or more. "at least one of" or the like means any combination of these items, including any combination of single item(s) or plural items(s). For example, at least one (one) of a, b, or c may represent: a. b, c, (a and b), (a and c), (b and c), or (a and b and c), wherein a, b, c may be single or plural. "and/or", describes an association relationship of an association object, and indicates that there may be three relationships, for example, a and/or B, and may indicate: three cases of a alone, a and B together, and B alone, wherein A, B may be singular or plural. The character "/" generally indicates that the context-dependent object is an "or" relationship.
And, unless otherwise indicated, ordinal numbers such as "first" and "second" are used in embodiments of the present application to distinguish between multiple objects, not to limit the sequence, timing, priority, or importance of those objects. For example, "first user device" and "second user device" are merely for convenience of description and do not represent differences in structure or importance; in some embodiments, the first user device and the second user device may even be the same device.
As used in the above embodiments, the term "when …" may be interpreted, depending on the context, as "if …", "after …", "in response to determining …", or "in response to detecting …". The foregoing description of the preferred embodiments of the present application is not intended to limit the application, but is intended to cover any modifications, equivalents, alternatives, and improvements within the spirit and principles of the application.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program for instructing relevant hardware, where the program may be stored in a computer readable storage medium, and the storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.

Claims (14)

1. A method for controlling incorrect formatting of a logical unit number, comprising:
receiving a write data operation on the logical unit number;
identifying that the write data operation is a formatting operation for the logical unit number;
and protecting the logical unit number.
2. The method of claim 1, wherein identifying that the write data operation is a formatting operation for the logical unit number comprises:
identifying the write data operation as a formatting operation on the logical unit number when the write data operation is determined to be a write-zero operation.
3. The method of claim 1, wherein identifying that the write data operation is a formatting operation for the logical unit number comprises:
identifying the non-first write data operation as a formatting operation on the logical unit number when the write data operation is determined to be a non-first write data operation.
4. The method of any one of claims 1 to 3, wherein identifying that the write data operation is a formatting operation for the logical unit number comprises:
determining that the first address carried by the write data operation falls within a protection area address preset for the logical unit number.
5. The method of claim 4, wherein the protection area address comprises one or more of the following: a system default address, a user-set address, and a system-identified address.
6. The method of any one of claims 1 to 5, wherein protecting the logical unit number comprises:
triggering internal snapshot activation for the logical unit number; or
disconnecting the remote device associated with the dual-write feature service when the logical unit number carries a dual-write feature service.
7. A storage node, comprising:
the communication unit is configured to receive a write data operation on the logical unit number;
the processing unit is configured to identify that the write data operation is a formatting operation for the logical unit number;
and the processing unit is further configured to protect the logical unit number.
8. The storage node of claim 7, wherein the processing unit is specifically configured to:
identify the write data operation as a formatting operation on the logical unit number when the write data operation is determined to be a write-zero operation.
9. The storage node of claim 7, wherein the processing unit is specifically configured to:
identify the non-first write data operation as a formatting operation on the logical unit number when the write data operation is determined to be a non-first write data operation.
10. The storage node of any one of claims 7 to 9, wherein the processing unit is specifically configured to:
determine that the first address carried by the write data operation falls within a protection area address preset for the logical unit number.
11. The storage node of claim 10, wherein the protection area address comprises one or more of the following: a system default address, a user-set address, and a system-identified address.
12. The storage node of any one of claims 7 to 11, wherein the processing unit is specifically configured to:
trigger internal snapshot activation for the logical unit number; or
disconnect the remote device associated with the dual-write feature service when the logical unit number carries a dual-write feature service.
13. A storage node, comprising a processor and a memory;
wherein the memory stores a computer program;
and when the processor executes the computer program, the storage node performs the method of any one of claims 1 to 6.
14. A computer-readable storage medium having instructions stored therein that, when executed on at least one processor, implement the method of any one of claims 1 to 6.