CN116560904A - NAS data backup disaster recovery method, system, terminal and storage medium

Info

Publication number: CN116560904A
Application number: CN202310370794.XA
Authority: CN
Original language: Chinese (zh)
Prior art keywords: storage node, data, file system, service, disaster recovery
Legal status: Pending
Inventor: 李超
Applicant / current assignee: Suzhou Inspur Intelligent Technology Co Ltd
Priority / filing date: 2023-04-07
Publication date: 2023-08-08

Classifications

    • H04L67/1095 Replication or mirroring of data, e.g. scheduling or transport for data synchronisation between network nodes
    • G06F11/1448 Management of the data involved in backup or backup restore
    • G06F16/1815 Journaling file systems
    • G06F3/065 Replication mechanisms
    • G06F3/067 Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]

Abstract

The invention relates to the technical field of data storage, and provides a NAS data backup and disaster recovery method, system, terminal and storage medium. The backup method comprises: freezing the file system of the primary storage node during data synchronization and caching IO requests from the client, the file system being unable to write or modify data while frozen; synchronously copying the data of the primary storage node to the disaster recovery storage node; and, after confirming that the data copy has completed, thawing the file system and executing the cached IO requests on it. By caching IO requests during data backup, the invention avoids losing IO requests and ensures service continuity.

Description

NAS data backup disaster recovery method, system, terminal and storage medium
Technical Field
The invention relates to the technical field of data storage, and in particular to a NAS data backup and disaster recovery method, system, terminal and storage medium.
Background
NAS (Network Attached Storage) refers to a network-connected device dedicated to data storage, also called "network storage". As NAS services become widely deployed, customers' demand for remote (off-site) NAS disaster recovery keeps growing. Remote disaster recovery means that when the NAS service in one of a user's data centers fails, the file service can be switched over to the storage system of a disaster recovery site in the same city or in another location, ensuring service continuity and preserving the data as completely as possible.
Existing NAS disaster recovery functions have to cut off the client service while data synchronization is in progress, so client requests issued during that period may be lost. In addition, when the client service is switched to the storage system at the remote disaster recovery site, the target storage system must be re-configured from scratch and the binding between the client and the target storage system re-established.
Disclosure of Invention
To address these deficiencies in the prior art, the invention provides a NAS data backup and disaster recovery method, system, terminal and storage medium.
In a first aspect, the present invention provides a NAS data backup method, comprising:
freezing the file system of the primary storage node during data synchronization and caching the client's IO requests, wherein the file system cannot write or modify data while frozen;
synchronously copying the data of the primary storage node to the disaster recovery storage node;
and, after confirming that the data copy has completed, thawing the file system and executing the cached IO requests on the file system.
In an alternative embodiment, freezing the file system of the primary storage node and caching the client's IO requests during data synchronization comprises:
periodically performing data synchronization based on a pre-configured timing period;
intercepting the client's IO requests and caching them in a journaling file system;
monitoring the file system state and, if the file system is frozen, suspending the reading and execution of the IO requests in the journaling file system.
In an alternative embodiment, synchronously copying the data of the primary storage node to the disaster recovery storage node comprises:
creating a primary volume snapshot for the primary volume of the primary storage node;
copying the primary volume data of the primary storage node to the secondary volume of the disaster recovery storage node, the secondary volume having a data-synchronized secondary volume snapshot.
In an alternative embodiment, confirming that the data copy has completed, thawing the file system, and executing the cached IO requests on the file system comprises:
receiving the notification, fed back by the primary storage node, that the synchronous copy has completed, and lifting the frozen state of the file system;
and extracting and executing the cached IO requests in the order in which they were cached.
In a second aspect, the present invention provides a NAS data disaster recovery method, comprising:
confirming a failure of the primary storage node, stopping the data synchronization procedure between the disaster recovery storage node and the primary node, and configuring the backup file system of the disaster recovery storage node to be writable;
configuring, on the disaster recovery storage node, the same service IP and service parameters as the primary storage node, based on the network attached storage configuration file of the primary storage node;
configuring a shared service on the disaster recovery storage node, and mounting the client service on the shared service;
confirming that the failure of the primary storage node has been cleared, reversely synchronizing the data of the disaster recovery storage node back to the primary storage node, and restoring the mount relationship between the client service and the primary storage node.
In an alternative embodiment, configuring a shared service on the disaster recovery storage node and mounting the client service on the shared service comprises:
after the NFS service is enabled, creating an NFS share by configuring the storage system to join the corresponding domain environment;
after the CIFS service is enabled, creating a CIFS share by creating a local user group or configuring the storage system to join an Active Directory domain environment;
establishing communication with the client based on the service IP of the disaster recovery storage node, and mounting the client service on the NFS share or the CIFS share.
In a third aspect, the present invention provides a NAS data backup system, comprising:
an environment construction module, used to freeze the file system of the primary storage node during data synchronization and cache the client's IO requests, wherein the file system cannot write or modify data while frozen;
a data synchronization module, used to synchronously copy the data of the primary storage node to the disaster recovery storage node;
and a service recovery module, used to confirm that the data copy has completed, thaw the file system and execute the cached IO requests on the file system.
In an alternative embodiment, the environment construction module comprises:
a periodic start unit, used to periodically perform data synchronization based on a pre-configured timing period;
a request caching unit, used to intercept the client's IO requests and cache them in a journaling file system;
and a state monitoring unit, used to monitor the file system state and, if the file system is frozen, suspend the reading and execution of the IO requests in the journaling file system.
In an alternative embodiment, the data synchronization module comprises:
a snapshot creation unit, used to create a primary volume snapshot for the primary volume of the primary storage node;
and a data copying unit, used to copy the primary volume data of the primary storage node to the secondary volume of the disaster recovery storage node, the secondary volume having a data-synchronized secondary volume snapshot.
In an alternative embodiment, the service recovery module comprises:
a thawing execution unit, used to receive the notification, fed back by the primary storage node, that the synchronous copy has completed, and to lift the frozen state of the file system;
and a service recovery unit, used to extract and execute the cached IO requests in the order in which they were cached.
In a fourth aspect, the present invention provides a NAS data disaster recovery system, comprising:
a basic setting module, used to confirm a failure of the primary storage node, stop the data synchronization procedure between the disaster recovery storage node and the primary node, and configure the backup file system of the disaster recovery storage node to be writable;
a service configuration module, used to configure, on the disaster recovery storage node, the same service IP and service parameters as the primary storage node, based on the network attached storage configuration file of the primary storage node;
a shared mount module, used to configure a shared service on the disaster recovery storage node and mount the client service on the shared service;
and an automatic recovery module, used to confirm that the failure of the primary storage node has been cleared, reversely synchronize the data of the disaster recovery storage node back to the primary storage node and restore the mount relationship between the client service and the primary storage node.
In an alternative embodiment, the shared mount module comprises:
a first creation unit, used to create an NFS share, after the NFS service is enabled, by configuring the storage system to join the corresponding domain environment;
a second creation unit, used to create a CIFS share, after the CIFS service is enabled, by creating a local user group or configuring the storage system to join an Active Directory domain environment;
and a service mounting unit, used to establish communication with the client based on the service IP of the disaster recovery storage node and mount the client service on the NFS share or the CIFS share.
In a fifth aspect, there is provided a terminal, comprising:
a processor and a memory, wherein
the memory is used to store a computer program, and
the processor is configured to call and run the computer program from the memory, so that the terminal performs the methods described above.
In a sixth aspect, there is provided a computer storage medium having instructions stored therein which, when run on a computer, cause the computer to perform the methods of the above aspects.
The NAS data backup and disaster recovery method, system, terminal and storage medium provided by the invention have the beneficial effect that IO requests are cached during data backup so that no IO request is lost; at the same time, a shared service is created on the disaster recovery storage node and the client service is mounted on it, so that client service processing resumes as quickly as possible, improving the service continuity of remote NAS data disaster recovery.
In addition, the invention has a reliable design principle and a simple structure, and has very broad application prospects.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings required for describing the embodiments or the prior art are briefly introduced below; it will be obvious to those skilled in the art that other drawings can be derived from these drawings without inventive effort.
FIG. 1 is a schematic flow chart diagram of a NAS data backup method of one embodiment of the invention.
Fig. 2 is a schematic flow chart of a NAS data disaster recovery method according to one embodiment of the present invention.
FIG. 3 is a schematic flow chart of specific steps performed by one embodiment of the invention.
Fig. 4 is an exemplary diagram of a NAS data disaster recovery method according to one embodiment of the present invention.
Fig. 5 is an exemplary diagram of an automatic recovery procedure of a NAS data disaster recovery method according to an embodiment of the present invention.
FIG. 6 is a schematic block diagram of a NAS data backup system of one embodiment of the present invention.
FIG. 7 is a schematic block diagram of a NAS data disaster recovery system according to one embodiment of the present invention.
Fig. 8 is a schematic structural diagram of a terminal according to an embodiment of the present invention.
Detailed Description
To make the technical solution of the present invention better understood by those skilled in the art, the technical solution of the present invention is described clearly and completely below with reference to the accompanying drawings of the embodiments. The described embodiments are obviously only some, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art on the basis of the embodiments of the present invention without inventive effort fall within the scope of protection of the present invention.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used herein in the description of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention.
The following explains key terms appearing in the present invention.
NAS (Network Attached Storage) refers to a network-connected device dedicated to data storage, also called "network storage"; it is a dedicated data storage server. It is data-centric: it completely decouples the storage device from the server and manages data centrally, thereby freeing up bandwidth, improving performance, lowering total cost of ownership and protecting investment. Its cost is far lower than that of server-based storage, while its efficiency is far higher.
XFS is a high-performance journaling file system that is highly scalable and very robust. Its main characteristics are as follows. Crash recovery: because XFS keeps a journal, files on disk are not corrupted by an unexpected shutdown, and regardless of how many files and how much data the file system currently holds, it can quickly restore the disk contents from the recorded journal. Low overhead: XFS uses optimized algorithms, so journaling has very little impact on overall file operations; lookup and space allocation are very fast, and the file system consistently delivers fast response times. Scalability: XFS is a fully 64-bit file system that can address millions of terabytes of storage; it handles very large as well as very small files well and supports an extremely large number of directories. The maximum supported file size is 2^63 bytes, roughly 9x10^18 bytes (about 9 exabytes), and the maximum file system size is 18 exabytes. Its B+ tree structures allow fast lookups and fast space allocation, so the file system can sustain high-speed operation, and performance is not limited by the number of directories or the number of files within a directory. Transfer bandwidth: XFS can store data at a performance close to that of raw device I/O; in single file system tests, throughput can reach 7 GB per second, and single-file read/write throughput can reach 4 GB per second.
The Common Internet File System (CIFS) protocol allows programs to access files on remote computers over the Internet and to request services from those computers. CIFS uses a client/server model: the client requests a service from a server program located on a remote server, and the server fulfils the request and returns a response. CIFS is a public, open version of the SMB protocol used by Microsoft; SMB is a protocol for file access and printing on servers over a local area network. CIFS operates at a high level and can be viewed as an application-level protocol comparable to the File Transfer Protocol or the Hypertext Transfer Protocol.
In enterprise network construction, an AD domain (Active Directory domain) is often used to manage the PC terminals in a network in a unified way. Within an AD domain, the domain controller (DC) holds a database of information such as domain accounts, passwords, and the computers that belong to the domain.
The NAS data backup method provided by the embodiments of the invention is executed by a computer device, and correspondingly the NAS data backup system runs on that computer device.
FIG. 1 is a schematic flow chart of a method according to one embodiment of the invention. The executing entity of FIG. 1 may be a NAS data backup system. The order of the steps in the flow chart may be changed, and some steps may be omitted, according to different needs.
As shown in fig. 1, the method includes:
Step 110, freeze the file system of the primary storage node during data synchronization and cache the client's IO requests, wherein the file system cannot write or modify data while frozen;
Step 120, synchronously copy the data of the primary storage node to the disaster recovery storage node;
Step 130, after confirming that the data copy has completed, thaw the file system and execute the cached IO requests on the file system.
Fig. 2 is a schematic flow chart of a NAS data disaster recovery method according to one embodiment of the present invention. The executing entity of FIG. 2 may be a NAS data disaster recovery system. The order of the steps in the flow chart may be changed, and some steps may be omitted, according to different needs.
As shown in fig. 2, the method includes:
Step 210, confirm a failure of the primary storage node, stop the data synchronization procedure between the disaster recovery storage node and the primary node, and configure the backup file system of the disaster recovery storage node to be writable;
Step 220, configure, on the disaster recovery storage node, the same service IP and service parameters as the primary storage node, based on the network attached storage configuration file of the primary storage node;
Step 230, configure a shared service on the disaster recovery storage node and mount the client service on the shared service;
Step 240, confirm that the failure of the primary storage node has been cleared, reversely synchronize the data of the disaster recovery storage node back to the primary storage node, and restore the mount relationship between the client service and the primary storage node.
To facilitate understanding of the present invention, the NAS data backup and disaster recovery method provided by the invention is further described below with reference to the process of backing up NAS data and recovering from a disaster in the embodiments.
Specifically, referring to FIG. 3, the NAS data backup and disaster recovery method includes:
s1, freezing a file system of a main storage node during data synchronization, and caching IO requests of a user side, wherein the file system cannot write or modify data in a frozen state.
First, the client configures a period of timing synchronization of NAS service, for example, 1h is performed once, and each time data synchronization is performed, an execution record is generated in a log.
Buffering IO requests sent by a user side into a log file system (XFS), setting a storage capacity threshold of the log file system, and executing the cleaning of the earliest buffered IO requests once the actual storage capacity reaches the threshold.
During normal business processing, IO requests are sequentially read from the log file system and executed, and the executed IO requests are deleted from the log file system.
During the execution of the data synchronization, the file system of the primary storage node is frozen, and the file system cannot write or modify data in the frozen state. The request to read data can still be performed normally at this time. And the read data request can be screened out from the log file system and processed normally.
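By way of illustration only, the following Python sketch shows one way the client-side IO caching described above could be organized: intercepted requests are persisted into a journal directory (assumed to live on the XFS log file system), the oldest entries are evicted once a capacity threshold is reached, and replay is paused while the NAS file system is frozen. All class, function and path names are hypothetical; the patent does not prescribe an implementation.

```python
import json
import os
import time
from collections import deque

class IORequestCache:
    """Minimal sketch of the client IO caching described above (hypothetical names).
    Write/modify requests are appended to a journal directory; read requests are
    expected to bypass the cache entirely and are therefore never journaled."""

    def __init__(self, journal_dir, capacity_bytes=1 << 30):
        self.journal_dir = journal_dir
        self.capacity_bytes = capacity_bytes   # storage threshold for the cache
        self.queue = deque()                   # oldest cached request at the left
        self.frozen = False                    # mirrors the file-system freeze state
        os.makedirs(journal_dir, exist_ok=True)

    def cache_request(self, request: dict):
        """Persist one intercepted IO request, evicting the earliest if over capacity."""
        path = os.path.join(self.journal_dir, f"{time.time_ns()}.req")
        with open(path, "w") as f:
            json.dump(request, f)
        self.queue.append(path)
        while self._used_bytes() > self.capacity_bytes and self.queue:
            os.remove(self.queue.popleft())    # clean out the earliest cached request

    def _used_bytes(self):
        return sum(os.path.getsize(p) for p in self.queue if os.path.exists(p))

    def replay(self, apply_fn):
        """Execute cached requests in arrival order; pause while the file system is frozen."""
        while self.queue:
            if self.frozen:
                time.sleep(0.1)                # suspend reading/executing during the freeze
                continue
            path = self.queue.popleft()
            with open(path) as f:
                apply_fn(json.load(f))         # write the request to the NAS file system
            os.remove(path)                    # executed requests are deleted from the journal
```

Because read requests never enter the journal in this sketch, they continue to be served even while write replay is suspended, matching the behaviour described above.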
S2, synchronously copying the data of the primary storage node to the disaster recovery storage node.
Specifically, once the file system has been frozen successfully, the process of locally creating a snapshot volume of the primary volume is triggered, and at the same time the data of the primary volume is copied to the secondary volume of the disaster recovery storage node. A secondary volume snapshot is also created for the secondary volume of the disaster recovery storage node as a second-level backup.
The specific method of copying the primary volume data to the secondary volume of the disaster recovery storage node is as follows: locate the storage location of the primary volume data and, based on the recorded data updates and the execution time of the previous data synchronization, copy the data updated between the previous synchronization and the current moment to the secondary volume of the disaster recovery storage node as the target data.
A primary volume snapshot is thus created for the primary volume of the primary storage node and a secondary volume snapshot for the secondary volume of the disaster recovery storage node, and the data integrity of the primary and secondary volumes is guaranteed by verifying data consistency against the snapshot volumes.
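A minimal sketch of one such synchronization cycle is given below, assuming the exported file system is XFS on an LVM volume and using xfs_freeze, lvcreate and rsync as stand-ins for the snapshot and copy mechanisms, which the patent does not name; host names, volume names and paths are illustrative only.

```python
import subprocess
import time

def run(cmd):
    """Run a shell command and raise on failure; helper for this sketch."""
    subprocess.run(cmd, shell=True, check=True)

def sync_cycle(mountpoint="/export/nas",
               primary_lv="/dev/vg0/nas_primary",
               dr_host="dr-node", dr_path="/export/nas_secondary"):
    """Hypothetical cycle: freeze, snapshot the primary volume, copy changed data
    to the secondary volume, snapshot the secondary volume, then thaw."""
    stamp = time.strftime("%Y%m%d%H%M%S")
    run(f"xfs_freeze -f {mountpoint}")              # freeze: no writes or modifications
    try:
        # local primary-volume snapshot, later usable for data-consistency verification
        run(f"lvcreate -s -L 10G -n primary_snap_{stamp} {primary_lv}")
        # copy only the data changed since the previous cycle to the secondary volume
        run(f"rsync -a --delete {mountpoint}/ {dr_host}:{dr_path}/")
        # second-level backup: snapshot the secondary volume on the DR node
        run(f"ssh {dr_host} lvcreate -s -L 10G -n secondary_snap_{stamp} /dev/vg0/nas_secondary")
    finally:
        run(f"xfs_freeze -u {mountpoint}")          # thaw once the copy has finished (or failed)
```

Because the file system is frozen before the snapshot is taken and the copy starts, the data transferred to the disaster recovery node reflects a consistent point-in-time image.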
S3, confirming that the data copy has completed, thawing the file system, and executing the cached IO requests on the file system.
The notification, fed back by the primary storage node, that the synchronous copy has completed is received, and the frozen state of the file system is lifted; the cached IO requests are then extracted and executed in the order in which they were cached.
Specifically, the primary storage node feeds back the notification once it detects that the data transfer has completed. After the file system is thawed, the file system of the primary storage node can write and modify data again, and processing of the cached IO requests resumes; at this point the IO data cached in XFS can be written to the NAS file system.
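Reusing the IORequestCache sketch from step S1, a hypothetical handler for this completion notification might look as follows; the function and parameter names are assumptions, not part of the disclosed method.

```python
def on_copy_complete(cache, apply_fn):
    """Handle the 'synchronous copy finished' notification fed back by the primary
    storage node. The actual xfs_freeze -u call is issued by the component that
    froze the file system (see the previous sketch); here the cached requests are
    simply released for replay."""
    cache.frozen = False        # lift the frozen state seen by the replay loop
    cache.replay(apply_fn)      # execute cached IO requests in cache-time order
```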
S4, confirming a failure of the primary storage node, stopping the data synchronization procedure between the disaster recovery storage node and the primary node, and configuring the backup file system of the disaster recovery storage node to be writable.
The failure of the primary storage node can be confirmed by, for example, receiving an offline notification from the primary storage node, or repeatedly receiving notifications that requests sent to the primary storage node have failed.
After the failure of the primary storage node is confirmed, the replication relationship on the disaster recovery storage node is stopped and its file system is designated writable.
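A possible take-over step on the disaster recovery node is sketched below; the systemd unit name and mount point are placeholders, since the patent does not specify how the replication relationship is implemented.

```python
import subprocess

def take_over_on_failure(dr_mount="/export/nas_secondary"):
    """Hypothetical failover step on the disaster recovery node: stop the periodic
    replication and remount the backup file system read-write."""
    # assumed unit name for the timer that drives the periodic synchronization
    subprocess.run(["systemctl", "stop", "nas-replication.timer"], check=True)
    # designate the backup file system as writable
    subprocess.run(["mount", "-o", "remount,rw", dr_mount], check=True)
```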
S5, configuring, on the disaster recovery storage node, the same service IP and service parameters as the primary storage node, based on the network attached storage configuration file of the primary storage node.
The NAS service IP is configured on the disaster recovery storage node; this IP is used to establish the network communication connection with the client.
The disaster recovery storage node is then configured quickly from the relevant service parameters (for example, the designated service type) in the pre-stored configuration file of the primary storage node, so that the NAS service configuration of the disaster recovery storage node is consistent with that of the primary storage node.
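The sketch below illustrates this idea under the assumption that the primary node's NAS configuration has been exported to a JSON file in advance; the file path and field names are hypothetical.

```python
import json
import subprocess

def apply_primary_config(config_path="/etc/nas/primary_config.json", iface="eth0"):
    """Sketch of S5: bring up the primary node's service IP on the DR node and hand
    back the remaining service parameters (e.g. the designated service type)."""
    with open(config_path) as f:
        cfg = json.load(f)
    # same service IP as the primary node, so clients need no reconfiguration
    subprocess.run(["ip", "addr", "add",
                    f"{cfg['service_ip']}/{cfg['prefix_len']}", "dev", iface],
                   check=True)
    return cfg["service_type"], cfg.get("share_params", {})   # e.g. "nfs" or "cifs"
```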
S6, configuring a shared service on the disaster recovery storage node, and mounting the client service on the shared service.
After the NFS service is enabled, an NFS share is created by configuring the storage system to join the corresponding domain environment; after the CIFS service is enabled, a CIFS share is created by creating a local user group or configuring the storage system to join an Active Directory domain environment. Communication with the client is then established based on the service IP of the disaster recovery storage node, and the client service is mounted on the NFS share or the CIFS share.
Specifically, as shown in FIG. 4, the disaster recovery storage node configures the sharing-related information (domain information / local user information / share creation) and establishes a shared resource consistent with the client service: if the service uses NFS (Network File System), an NFS share is configured; if the service uses CIFS (Common Internet File System), a CIFS share is configured.
The share creation method for the NFS service is: enable the NFS service and, when the service site uses NIS or LDAP domain authentication, configure the storage system to join the corresponding domain environment to complete creation of the NFS share.
The share creation method for the CIFS service is: enable the CIFS service and create local users (groups) or configure the storage system to join the AD domain environment, thereby completing creation of the CIFS share.
After the configuration is completed, the client is mounted on the shared service of the disaster recovery storage node, and the data storage service continues to be executed.
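For illustration, the following sketch recreates a share of the same type as the original client service, using exportfs and Samba as stand-ins for the storage system's own share-management interface; the NIS/LDAP domain-join step is vendor-specific and is therefore only indicated by a comment.

```python
import subprocess

def create_share(service_type, share_path="/export/nas_secondary",
                 share_name="nas_share", ad_domain=None):
    """Sketch of S6: recreate, on the DR node, a share of the same type the client
    originally used (names and paths are illustrative)."""
    if service_type == "nfs":
        # Joining the NIS/LDAP domain is done with the storage system's own tooling
        # and is omitted here; only the export itself is shown.
        subprocess.run(["exportfs", "-o", "rw,sync", f"*:{share_path}"], check=True)
    elif service_type == "cifs":
        if ad_domain:
            # join the AD domain; otherwise local users/groups would be created instead
            subprocess.run(["net", "ads", "join", "-U", "Administrator"], check=True)
        with open("/etc/samba/smb.conf", "a") as f:
            f.write(f"\n[{share_name}]\npath = {share_path}\nwritable = yes\n")
        subprocess.run(["systemctl", "reload", "smb"], check=True)  # unit name varies by distro
    else:
        raise ValueError(f"unknown service type: {service_type}")
```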
S7, confirming that the failure of the primary storage node has been cleared, reversely synchronizing the data of the disaster recovery storage node back to the primary storage node, and restoring the mount relationship between the client service and the primary storage node.
As shown in FIG. 5, when the service site is restored, the service needs to be switched back to it. If data was written while the disaster recovery site was taking over the service, a reverse synchronization is performed on the replication relationship to synchronize the data added or modified at the disaster recovery site back to the service site. After the synchronization completes, the switch-back to the service site is carried out; if the configuration has changed, it must be modified manually at the service site. Once the switch-back is complete, the service site again takes over the users' mounts and shares.
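A simplified failback sketch is shown below; rsync again stands in for the unspecified reverse-synchronization mechanism, and the host, path and unit names are placeholders.

```python
import subprocess

def fail_back(dr_path="/export/nas_secondary",
              primary_host="primary-node", primary_path="/export/nas"):
    """Sketch of S7: push data written at the DR site back to the restored service
    site, then return the replication relationship to its normal direction."""
    # reverse synchronization: DR-site changes flow back to the service site
    subprocess.run(["rsync", "-a", f"{dr_path}/", f"{primary_host}:{primary_path}/"],
                   check=True)
    # stop serving writes on the DR node and resume the normal replication direction
    subprocess.run(["mount", "-o", "remount,ro", dr_path], check=True)
    subprocess.run(["systemctl", "start", "nas-replication.timer"], check=True)  # assumed unit
```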
In some embodiments, the NAS data backup system 600 may comprise a plurality of functional modules made up of computer program segments. The computer programs of the individual program segments in the NAS data backup system 600 may be stored in the memory of a computer device and executed by at least one processor to perform the NAS data backup functions (described in detail with reference to FIG. 1).
In this embodiment, the NAS data backup system 600 can be divided into a plurality of functional modules according to the functions it performs, as shown in FIG. 6. The functional modules may include: an environment construction module 610, a data synchronization module 620 and a service recovery module 630. A module as referred to in the present invention is a series of computer program segments, stored in a memory, that can be executed by at least one processor and performs a fixed function. The functions of the individual modules are described in detail in the following embodiments.
An environment construction module 610, used to freeze the file system of the primary storage node during data synchronization and cache the client's IO requests, wherein the file system cannot write or modify data while frozen;
a data synchronization module 620, used to synchronously copy the data of the primary storage node to the disaster recovery storage node;
and a service recovery module 630, used to confirm that the data copy has completed, thaw the file system, and execute the cached IO requests on the file system.
Optionally, as an embodiment of the present invention, the environment construction module comprises:
a periodic start unit, used to periodically perform data synchronization based on a pre-configured timing period;
a request caching unit, used to intercept the client's IO requests and cache them in a journaling file system;
and a state monitoring unit, used to monitor the file system state and, if the file system is frozen, suspend the reading and execution of the IO requests in the journaling file system.
Optionally, as an embodiment of the present invention, the data synchronization module comprises:
a snapshot creation unit, used to create a primary volume snapshot for the primary volume of the primary storage node;
and a data copying unit, used to copy the primary volume data of the primary storage node to the secondary volume of the disaster recovery storage node, the secondary volume having a data-synchronized secondary volume snapshot.
Optionally, as an embodiment of the present invention, the service recovery module comprises:
a thawing execution unit, used to receive the notification, fed back by the primary storage node, that the synchronous copy has completed, and to lift the frozen state of the file system;
and a service recovery unit, used to extract and execute the cached IO requests in the order in which they were cached.
In this embodiment, the NAS data disaster recovery system 700 can be divided into a plurality of functional modules according to the functions it performs, as shown in FIG. 7. The functional modules may include: a basic setting module 710, a service configuration module 720, a shared mount module 730 and a service recovery module 740. A module as referred to in the present invention is a series of computer program segments, stored in a memory, that can be executed by at least one processor and performs a fixed function. The functions of the individual modules are described in detail in the following embodiments.
A basic setting module 710, used to confirm a failure of the primary storage node, stop the data synchronization procedure between the disaster recovery storage node and the primary node, and configure the backup file system of the disaster recovery storage node to be writable;
a service configuration module 720, used to configure, on the disaster recovery storage node, the same service IP and service parameters as the primary storage node, based on the network attached storage configuration file of the primary storage node;
a shared mount module 730, used to configure a shared service on the disaster recovery storage node and mount the client service on the shared service;
and a service recovery module 740, used to confirm that the failure of the primary storage node has been cleared, reversely synchronize the data of the disaster recovery storage node back to the primary storage node and restore the mount relationship between the client service and the primary storage node.
Optionally, as an embodiment of the present invention, the shared mount module comprises:
a first creation unit, used to create an NFS share, after the NFS service is enabled, by configuring the storage system to join the corresponding domain environment;
a second creation unit, used to create a CIFS share, after the CIFS service is enabled, by creating a local user group or configuring the storage system to join an Active Directory domain environment;
and a service mounting unit, used to establish communication with the client based on the service IP of the disaster recovery storage node and mount the client service on the NFS share or the CIFS share.
Fig. 8 is a schematic structural diagram of a terminal 800 according to an embodiment of the present invention, where the terminal 800 may be used to execute the NAS data backup disaster recovery method according to the embodiment of the present invention.
The terminal 800 may include a processor 810, a memory 820 and a communication module 830. These components may communicate over one or more buses. Those skilled in the art will understand that the structure shown in the figure does not limit the invention: the terminal may use a bus topology or a star topology, may include more or fewer components than shown, may combine certain components, or may arrange the components differently.
The memory 820 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk or optical disk, and is used to store the instructions executed by the processor 810. When the instructions in the memory 820 are executed by the processor 810, the terminal 800 can perform some or all of the steps of the method embodiments described above.
The processor 810 is the control center of the storage terminal; it connects the various parts of the entire electronic terminal using various interfaces and lines, and performs the various functions of the electronic terminal and/or processes data by running or executing the software programs and/or modules stored in the memory 820 and invoking the data stored in the memory. The processor may consist of one integrated circuit (IC), for example a single packaged IC, or of several packaged ICs with the same or different functions connected together. For example, the processor 810 may include only a central processing unit (CPU). In embodiments of the invention, the CPU may have a single computing core or may include multiple computing cores.
The communication module 830 is used to establish a communication channel so that the storage terminal can communicate with other terminals, receiving user data sent by other terminals or sending user data to them.
The present invention also provides a computer storage medium in which a program may be stored; when executed, the program may perform some or all of the steps of the embodiments provided by the present invention. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), a random access memory (RAM), or the like.
Therefore, the invention avoids losing IO requests by caching them during data backup, and creates a shared service on the disaster recovery storage node so that client service processing is restored as quickly as possible by mounting the client service on it, thereby improving the service continuity of remote NAS data disaster recovery.
Those skilled in the art will appreciate that the techniques of the embodiments of the present invention may be implemented by software plus a necessary general-purpose hardware platform. On this understanding, the technical solution of the embodiments of the present invention, or the part of it that contributes over the prior art, may be embodied in the form of a software product stored on a storage medium capable of holding program code, such as a USB drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk, and including several instructions for causing a computer terminal (which may be a personal computer, a server, a second terminal, a network terminal, or the like) to execute all or some of the steps of the method described in the embodiments of the present invention.
The same or similar parts between the various embodiments in this specification are referred to each other. In particular, for the terminal embodiment, since it is substantially similar to the method embodiment, the description is relatively simple, and reference should be made to the description in the method embodiment for relevant points.
In the several embodiments provided by the present invention, it should be understood that the disclosed systems and methods may be implemented in other ways. For example, the system embodiments described above are merely illustrative: the division into modules is only a division by logical function, and other divisions are possible in an actual implementation; for instance, multiple modules or components may be combined or integrated into another system, or some features may be omitted or not executed. Furthermore, the mutual couplings, direct couplings or communication connections shown or discussed may be implemented through interfaces, and the indirect couplings or communication connections between systems or modules may be electrical, mechanical or in other forms.
The modules described as separate components may or may not be physically separate, and the components shown as modules may or may not be physical modules; that is, they may be located in one place or distributed over a plurality of network nodes. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, the functional modules in the embodiments of the present invention may be integrated into one processing module, each module may exist alone physically, or two or more modules may be integrated into one module.
Although the present invention has been described in detail through preferred embodiments with reference to the accompanying drawings, the present invention is not limited thereto. Various equivalent modifications and substitutions may be made to the embodiments of the present invention by those skilled in the art without departing from the spirit and scope of the present invention, and such modifications and substitutions fall within the scope of the present invention. Therefore, the scope of protection of the present invention shall be determined by the appended claims.

Claims (10)

1. A NAS data backup method, comprising:
freezing the file system of the primary storage node during data synchronization and caching the client's IO requests, wherein the file system cannot write or modify data while frozen;
synchronously copying the data of the primary storage node to the disaster recovery storage node;
and, after confirming that the data copy has completed, thawing the file system and executing the cached IO requests on the file system.
2. The method of claim 1, wherein freezing the file system of the primary storage node and caching the client's IO requests during data synchronization comprises:
periodically performing data synchronization based on a pre-configured timing period;
intercepting the client's IO requests and caching them in a journaling file system;
monitoring the file system state and, if the file system is frozen, suspending the reading and execution of the IO requests in the journaling file system.
3. The method of claim 1, wherein synchronously copying the data of the primary storage node to the disaster recovery storage node comprises:
creating a primary volume snapshot for the primary volume of the primary storage node;
copying the primary volume data of the primary storage node to the secondary volume of the disaster recovery storage node, the secondary volume having a data-synchronized secondary volume snapshot.
4. The method of claim 1, wherein confirming that the data copy has completed, thawing the file system, and executing the cached IO requests on the file system comprises:
receiving the notification, fed back by the primary storage node, that the synchronous copy has completed, and lifting the frozen state of the file system;
and extracting and executing the cached IO requests in the order in which they were cached.
5. A NAS data disaster recovery method, based on the NAS data backup method of any one of claims 1 to 4, comprising:
confirming a failure of the primary storage node, stopping the data synchronization procedure between the disaster recovery storage node and the primary node, and configuring the backup file system of the disaster recovery storage node to be writable;
configuring, on the disaster recovery storage node, the same service IP and service parameters as the primary storage node, based on the network attached storage configuration file of the primary storage node;
configuring a shared service on the disaster recovery storage node, and mounting the client service on the shared service;
confirming that the failure of the primary storage node has been cleared, reversely synchronizing the data of the disaster recovery storage node back to the primary storage node, and restoring the mount relationship between the client service and the primary storage node.
6. The method of claim 5, wherein configuring a shared service on the disaster recovery storage node and mounting the client service on the shared service comprises:
after the NFS service is enabled, creating an NFS share by configuring the storage system to join the corresponding domain environment;
after the CIFS service is enabled, creating a CIFS share by creating a local user group or configuring the storage system to join an Active Directory domain environment;
establishing communication with the client based on the service IP of the disaster recovery storage node, and mounting the client service on the NFS share or the CIFS share.
7. A NAS data backup system, comprising:
an environment construction module, used to freeze the file system of the primary storage node during data synchronization and cache the client's IO requests, wherein the file system cannot write or modify data while frozen;
a data synchronization module, used to synchronously copy the data of the primary storage node to the disaster recovery storage node;
and a service recovery module, used to confirm that the data copy has completed, thaw the file system and execute the cached IO requests on the file system.
8. A NAS data disaster recovery system, comprising:
a basic setting module, used to confirm a failure of the primary storage node, stop the data synchronization procedure between the disaster recovery storage node and the primary node, and configure the backup file system of the disaster recovery storage node to be writable;
a service configuration module, used to configure, on the disaster recovery storage node, the same service IP and service parameters as the primary storage node, based on the network attached storage configuration file of the primary storage node;
a shared mount module, used to configure a shared service on the disaster recovery storage node and mount the client service on the shared service;
and an automatic recovery module, used to confirm that the failure of the primary storage node has been cleared, reversely synchronize the data of the disaster recovery storage node back to the primary storage node and restore the mount relationship between the client service and the primary storage node.
9. A terminal, comprising:
a memory for storing a program;
and a processor, configured to implement the NAS data backup method according to any one of claims 1 to 4, or the NAS data disaster recovery method according to claim 5 or 6, when executing the program.
10. A computer readable storage medium on which a NAS data backup or disaster recovery program is stored, wherein the program, when executed by a processor, implements the steps of the NAS data backup method according to any one of claims 1 to 4 or of the NAS data disaster recovery method according to claim 5 or 6.
Priority Applications (1)

Application Number: CN202310370794.XA
Priority Date / Filing Date: 2023-04-07
Title: NAS data backup disaster recovery method, system, terminal and storage medium

Publications (1)

Publication Number: CN116560904A (en)
Publication Date: 2023-08-08
Legal Status: Pending

Family

ID=87500880

Cited By (2)

* Cited by examiner, † Cited by third party
CN117194566A * (priority date 2023-08-21, published 2023-12-08) and CN117194566B (published 2024-04-19), 泽拓科技(深圳)有限责任公司: Multi-storage engine data copying method, system and computer equipment

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination