CN115118738A - Disaster recovery backup method, device, equipment and medium based on RDMA - Google Patents


Info

Publication number
CN115118738A
CN115118738A (application CN202211049105.7A)
Authority
CN
China
Prior art keywords
memory
data
cluster
rdma
main
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211049105.7A
Other languages
Chinese (zh)
Other versions
CN115118738B (en)
Inventor
李�杰
张卫
赵楠
肖东升
吕琪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Huarui Distributed Technology Co ltd
Original Assignee
Shenzhen Huarui Distributed Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Huarui Distributed Technology Co ltd filed Critical Shenzhen Huarui Distributed Technology Co ltd
Priority to CN202211049105.7A
Publication of CN115118738A
Application granted
Publication of CN115118738B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/10 Protocols in which an application is distributed across nodes in the network
    • H04L 67/1095 Replication or mirroring of data, e.g. scheduling or transport for data synchronisation between network nodes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 15/00 Digital computers in general; Data processing equipment in general
    • G06F 15/16 Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
    • G06F 15/163 Interprocessor communication
    • G06F 15/173 Interprocessor communication using an interconnection network, e.g. matrix, shuffle, pyramid, star, snowflake
    • G06F 15/17306 Intercommunication techniques
    • G06F 15/17331 Distributed shared memory [DSM], e.g. remote direct memory access [RDMA]
    • H04L 67/1097 Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
    • H04L 69/00 Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L 69/30 Definitions, standards or architectural aspects of layered protocol stacks
    • H04L 69/40 Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass for recovering from a failure of a protocol instance or entity, e.g. service redundancy protocols, protocol state redundancy or protocol service redirection

Abstract

The invention relates to the technical field of data security and provides an RDMA (remote direct memory access)-based disaster recovery method, apparatus, device, and medium. The method comprises: registering a first memory of a hot standby cluster with an RDMA driver; establishing an RDMA channel between a main cluster and the hot standby cluster based on the RDMA driver; acquiring the main cluster's data to be transmitted; asynchronously copying the data to be transmitted to the first memory through the RDMA channel; copying the data to be transmitted to a second memory through a network protocol stack; and, in response to a failure of the main cluster, acquiring data from the first memory and the second memory, filtering it to obtain target data, and having the hot standby cluster perform data processing based on the target data in place of the main cluster. By combining the optimized RDMA technique with the network protocol stack, the invention achieves message complementation during active-standby switchover, reduces data loss and latency during message copying, and improves the switchover success rate, thereby realizing a short-distance hot-standby switchover when the main cluster fails so that the hot standby cluster takes over the main cluster's normal work.

Description

Disaster recovery backup method, device, equipment and medium based on RDMA
Technical Field
The invention relates to the technical field of data security, and in particular to an RDMA-based disaster recovery method, apparatus, device, and medium.
Background
In the prior art, distributed systems with high-availability requirements generally adopt real-time intra-cluster synchronization. Although this covers backup for failures of a single server or a subset of servers, it cannot cover the failure of an entire machine room or building. A separate distributed system therefore has to be established at a remote site, so that transactions can be switched to the remote disaster recovery system once the main system fails.
However, the remote disaster recovery switchover may suffer from data loss and a longer transmission path.
Disclosure of Invention
In view of the above, there is a need for an RDMA-based disaster recovery method, apparatus, device, and medium that address the problems of message loss and low transmission efficiency during disaster recovery.
An RDMA-based disaster recovery method, comprising:
in response to a disaster recovery request for a main cluster, establishing a hot standby cluster for the main cluster;
when the hot standby cluster is started, allocating a first memory and a second memory in a server of the hot standby cluster, and notifying a memory address of the first memory and a memory address of the second memory to the main cluster;
registering the first memory to an RDMA driver, and establishing an RDMA channel between the main cluster and the hot standby cluster based on the RDMA driver;
acquiring data in a server memory of the main cluster as data to be transmitted;
asynchronously copying, by the RDMA channel, the data to be transmitted to the first memory based on a memory address of the first memory;
copying the data to be transmitted to the second memory based on the memory address of the second memory through a network protocol stack;
responding to the main cluster fault, and acquiring data from the first memory and the second memory;
filtering the acquired data to obtain target data;
and sending the target data to the hot standby cluster, and performing data processing by the hot standby cluster on the basis of the target data instead of the main cluster.
According to a preferred embodiment of the present invention, the physical distance between the hot standby cluster and the main cluster is less than or equal to a configuration distance;
and the data to be transmitted is stored in a queue in a server memory of the main cluster after the main cluster has completed business processing on it through an application layer.
According to a preferred embodiment of the present invention, the asynchronously copying, through the RDMA channel, the data to be transmitted to the first memory based on the memory address of the first memory includes:
starting an asynchronous thread, and sequentially reading the data to be transmitted from the queue based on the asynchronous thread;
and remotely writing the read data into the first memory.
According to a preferred embodiment of the present invention, the remotely writing the read data into the first memory includes:
when the first memory has available space, writing the read data into the first memory in reading order; or
when the first memory has no available space, overwriting the data in the first memory with the read data in a configuration sequence;
wherein the configuration sequence is the order in which the data in the first memory was written, from earliest to latest.
According to a preferred embodiment of the present invention, the filtering the acquired data to obtain the target data includes:
determining a serial number of each data in the acquired data;
and deleting repeated data from the obtained data according to the serial number of each data to obtain the target data.
According to a preferred embodiment of the present invention, before the hot standby cluster performs data processing based on the target data instead of the main cluster, the method further includes:
detecting whether the target data contains a fault;
when the target data contains no fault, having the hot standby cluster perform data processing based on the target data in place of the main cluster; or
when the target data contains a fault, not having the hot standby cluster perform data processing based on the target data in place of the main cluster.
According to a preferred embodiment of the present invention, after the hot standby cluster performs data processing based on the target data instead of the main cluster, the method further includes:
when the hot standby cluster fails, the hot standby cluster transmits the target data to a remote disaster recovery cluster for processing;
the remote disaster recovery cluster is a cluster which is established in advance and has a physical distance with the main cluster greater than the configuration distance.
An RDMA-based disaster recovery device, the RDMA-based disaster recovery device comprising:
an establishing unit, configured to establish a hot standby cluster for a main cluster in response to a disaster recovery request for the main cluster;
an allocating unit, configured to allocate a first memory and a second memory in a server of the hot standby cluster when the hot standby cluster is started, and to notify the main cluster of the memory address of the first memory and the memory address of the second memory;
the establishing unit is further configured to register the first memory to an RDMA driver, and establish an RDMA channel between the main cluster and the hot-standby cluster based on the RDMA driver;
an obtaining unit, configured to obtain data in a server memory of the master cluster as data to be transmitted;
a copy unit, configured to asynchronously copy, through the RDMA channel, the data to be transmitted to the first memory based on a memory address of the first memory;
the copying unit is further configured to copy, through a network protocol stack, the data to be transmitted to the second memory based on the memory address of the second memory;
the acquiring unit is further configured to acquire data from the first memory and the second memory in response to the failure of the main cluster;
the filtering unit is used for filtering the acquired data to obtain target data;
and the processing unit is used for sending the target data to the hot standby cluster and carrying out data processing by the hot standby cluster instead of the main cluster on the basis of the target data.
A computer device, the computer device comprising:
a memory storing at least one instruction; and
a processor that executes the instructions stored in the memory to implement the RDMA-based disaster recovery method.
A computer-readable storage medium having at least one instruction stored therein, the at least one instruction being executable by a processor in a computer device to implement the RDMA-based disaster recovery method.
According to the technical scheme above, the invention achieves fast message writes based on RDMA while effectively avoiding data loss.
Drawings
FIG. 1 is a flow chart of the preferred embodiment of the RDMA-based disaster recovery method of the present invention.
Fig. 2 is a functional block diagram of a preferred embodiment of the RDMA-based disaster recovery apparatus of the present invention.
FIG. 3 is a block diagram of a computer device implementing the preferred embodiment of the RDMA-based disaster recovery method of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in detail with reference to the accompanying drawings and specific embodiments.
Fig. 1 is a flow chart of a preferred embodiment of the RDMA-based disaster recovery method of the present invention. The order of the steps in the flow chart may be changed, and some steps may be omitted, according to different needs.
The RDMA-based disaster recovery method is applied to one or more computer devices. A computer device is a device capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions; its hardware includes, but is not limited to, a microprocessor, an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA), a Digital Signal Processor (DSP), an embedded device, and the like.
The computer device may be any electronic product capable of human-computer interaction with a user, for example a personal computer, a tablet computer, a smartphone, a Personal Digital Assistant (PDA), a game console, an Internet Protocol Television (IPTV), an intelligent wearable device, and the like.
The computer device may also include a network device and/or a user device. The network device includes, but is not limited to, a single network server, a server group consisting of multiple network servers, or a cloud-computing-based cloud consisting of a large number of hosts or network servers.
The server may be an independent server, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, a Content Delivery Network (CDN), and big data and artificial intelligence platforms.
Artificial Intelligence (AI) is a theory, method, technique, and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use that knowledge to obtain the best results.
The basic artificial intelligence technologies generally include sensors, dedicated AI chips, cloud computing, distributed storage, big data processing, operation/interaction systems, mechatronics, and the like. AI software technologies mainly include computer vision, robotics, biometric recognition, speech processing, natural language processing, and machine learning/deep learning.
The Network in which the computer device is located includes, but is not limited to, the internet, a wide area Network, a metropolitan area Network, a local area Network, a Virtual Private Network (VPN), and the like.
S10, in response to a disaster recovery request for the main cluster, establishing a hot standby cluster for the main cluster.
In this embodiment, the master cluster may be a cluster deployed in a high availability mode, and the devices in the master cluster may adopt a synchronous message replication manner to ensure consistency of messages of multiple copies in the same cluster.
In this embodiment, the disaster recovery request may be triggered by the personnel who set up the cluster, or may be triggered automatically after the main cluster is created; the present invention is not limited in this respect.
In this embodiment, a physical distance between the hot standby cluster and the main cluster is less than or equal to a configuration distance.
Wherein, the configuration distance can be configured by user, such as 20 km.
The hot standby cluster may be located in the same data center as the main cluster, or at a nearby site, so that it is physically close to the main cluster.
S11, when the hot-standby cluster is started, allocating a first memory and a second memory in a server of the hot-standby cluster, and notifying the main cluster of a memory address of the first memory and a memory address of the second memory.
The first memory and the second memory may be configured to store data in different storage areas of a server of the hot-standby cluster.
Further, the memory address of the first memory and the memory address of the second memory are notified to the main cluster, so that the main cluster subsequently issues data to the hot-standby cluster according to the memory address of the first memory and the memory address of the second memory.
S12, registering the first Memory to an RDMA (Remote Direct Memory Access) driver, and establishing an RDMA channel between the main cluster and the hot standby cluster based on the RDMA driver.
In this embodiment, once the first memory is registered with the RDMA driver, the server memory of the main cluster can communicate directly with the server memory of the hot standby cluster (i.e., the first memory) over the RDMA channel. That is, data in the main cluster's server memory is written directly into the hot standby cluster's server memory, and the whole write path requires almost no CPU intervention, so write efficiency is high.
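The registration-and-channel flow of S11 and S12 can be sketched as a small in-process simulation. All class names, the rkey value, and the "channel" object below are illustrative stand-ins, not the patented implementation; a real deployment would register pinned memory with an RDMA verbs library and exchange the address/rkey with the main cluster out of band.

```python
class HotStandbyServer:
    """Hot-standby node that exposes two memory regions (S11)."""
    def __init__(self, size):
        self.first_memory = bytearray(size)   # written via the RDMA channel
        self.second_memory = bytearray(size)  # written via the protocol stack

    def register_rdma(self):
        # Registering the first memory with the RDMA driver (S12) pins the
        # region and yields an address/remote key the master cluster can use
        # for direct one-sided writes. Values here are illustrative.
        return {"addr": id(self.first_memory), "rkey": 0x1234}


class RdmaChannel:
    """Models a one-sided RDMA WRITE: no CPU involvement on the target."""
    def __init__(self, target_memory):
        self._mem = target_memory

    def write(self, offset, payload):
        # The master writes directly into the standby's registered memory.
        self._mem[offset:offset + len(payload)] = payload


standby = HotStandbyServer(size=64)
mr = standby.register_rdma()            # address/rkey notified to the master
channel = RdmaChannel(standby.first_memory)
channel.write(0, b"order-0001")         # lands in first memory, CPU-free on target
```

Note that the second memory stays untouched by this path; it is only filled by the protocol-stack copy of S15.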
S13, acquiring the data in the server memory of the main cluster as the data to be transmitted.
In this embodiment, the data to be transmitted is subjected to service processing by the master cluster through an application layer, and then stored in a queue of a server memory of the master cluster.
S14, asynchronously copying the data to be transmitted to the first memory based on the memory address of the first memory through the RDMA channel.
In this embodiment, the asynchronous replication process described above pertains to remote replication.
Specifically, the asynchronously copying, by the RDMA channel, the data to be transmitted to the first memory based on the memory address of the first memory includes:
starting an asynchronous thread, and sequentially reading the data to be transmitted from the queue based on the asynchronous thread;
and remotely writing the read data into the first memory.
Through this embodiment, the transmission process is further optimized on top of conventional RDMA communication, reducing transmission latency and dependence on the operating system (the memories communicate directly, with almost no CPU intervention). Because the transfer bypasses the operating system and its latency is extremely low, the message gap between the main cluster and the hot standby cluster is significantly narrowed, so fewer messages are lost when a failure occurs, and higher data consistency is achieved without reducing the availability of the distributed system.
Specifically, the remotely writing the read data into the first memory includes:
when the first memory has available space, writing the read data into the first memory in reading order; or
when the first memory has no available space, overwriting the data in the first memory with the read data in a configuration sequence;
wherein the configuration sequence is the order in which the data in the first memory was written, from earliest to latest.
For example, if the first memory can hold 100 pieces of data and 90 pieces are already stored, positions 91 to 100 are available and writing continues from position 91 in reading order. Once position 100 is filled, the next piece of data read overwrites position 1, and so on.
In the above embodiment, since real-time replication of data can be ensured by using an optimized RDMA communication technology, and replication efficiency is higher, updating the data in the first memory in an overlay manner can further ensure that the data in the first memory is up-to-date.
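The overwrite behavior described above is a classic ring buffer: once full, new data overwrites the earliest-written entries first. A minimal sketch follows; the capacity and item types are illustrative, not taken from the patent.

```python
class FirstMemoryRing:
    """Fixed-capacity buffer written in arrival order; once full, new data
    overwrites the oldest entries first (the configuration sequence above)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.slots = [None] * capacity
        self.next = 0  # next slot to write; wraps around when full

    def write(self, item):
        self.slots[self.next] = item
        self.next = (self.next + 1) % self.capacity


# Mirror of the 100-slot example: 90 entries stored, then 12 more arrive.
ring = FirstMemoryRing(capacity=100)
for seq in range(1, 91):        # fills positions 1-90; 91-100 remain free
    ring.write(seq)
for seq in range(91, 103):      # fills 91-100, then wraps to overwrite position 1
    ring.write(seq)
```

After the wrap, positions 1 and 2 hold the newest messages (101 and 102) while position 3 onward still holds the older, not-yet-overwritten data, which is exactly why the first memory always contains the latest messages.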
S15, copying the data to be transmitted to the second memory based on the memory address of the second memory through a network protocol stack.
When a message is instead copied to the hot standby cluster through the network protocol stack, the copying process is strongly affected by network fluctuation, the hot standby cluster's message rate, and operating-system scheduling, so a large message gap can arise.
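This second path can be illustrated with an ordinary socket copy; the `socketpair` below is an in-process stand-in for the TCP link between the clusters, and the function name is an assumption. Unlike the RDMA path, every byte here crosses the kernel protocol stack on both ends, which is where scheduling and network jitter introduce the larger replication lag noted above.

```python
import socket


def copy_via_protocol_stack(payload: bytes, second_memory: bytearray) -> None:
    """Sketch of S15: push the same message through the normal network
    stack into the standby's second memory (names are illustrative)."""
    a, b = socket.socketpair()           # stand-in for the inter-cluster TCP link
    try:
        a.sendall(payload)               # master side: ordinary send() syscall
        received = b.recv(len(payload))  # standby side: kernel delivers to user space
        second_memory[:len(received)] = received
    finally:
        a.close()
        b.close()


second_memory = bytearray(32)
copy_via_protocol_stack(b"order-0001", second_memory)
```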
S16, responding to the failure of the main cluster, and acquiring data from the first memory and the second memory.
In this embodiment, when the main cluster fails, the hot standby cluster needs to replace the main cluster to continue working in order to ensure normal operation of the system.
Further, data are acquired from the first memory and the second memory, so that after the switching between the main cluster and the hot standby cluster is completed, the hot standby cluster can replace the main cluster to normally operate based on the acquired data.
The data in the first memory is copied using optimized RDMA, which reduces the message gap between the main cluster and the hot standby cluster caused by asynchronous copying during switchover. Complementing the data stored in the first memory with the data stored in the second memory further improves the stability of system data and the success rate of active-standby switchover, and prevents the hot standby cluster from being unable to take over the main cluster's normal work because of data lost during message copying.
S17, filtering the acquired data to obtain target data.
It can be understood that, because the optimized RDMA technology and the network protocol stack are used for message replication at the same time, partially repeated messages may occur in the first memory and the second memory, and the acquired data needs to be filtered to avoid message redundancy.
Specifically, the filtering the acquired data to obtain the target data includes:
determining a serial number of each data in the acquired data;
and deleting repeated data from the obtained data according to the serial number of each data to obtain the target data.
Wherein a sequence number for each data is generated by the master cluster.
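The sequence-number filtering of S17 can be sketched as follows. The tuple representation and the function name are illustrative; the only property taken from the text is that each message carries a master-assigned sequence number used to drop duplicates across the two memories.

```python
def filter_target_data(first_mem_msgs, second_mem_msgs):
    """Merge both copies and delete repeated data by sequence number.

    Messages are (sequence_number, payload) pairs; the master cluster
    assigns each sequence number, so equal numbers mean the same message
    arrived over both the RDMA path and the protocol-stack path.
    """
    seen = set()
    target = []
    for seq, payload in sorted(first_mem_msgs + second_mem_msgs):
        if seq not in seen:
            seen.add(seq)
            target.append((seq, payload))
    return target


# The RDMA copy holds the newest messages; the protocol-stack copy lags
# behind but overlaps it, so message 9000 appears in both.
first = [(9000, "m9000"), (9001, "m9001")]
second = [(8999, "m8999"), (9000, "m9000")]
target = filter_target_data(first, second)
```

The complementary copies together cover 8999 through 9001 even though neither memory alone does, which is the "message complementation" the scheme relies on.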
S18, sending the target data to the hot standby cluster, and performing data processing by the hot standby cluster instead of the main cluster based on the target data.
In this embodiment, before the hot standby cluster performs data processing based on the target data instead of the main cluster, the method further includes:
detecting whether the target data contains a fault;
when the target data contains no fault, the hot standby cluster performs data processing based on the target data in place of the main cluster; or
when the target data contains a fault, the hot standby cluster does not perform data processing based on the target data in place of the main cluster.
For example, suppose there are 10,000 pieces of data in total: the second memory stores pieces 1 to 8000, the first memory stores pieces 9000 to 10000 in real time, and pieces 8001 to 8999 are faulty (missing). In this case the hot standby cluster could not take over the main cluster's work even if the switchover were performed, so no active-standby switchover is carried out; that is, the hot standby cluster does not perform data processing based on the target data in place of the main cluster. Otherwise, the switchover can proceed normally.
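One hedged reading of this fault check is a contiguity test over the recovered sequence numbers, mirroring the 8001-8999 gap in the example. The function name and the contiguity rule below are an illustrative interpretation, not the patent's stated algorithm.

```python
def can_switch(target_seqs, expected_last):
    """Allow the hot standby cluster to take over only if the recovered
    sequence numbers form one contiguous run ending at the last message
    the master acknowledged (illustrative rule)."""
    seqs = sorted(set(target_seqs))
    if not seqs or seqs[-1] != expected_last:
        return False
    # Any hole in the run means messages were lost in BOTH memories.
    return all(b - a == 1 for a, b in zip(seqs, seqs[1:]))


healthy = list(range(1, 10001))                            # full 1..10000
gapped = list(range(1, 8001)) + list(range(9000, 10001))   # 8001-8999 missing
```

With the full run the switchover proceeds; with the 8001-8999 hole it is refused, exactly as in the example above.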
In this embodiment, after the hot standby cluster performs data processing based on the target data instead of the main cluster, the method further includes:
when the hot standby cluster fails, the hot standby cluster transmits the target data to a remote disaster recovery cluster for processing;
the remote disaster recovery cluster is a cluster which is established in advance and has a physical distance with the main cluster greater than the configuration distance.
For example: the disaster recovery cluster can be deployed in a different city from the main cluster.
In the above embodiment, when the main cluster fails, the system first switches to the hot standby cluster, which is a short distance from the main cluster; this shortens the transmission path and reduces the probability of data loss. When the hot standby cluster itself fails, the system switches to the remote disaster recovery cluster, which is far from the main cluster. This realizes a three-level disaster recovery system of main cluster, hot standby cluster, and remote disaster recovery cluster, so that services remain continuously available even when a disaster causes a data-center-level failure, further improving system stability.
According to the technical scheme above, the invention combines the optimized RDMA technique with the network protocol stack to achieve message complementation during active-standby switchover, reduces data loss and latency during message copying, and improves the switchover success rate, thereby realizing a short-distance hot-standby switchover when the main cluster fails so that the hot standby cluster takes over the main cluster's normal work.
Fig. 2 is a functional block diagram of a preferred embodiment of the RDMA-based disaster recovery apparatus according to the present invention. The RDMA-based disaster recovery apparatus 11 includes an establishing unit 110, an allocating unit 111, an obtaining unit 112, a copying unit 113, a filtering unit 114, and a processing unit 115. A module/unit referred to herein is a series of computer program segments that are stored in a memory, can be executed by a processor, and perform a fixed function. The functions of the modules/units are described in detail in the following embodiments.
In response to the disaster recovery request to the master cluster, the establishing unit 110 establishes a hot-standby cluster for the master cluster.
In this embodiment, the master cluster may be a cluster deployed in a high availability mode, and the devices in the master cluster may adopt a synchronous message replication manner to ensure consistency of messages of multiple copies in the same cluster.
In this embodiment, the disaster recovery request may be triggered by the personnel who set up the cluster, or may be triggered automatically after the main cluster is created; the present invention is not limited in this respect.
In this embodiment, a physical distance between the hot standby cluster and the main cluster is less than or equal to a configuration distance.
Wherein, the configuration distance can be configured by user, such as 20 km.
The hot standby cluster may be located in the same data center as the main cluster, or at a nearby site, so that it is physically close to the main cluster.
When the hot standby cluster is started, the allocating unit 111 allocates a first memory and a second memory in the server of the hot standby cluster, and notifies the main cluster of the memory address of the first memory and the memory address of the second memory.
The first memory and the second memory may be configured to store data in different storage areas of a server of the hot-standby cluster.
Further, the memory address of the first memory and the memory address of the second memory are notified to the main cluster, so that the main cluster subsequently issues data to the hot-standby cluster according to the memory address of the first memory and the memory address of the second memory.
The establishing unit 110 registers the first Memory to an RDMA (Remote Direct Memory Access) driver, and establishes an RDMA channel between the main cluster and the hot-standby cluster based on the RDMA driver.
In this embodiment, once the first memory is registered with the RDMA driver, the server memory of the main cluster can communicate directly with the server memory of the hot standby cluster (i.e., the first memory) over the RDMA channel. That is, data in the main cluster's server memory is written directly into the hot standby cluster's server memory, and the whole write path requires almost no CPU intervention, so write efficiency is high.
The obtaining unit 112 obtains data in the server memory of the master cluster as data to be transmitted.
In this embodiment, the data to be transmitted is subjected to service processing by the master cluster through an application layer, and then stored in a queue of a server memory of the master cluster.
The copy unit 113 asynchronously copies the data to be transmitted to the first memory based on the memory address of the first memory through the RDMA channel.
In this embodiment, the asynchronous replication process described above pertains to remote replication.
Specifically, the asynchronously copying, by the copy unit 113, the data to be transmitted to the first memory based on the memory address of the first memory through the RDMA channel includes:
starting an asynchronous thread, and sequentially reading the data to be transmitted from the queue based on the asynchronous thread;
and remotely writing the read data into the first memory.
Through this embodiment, the transmission process is further optimized on top of conventional RDMA communication, reducing transmission latency and dependence on the operating system (the memories communicate directly, with almost no CPU intervention). Because the transfer bypasses the operating system and its latency is extremely low, the message gap between the main cluster and the hot standby cluster is significantly narrowed, so fewer messages are lost when a failure occurs, and higher data consistency is achieved without reducing the availability of the distributed system.
Specifically, the remote writing of the read data into the first memory by the copy unit 113 includes:
when the first memory has available space, writing the read data into the first memory according to a reading sequence; or
When the first memory has no available space, covering the read data with the data in the first memory according to a configuration sequence;
the configuration sequence is a first-to-last writing sequence of data in the first memory.
For example: if the storage space of the first memory can hold 100 pieces of data, then when 90 pieces are already stored, positions 91 to 100 are available and writing continues from position 91 in reading order. Once position 100 is filled, the next piece of data read overwrites position 1.
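The overwrite behaviour described above is effectively a ring buffer: free slots are filled in reading order, and once the memory is full the earliest-written slot is overwritten next. A sketch under that reading follows; the class name, capacity, and slot layout are illustrative assumptions, not details from the patent.

```python
class FirstMemory:
    """Fixed-capacity buffer that overwrites the earliest-written slot when full."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.slots = [None] * capacity
        self.next_pos = 0          # next slot to write; wraps around

    def write(self, item):
        # While space is available, data lands in reading order;
        # once full, next_pos wraps and the first-to-last writing
        # sequence (the "configuration sequence") is overwritten.
        self.slots[self.next_pos] = item
        self.next_pos = (self.next_pos + 1) % self.capacity

mem = FirstMemory(100)
for i in range(1, 91):            # 90 pieces already stored
    mem.write(i)
for i in range(91, 101):          # positions 91..100 fill up
    mem.write(i)
mem.write(101)                    # memory full: position 1 is overwritten
print(mem.slots[0], mem.slots[99])   # 101 100
```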
In the above embodiment, since the optimized RDMA communication technology ensures real-time replication of data with high replication efficiency, updating the data in the first memory by overwriting further ensures that the data in the first memory is up to date.
The copying unit 113 copies the data to be transmitted to the second memory based on the memory address of the second memory through a network protocol stack.
When the message is copied to the hot standby cluster through the network protocol stack, the copy process is strongly affected by network fluctuation, by the message rate of the hot standby cluster, and by operating-system scheduling, so a large message gap can arise.
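By contrast with the RDMA path, the protocol-stack path can be pictured as an ordinary socket transfer that traverses the kernel network stack on both sides. The loopback sketch below illustrates that path only; the length-prefixed JSON framing, the ephemeral port, and the helper names are assumptions for the example, not part of the patent.

```python
import json
import socket
import threading

def recv_exact(conn, n):
    # TCP is a byte stream, so reads may arrive in pieces.
    buf = b""
    while len(buf) < n:
        chunk = conn.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed early")
        buf += chunk
    return buf

def serve_second_memory(server_sock, second_memory, n_expected):
    # Hot-standby side: receive length-prefixed JSON messages through
    # the kernel network stack and append them to the second memory.
    conn, _ = server_sock.accept()
    with conn:
        for _ in range(n_expected):
            size = int.from_bytes(recv_exact(conn, 4), "big")
            second_memory.append(json.loads(recv_exact(conn, size)))

second_memory = []
server = socket.socket()
server.bind(("127.0.0.1", 0))        # ephemeral port for the sketch
server.listen(1)
t = threading.Thread(target=serve_second_memory,
                     args=(server, second_memory, 3))
t.start()

# Main-cluster side: copy each message via the TCP protocol stack.
client = socket.create_connection(server.getsockname())
for seq in range(3):
    payload = json.dumps({"seq": seq}).encode()
    client.sendall(len(payload).to_bytes(4, "big") + payload)
client.close()
t.join()
server.close()
print([m["seq"] for m in second_memory])   # [0, 1, 2]
```

Every hop here (send buffer, kernel scheduling, receive buffer) is a point where delay can accumulate, which is why this path lags the RDMA path under load.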
In response to the failure of the primary cluster, the obtaining unit 112 obtains data from the first memory and the second memory.
In this embodiment, when the main cluster fails, the hot standby cluster needs to replace the main cluster to continue working in order to ensure normal operation of the system.
Further, data are acquired from the first memory and the second memory, so that after the switching between the main cluster and the hot standby cluster is completed, the hot standby cluster can replace the main cluster to normally operate based on the acquired data.
Because the data in the first memory is replicated over the optimized RDMA path, the message gap between the main cluster and the hot standby cluster caused by asynchronous replication during switching is reduced. Complementing the data stored in the first memory with the data stored in the second memory further improves the stability of the system data, raises the success rate of main-standby switching, and avoids the situation in which the hot standby cluster cannot take over from the main cluster because data was lost during message replication.
The filtering unit 114 filters the acquired data to obtain target data.
It can be understood that, because the optimized RDMA technology and the network protocol stack are used simultaneously for message replication, partially duplicated messages may exist in the first memory and the second memory, so the acquired data needs to be filtered to avoid message redundancy.
Specifically, the filtering unit 114 filters the acquired data to obtain target data, including:
determining a serial number of each data in the acquired data;
and deleting repeated data from the obtained data according to the serial number of each data to obtain the target data.
Wherein a sequence number for each data is generated by the master cluster.
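Since the same message may arrive through both the RDMA path and the protocol-stack path, the filtering step amounts to deduplication keyed on the master-generated sequence number. A minimal sketch of that logic follows; the field names and the "first copy wins" tie-break are assumptions, as the patent does not specify them.

```python
def filter_target_data(first_memory, second_memory):
    # Merge both copies and keep exactly one message per sequence
    # number, ordered by the sequence numbers the master assigned.
    seen = {}
    for msg in list(second_memory) + list(first_memory):
        seen.setdefault(msg["seq"], msg)   # keep first occurrence
    return [seen[s] for s in sorted(seen)]

# Protocol-stack copy holds 1..4; RDMA copy holds 4..5 (4 is duplicated).
first = [{"seq": 4, "src": "rdma"}, {"seq": 5, "src": "rdma"}]
second = [{"seq": s, "src": "tcp"} for s in range(1, 5)]
target = filter_target_data(first, second)
print([m["seq"] for m in target])   # [1, 2, 3, 4, 5]
```

The two memories complement each other: each sequence number survives exactly once in the target data, regardless of which path delivered it.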
The processing unit 115 sends the target data to the hot standby cluster, and the hot standby cluster replaces the main cluster to perform data processing based on the target data.
In this embodiment, before the hot standby cluster replaces the main cluster for data processing based on the target data, whether a fault exists in the target data is detected:
when the target data has no fault, the hot standby cluster replaces the main cluster to perform data processing based on the target data; or
when the target data has a fault, the hot standby cluster does not replace the main cluster to perform data processing based on the target data.
For example: out of 10000 pieces of data in total, the second memory stores pieces 1 to 8000, the first memory stores pieces 9000 to 10000 in real time, and pieces 8001 to 8999 are faulty. In this case, even if a main-standby switch were performed, the hot standby cluster could not take over the work of the main cluster, so the switch is not performed; that is, the hot standby cluster does not replace the main cluster to perform data processing based on the target data. Otherwise, a normal switch is possible.
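This fault check in the example can be read as a continuity check over the merged sequence numbers: the switch proceeds only if no gap remains after message complementation. The sketch below captures that reading; the exact criterion and the function name are assumptions, since the patent does not fix them.

```python
def can_switch(target_seqs, total):
    # The hot standby cluster may take over only if the merged data
    # covers every sequence number from 1 to `total` with no gap.
    return set(target_seqs) == set(range(1, total + 1))

# Second memory holds 1..8000, first memory holds 9000..10000:
merged = list(range(1, 8001)) + list(range(9000, 10001))
print(can_switch(merged, 10000))   # False: 8001..8999 are missing

# If the complementary copies cover everything, switching proceeds:
print(can_switch(range(1, 10001), 10000))   # True
```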
In this embodiment, after the hot-standby cluster performs data processing based on the target data instead of the main cluster, when the hot-standby cluster fails, the hot-standby cluster transmits the target data to the remote disaster recovery cluster for processing;
the remote disaster recovery cluster is a pre-established cluster whose physical distance from the main cluster is greater than the configuration distance.
For example: the disaster recovery cluster can be deployed in a different city from the main cluster.
In the above embodiment, when the main cluster fails, the hot standby cluster, which is relatively close to the main cluster, is used first for switching, which shortens the transmission path and reduces the probability of data loss. When the hot standby cluster also fails, the system switches to the remote disaster recovery cluster, which is farther from the main cluster, realizing a three-level disaster recovery architecture of main cluster, hot standby cluster, and remote disaster recovery cluster. Thus, even when a disaster causes a data-center-level failure, continuous availability of services can be ensured, further improving the stability of the system.
According to the technical scheme, combining the optimized RDMA technology with the network protocol stack enables message complementation during main-standby switching, reduces data loss and latency in the message replication process, and improves the success rate of main-standby switching. Furthermore, when the main cluster fails, a short-distance hot-standby switch is performed so that the hot standby cluster can take over normal operation from the main cluster.
Fig. 3 is a schematic structural diagram of a computer device according to a preferred embodiment of the present invention for implementing an RDMA-based disaster recovery method.
The computer device 1 may comprise a memory 12, a processor 13 and a bus, and may further comprise a computer program, e.g. an RDMA-based disaster recovery program, stored in the memory 12 and executable on the processor 13.
It will be understood by those skilled in the art that the schematic diagram is merely an example of the computer device 1 and does not constitute a limitation on it; the computer device 1 may have a bus-type or star-shaped structure, may include more or fewer hardware or software components than those shown, or a different arrangement of components; for example, the computer device 1 may further include an input/output device, a network access device, etc.
It should be noted that the computer device 1 is only an example, and other electronic products that are currently available or may come into existence in the future, such as electronic products that can be adapted to the present invention, should also be included in the scope of the present invention, and are included herein by reference.
The memory 12 includes at least one type of readable storage medium, which includes flash memory, removable hard disks, multimedia cards, card-type memory (e.g., SD or DX memory, etc.), magnetic memory, magnetic disks, optical disks, etc. The memory 12 may in some embodiments be an internal storage unit of the computer device 1, e.g. a removable hard disk of the computer device 1. The memory 12 may also be an external storage device of the computer device 1 in other embodiments, such as a plug-in removable hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), etc. provided on the computer device 1. Further, the memory 12 may also include both an internal storage unit and an external storage device of the computer device 1. The memory 12 may be used not only for storing application software installed in the computer device 1 and various types of data, such as codes of RDMA-based disaster recovery programs, etc., but also for temporarily storing data that has been output or is to be output.
The processor 13 may be composed of an integrated circuit in some embodiments, for example, a single packaged integrated circuit, or may be composed of a plurality of integrated circuits packaged with the same or different functions, including one or more Central Processing Units (CPUs), microprocessors, digital Processing chips, graphics processors, and combinations of various control chips. The processor 13 is a Control Unit (Control Unit) of the computer device 1, connects various components of the whole computer device 1 by using various interfaces and lines, and executes various functions and processes data of the computer device 1 by running or executing programs or modules (for example, executing RDMA-based disaster recovery programs and the like) stored in the memory 12 and calling data stored in the memory 12.
The processor 13 executes the operating system of the computer device 1 and various installed application programs. The processor 13 executes the application program to implement the steps in the various RDMA-based disaster recovery method embodiments described above, such as the steps shown in fig. 1.
Illustratively, the computer program may be partitioned into one or more modules/units, which are stored in the memory 12 and executed by the processor 13 to implement the present invention. The one or more modules/units may be a series of computer readable instruction segments capable of performing certain functions, which are used to describe the execution of the computer program in the computer device 1. For example, the computer program may be divided into a creation unit 110, an assignment unit 111, an acquisition unit 112, a replication unit 113, a filtering unit 114, a processing unit 115.
The integrated unit implemented in the form of a software functional module may be stored in a computer-readable storage medium. The software functional module is stored in a storage medium and includes several instructions to enable a computer device (which may be a personal computer, a computer device, or a network device) or a processor (processor) to execute parts of the RDMA-based disaster recovery method according to the embodiments of the present invention.
The integrated modules/units of the computer device 1 may be stored in a computer-readable storage medium if they are implemented in the form of software functional units and sold or used as separate products. Based on such understanding, all or part of the flow of the method according to the embodiments of the present invention may be implemented by a computer program, which may be stored in a computer-readable storage medium, and when the computer program is executed by a processor, the steps of the method embodiments may be implemented.
Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, recording medium, U-disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-Only Memory (ROM), random-access Memory, or the like.
Further, the computer-readable storage medium may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function, and the like; the storage data area may store data created according to the use of the blockchain node, and the like.
The block chain is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, a consensus mechanism, an encryption algorithm and the like. A block chain (Blockchain), which is essentially a decentralized database, is a series of data blocks associated by using a cryptographic method, and each data block contains information of a batch of network transactions, so as to verify the validity (anti-counterfeiting) of the information and generate a next block. The blockchain may include a blockchain underlying platform, a platform product service layer, an application service layer, and the like.
The bus may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one line is shown in FIG. 3, but this does not mean only one bus or one type of bus. The bus is arranged to enable connection communication between the memory 12 and at least one processor 13 or the like.
Although not shown, the computer device 1 may further include a power supply (such as a battery) for supplying power to each component, and preferably, the power supply may be logically connected to the at least one processor 13 through a power management device, so that functions of charge management, discharge management, power consumption management and the like are realized through the power management device. The power supply may also include any component of one or more dc or ac power sources, recharging devices, power failure detection circuitry, power converters or inverters, power status indicators, and the like. The computer device 1 may further include various sensors, a bluetooth module, a Wi-Fi module, and the like, which are not described herein again.
Further, the computer device 1 may further include a network interface, and optionally, the network interface may include a wired interface and/or a wireless interface (such as a WI-FI interface, a bluetooth interface, etc.), which are generally used for establishing a communication connection between the computer device 1 and other computer devices.
Optionally, the computer device 1 may further comprise a user interface, which may be a Display (Display), an input unit, such as a Keyboard (Keyboard), and optionally a standard wired interface, a wireless interface. Alternatively, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch device, or the like. The display, which may also be referred to as a display screen or display unit, is suitable, among other things, for displaying information processed in the computer device 1 and for displaying a visualized user interface.
It is to be understood that the embodiments described are illustrative only and are not to be construed as limiting the scope of the claims.
Fig. 3 shows only the computer device 1 with the components 12-13, and it will be understood by a person skilled in the art that the structure shown in fig. 3 does not constitute a limitation of the computer device 1 and may comprise fewer or more components than shown, or a combination of certain components, or a different arrangement of components.
In connection with fig. 1, the memory 12 in the computer device 1 stores a plurality of instructions to implement an RDMA-based disaster recovery method, which the processor 13 can execute to implement:
responding to a disaster recovery request for a main cluster, and establishing a hot standby cluster for the main cluster;
when the hot standby cluster is started, allocating a first memory and a second memory in a server of the hot standby cluster, and notifying a memory address of the first memory and a memory address of the second memory to the main cluster;
registering the first memory to an RDMA driver, and establishing an RDMA channel between the main cluster and the hot standby cluster based on the RDMA driver;
acquiring data in a server memory of the main cluster as data to be transmitted;
asynchronously copying, by the RDMA channel, the data to be transmitted to the first memory based on a memory address of the first memory;
copying the data to be transmitted to the second memory based on the memory address of the second memory through a network protocol stack;
responding to the fault of the main cluster, and acquiring data from the first memory and the second memory;
filtering the acquired data to obtain target data;
and sending the target data to the hot standby cluster, and performing data processing by the hot standby cluster on the basis of replacing the main cluster with the target data.
Specifically, the processor 13 may refer to the description of the relevant steps in the embodiment corresponding to fig. 1 for a specific implementation method of the instruction, which is not described herein again.
It should be noted that all data involved in the present application are legally acquired.
In the several embodiments provided in the present invention, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is only one logical functional division, and other divisions may be realized in practice.
The invention is operational with numerous general purpose or special purpose computing system environments or configurations. For example: personal computers, server computers, hand-held or portable devices, tablet-type devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like. The invention may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The invention may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, functional modules in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional module.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof.
The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference signs in the claims shall not be construed as limiting the claim concerned.
Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the present invention may also be implemented by one unit or means through software or hardware. The terms first, second, etc. are used to denote names, but not any particular order.
Finally, it should be noted that the above embodiments are only for illustrating the technical solutions of the present invention and not for limiting, and although the present invention is described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions of the present invention.

Claims (10)

1. An RDMA-based disaster recovery method, characterized in that the RDMA-based disaster recovery method comprises:
responding to a disaster recovery request for a main cluster, and establishing a hot standby cluster for the main cluster;
when the hot standby cluster is started, allocating a first memory and a second memory in a server of the hot standby cluster, and notifying a memory address of the first memory and a memory address of the second memory to the main cluster;
registering the first memory to an RDMA driver, and establishing an RDMA channel between the main cluster and the hot standby cluster based on the RDMA driver;
acquiring data in a server memory of the main cluster as data to be transmitted;
asynchronously copying, by the RDMA channel, the data to be transmitted to the first memory based on a memory address of the first memory;
copying the data to be transmitted to the second memory based on the memory address of the second memory through a network protocol stack;
responding to the main cluster fault, and acquiring data from the first memory and the second memory;
filtering the acquired data to obtain target data;
and sending the target data to the hot standby cluster, and performing data processing by the hot standby cluster on the basis of the target data instead of the main cluster.
2. The RDMA-based disaster recovery method of claim 1, wherein:
the physical distance between the hot standby cluster and the main cluster is smaller than or equal to the configuration distance;
and after the data to be transmitted is subjected to service processing by the main cluster through an application layer, the data to be transmitted is stored in a queue of a server memory of the main cluster.
3. The RDMA-based disaster recovery method of claim 2, wherein the asynchronously copying the data to be transmitted to the first memory based on the memory address of the first memory over the RDMA channel comprises:
starting an asynchronous thread, and sequentially reading the data to be transmitted from the queue based on the asynchronous thread;
and remotely writing the read data into the first memory.
4. The RDMA-based disaster recovery method of claim 3 wherein said remotely writing the read data to the first memory comprises:
when the first memory has available space, writing the read data into the first memory according to a reading sequence; or
When the first memory has no available space, covering the read data with the data in the first memory according to a configuration sequence;
the configuration sequence is a first-to-last writing sequence of data in the first memory.
5. The RDMA-based disaster recovery method of claim 1 wherein the filtering the acquired data to obtain target data comprises:
determining a serial number of each data in the acquired data;
and deleting repeated data from the obtained data according to the serial number of each data to obtain the target data.
6. The RDMA-based disaster recovery method of claim 1, wherein prior to the data processing by the hot standby cluster based on the target data in place of the main cluster, the method further comprises:
detecting whether a fault exists in the target data;
when the target data do not have faults, the hot standby cluster replaces the main cluster to perform data processing on the basis of the target data; or alternatively
When the target data has faults, data processing is not performed by the hot standby cluster on the basis of the target data instead of the main cluster.
7. The RDMA-based disaster recovery method of claim 2, wherein after the data processing by the hot standby cluster based on the target data in place of the main cluster, the method further comprises:
when the hot standby cluster fails, the hot standby cluster transmits the target data to a remote disaster recovery cluster for processing;
the remote disaster recovery cluster is a cluster which is established in advance and has a physical distance with the main cluster larger than the configuration distance.
8. An RDMA-based disaster recovery device, the RDMA-based disaster recovery device comprising:
an establishing unit, configured to respond to a disaster recovery request for a main cluster and establish a hot standby cluster for the main cluster;
the distribution unit is used for distributing a first memory and a second memory in a server of the hot standby cluster when the hot standby cluster is started, and notifying a memory address of the first memory and a memory address of the second memory to the main cluster;
the establishing unit is further configured to register the first memory to an RDMA driver, and establish an RDMA channel between the main cluster and the hot-standby cluster based on the RDMA driver;
an obtaining unit, configured to obtain data in a server memory of the master cluster as data to be transmitted;
a copy unit, configured to asynchronously copy, through the RDMA channel, the data to be transmitted to the first memory based on a memory address of the first memory;
the copying unit is further configured to copy, through a network protocol stack, the data to be transmitted to the second memory based on the memory address of the second memory;
the acquiring unit is further configured to acquire data from the first memory and the second memory in response to the failure of the main cluster;
the filtering unit is used for filtering the acquired data to obtain target data;
and the processing unit is used for sending the target data to the hot standby cluster and carrying out data processing by the hot standby cluster instead of the main cluster on the basis of the target data.
9. A computer device, characterized in that the computer device comprises:
a memory storing at least one instruction; and
a processor executing instructions stored in the memory to implement the RDMA-based disaster recovery method of any of claims 1 to 7.
10. A computer-readable storage medium characterized by: the computer-readable storage medium having stored therein at least one instruction that is executable by a processor in a computer device to implement the RDMA-based disaster-backup method of any of claims 1-7.
CN202211049105.7A 2022-08-30 2022-08-30 Disaster recovery method, device, equipment and medium based on RDMA Active CN115118738B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211049105.7A CN115118738B (en) 2022-08-30 2022-08-30 Disaster recovery method, device, equipment and medium based on RDMA

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211049105.7A CN115118738B (en) 2022-08-30 2022-08-30 Disaster recovery method, device, equipment and medium based on RDMA

Publications (2)

Publication Number Publication Date
CN115118738A true CN115118738A (en) 2022-09-27
CN115118738B CN115118738B (en) 2022-11-22

Family

ID=83336252

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211049105.7A Active CN115118738B (en) 2022-08-30 2022-08-30 Disaster recovery method, device, equipment and medium based on RDMA

Country Status (1)

Country Link
CN (1) CN115118738B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116225789A (en) * 2023-05-09 2023-06-06 深圳华锐分布式技术股份有限公司 Transaction system backup capability detection method, device, equipment and medium
CN116760835A (en) * 2023-08-15 2023-09-15 深圳华锐分布式技术股份有限公司 Distributed storage method, device and medium

Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120233122A1 (en) * 2011-03-10 2012-09-13 Amadeus S.A.S System and method for session synchronization with independent external systems
US20150254100A1 (en) * 2014-03-10 2015-09-10 Riverscale Ltd Software Enabled Network Storage Accelerator (SENSA) - Storage Virtualization Offload Engine (SVOE)
US20160139849A1 (en) * 2014-11-13 2016-05-19 Violin Memory, Inc. Non-volatile buffering for deduplicaton
US20170034270A1 (en) * 2015-07-31 2017-02-02 Netapp, Inc. Methods and systems for efficiently moving data between nodes in a cluster
CN106874150A (en) * 2017-02-28 2017-06-20 郑州云海信息技术有限公司 A kind of virtual machine High Availabitity disaster recovery method and its system
CN106933701A (en) * 2015-12-30 2017-07-07 伊姆西公司 For the method and apparatus of data backup
US20170235702A1 (en) * 2016-02-17 2017-08-17 International Business Machines Corporation Remote direct memory access-based method of transferring arrays of objects including garbage data
CN107454171A (en) * 2017-08-10 2017-12-08 深圳前海微众银行股份有限公司 Message service system and its implementation
US20190227880A1 (en) * 2018-01-24 2019-07-25 International Business Machines Corporation Automated and distributed backup of sensor data
CN110113420A (en) * 2019-05-08 2019-08-09 重庆大学 Distributed Message Queue management system based on NVM
US10649862B1 (en) * 2018-12-18 2020-05-12 International Business Machines Corporation Reducing failback performance duration in data replication systems
US20200167251A1 (en) * 2018-11-27 2020-05-28 International Business Machines Corporation Storage system management
CN113535480A (en) * 2021-07-16 2021-10-22 深圳华锐金融技术股份有限公司 Data disaster recovery system and method
CN114090349A (en) * 2021-11-18 2022-02-25 广州新科佳都科技有限公司 Cross-regional service disaster tolerance method and device based on main cluster server and standby cluster server
CN114265713A (en) * 2021-12-15 2022-04-01 阿里巴巴(中国)有限公司 RDMA event management method, device, computer equipment and storage medium
CN114721995A (en) * 2022-04-01 2022-07-08 上海上讯信息技术股份有限公司 Data transmission method applied to virtual database and RDMA-based database virtualization method

Patent Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120233122A1 (en) * 2011-03-10 2012-09-13 Amadeus S.A.S System and method for session synchronization with independent external systems
US20150254100A1 (en) * 2014-03-10 2015-09-10 Riverscale Ltd Software Enabled Network Storage Accelerator (SENSA) - Storage Virtualization Offload Engine (SVOE)
US20160139849A1 (en) * 2014-11-13 2016-05-19 Violin Memory, Inc. Non-volatile buffering for deduplicaton
US20170034270A1 (en) * 2015-07-31 2017-02-02 Netapp, Inc. Methods and systems for efficiently moving data between nodes in a cluster
CN106933701A (en) * 2015-12-30 2017-07-07 伊姆西公司 For the method and apparatus of data backup
US20170235702A1 (en) * 2016-02-17 2017-08-17 International Business Machines Corporation Remote direct memory access-based method of transferring arrays of objects including garbage data
CN106874150A (en) * 2017-02-28 2017-06-20 郑州云海信息技术有限公司 A kind of virtual machine High Availabitity disaster recovery method and its system
CN107454171A (en) * 2017-08-10 2017-12-08 深圳前海微众银行股份有限公司 Message service system and its implementation
US20190227880A1 (en) * 2018-01-24 2019-07-25 International Business Machines Corporation Automated and distributed backup of sensor data
US20200167251A1 (en) * 2018-11-27 2020-05-28 International Business Machines Corporation Storage system management
US10649862B1 (en) * 2018-12-18 2020-05-12 International Business Machines Corporation Reducing failback performance duration in data replication systems
CN110113420A (en) * 2019-05-08 2019-08-09 重庆大学 Distributed Message Queue management system based on NVM
CN113535480A (en) * 2021-07-16 2021-10-22 深圳华锐金融技术股份有限公司 Data disaster recovery system and method
CN114090349A (en) * 2021-11-18 2022-02-25 广州新科佳都科技有限公司 Cross-regional service disaster tolerance method and device based on main cluster server and standby cluster server
CN114265713A (en) * 2021-12-15 2022-04-01 阿里巴巴(中国)有限公司 RDMA event management method, device, computer equipment and storage medium
CN114721995A (en) * 2022-04-01 2022-07-08 上海上讯信息技术股份有限公司 Data transmission method applied to virtual database and RDMA-based database virtualization method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Liu Lu et al., "Design of a Reliable RDMA Transmission Protocol Based on Dynamic Connections", Computer Engineering and Science *
Xiao Chao'en et al., "Research Progress on the Rowhammer Vulnerability", Journal of Beijing Electronic Science and Technology Institute *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116225789A (en) * 2023-05-09 2023-06-06 深圳华锐分布式技术股份有限公司 Transaction system backup capability detection method, device, equipment and medium
CN116225789B (en) * 2023-05-09 2023-08-11 深圳华锐分布式技术股份有限公司 Transaction system backup capability detection method, device, equipment and medium
CN116760835A (en) * 2023-08-15 2023-09-15 深圳华锐分布式技术股份有限公司 Distributed storage method, device and medium
CN116760835B (en) * 2023-08-15 2023-10-20 深圳华锐分布式技术股份有限公司 Distributed storage method, device and medium

Also Published As

Publication number Publication date
CN115118738B (en) 2022-11-22

Similar Documents

Publication Publication Date Title
CN115118738B (en) Disaster recovery method, device, equipment and medium based on RDMA
CN111538573A (en) Asynchronous task processing method and device and computer readable storage medium
CN115617403A (en) Clearing task execution method, device, equipment and medium based on task segmentation
CN114691050B (en) Cloud native storage method, device, equipment and medium based on kubernets
CN114816820A (en) Method, device, equipment and storage medium for repairing chproxy cluster fault
CN114124968B (en) Load balancing method, device, equipment and medium based on market data
CN112328677B (en) Lost data recovery method, device, equipment and medium based on table association
CN114675976B (en) GPU (graphics processing Unit) sharing method, device, equipment and medium based on kubernets
CN115687384A (en) UUID (user identifier) identification generation method, device, equipment and storage medium
CN114741422A (en) Query request method, device, equipment and medium
CN114371962A (en) Data acquisition method and device, electronic equipment and storage medium
CN114547011A (en) Data extraction method and device, electronic equipment and storage medium
CN115277376B (en) Disaster recovery switching method, device, equipment and medium
CN113687834B (en) Distributed system node deployment method, device, equipment and medium
CN115065642B (en) Code table request method, device, equipment and medium under bandwidth limitation
CN116860508B (en) Distributed system software defect continuous self-healing method, device, equipment and medium
CN114860349B (en) Data loading method, device, equipment and medium
CN115543214B (en) Data storage method, device, equipment and medium in low-delay scene
CN116225789B (en) Transaction system backup capability detection method, device, equipment and medium
CN115174698B (en) Market data decoding method, device, equipment and medium based on table entry index
CN116418896B (en) Task execution method, device, equipment and medium based on timer
CN117851520A (en) Data synchronization method, system, equipment and medium of securities core transaction engine
CN115934576B (en) Test case generation method, device, equipment and medium in transaction scene
CN114116427A (en) Abnormal log writing method, device, equipment and medium
CN114139199A (en) Data desensitization method, apparatus, device and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant