CN115134217A - Data processing method, device, equipment and storage medium - Google Patents

Data processing method, device, equipment and storage medium

Info

Publication number
CN115134217A
CN115134217A
Authority
CN
China
Prior art keywords
cluster
message queue
disaster recovery
request
main
Prior art date
Legal status
Pending
Application number
CN202210696715.XA
Other languages
Chinese (zh)
Inventor
易锋 (Yi Feng)
贺姜 (He Jiang)
Current Assignee
Zhengcaiyun Co ltd
Original Assignee
Zhengcaiyun Co ltd
Priority date
Filing date
Publication date
Application filed by Zhengcaiyun Co ltd
Priority to CN202210696715.XA
Publication of CN115134217A

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00: Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/06: Management of faults, events, alarms or notifications
    • H04L 41/0654: Management of faults, events, alarms or notifications using network fault recovery
    • H04L 41/0663: Performing the actions predefined by failover planning, e.g. switching to standby network elements
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00: Network arrangements or protocols for supporting network services or applications
    • H04L 67/01: Protocols
    • H04L 67/10: Protocols in which an application is distributed across nodes in the network

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Hardware Redundancy (AREA)

Abstract

The application discloses a data processing method, apparatus, device and storage medium, relating to the technical field of data management. The method comprises the following steps: acquiring request traffic for service access when a cluster provides services externally, and judging the current working mode of the cluster; when the cluster is in a first working mode in which both the main cluster and the disaster recovery cluster are available, forwarding the request traffic to the main cluster and to the disaster recovery cluster message queue corresponding to the disaster recovery cluster; when the cluster is in a second working mode in which the main cluster is unavailable and the disaster recovery cluster is available, judging the request type of the request traffic and then determining the forwarding path of the request traffic, so as to process the request traffic based on the forwarding path; and when the cluster is in a third working mode in which the main cluster has recovered from unavailable to available, judging whether message accumulation exists in the main cluster message queue corresponding to the main cluster, and processing the request traffic according to the judgment result. With the method and apparatus of the application, the consistency of the data in the main cluster and the disaster recovery cluster can be guaranteed, and the stability of external services can be ensured.

Description

Data processing method, device, equipment and storage medium
Technical Field
The present invention relates to the field of data management technologies, and in particular, to a data processing method, apparatus, device, and storage medium.
Background
In internet systems, because externally provided services have high reliability requirements, a main cluster and a disaster recovery cluster are usually designed. When a problem occurs in the main cluster, the system switches to the disaster recovery cluster in time so that external services are not interrupted, and the two clusters cooperate to provide services to the outside. Because both the main cluster and the disaster recovery cluster serve external traffic, the data stored in the two clusters must be consistent, otherwise problems arise in the upstream business logic. For example, in an e-commerce system a user places an order to purchase goods. When the main cluster becomes unavailable and the system switches to the disaster recovery cluster, the user's data, such as purchase records and payment records, must be consistent with the main cluster, otherwise a series of problems such as complaints will follow. Similarly, suppose a piece of logistics information (for example, that the shipment has reached a transit point) is recorded against the logistics order number of that purchase order; if the main cluster then recovers and the system switches back from the disaster recovery cluster to the main cluster, the logistics information must be consistent with the disaster recovery cluster, otherwise problems such as loss of logistics information arise.
In the prior art, as shown in fig. 1, the system consists of a main cluster and a disaster recovery cluster. In the normal working mode, that is, when the main cluster is available, request traffic is forwarded to the main cluster; in the failover working mode, that is, when the main cluster is unavailable, request traffic is forwarded to the disaster recovery cluster; in the recovery working mode, that is, when the main cluster changes from unavailable back to available, request traffic is again forwarded to the main cluster. Against this background, this solution has the following drawbacks: 1. In the normal working mode, the main cluster and the disaster recovery cluster do not achieve data consistency: request traffic data is written into the main cluster but not into the disaster recovery cluster, so when the main cluster becomes unavailable and the system switches to the disaster recovery cluster, the same request returns different data, which causes inconsistency, produces dirty data and affects business logic. 2. In the failover working mode, the request traffic data is written into the disaster recovery cluster, but there is no mechanism to write it into the main cluster, so when the main cluster becomes available again the data returned for the same request differs, again causing inconsistency, dirty data and broken business logic. 3. In the recovery working mode, how to handle the data that was not written into the main cluster while it was unavailable is also a major challenge.
In summary, the problem to be solved is how to guarantee data consistency in the normal working mode, the failover working mode and the recovery working mode, and how to achieve seamless, automatic switching between the different working modes, so as to ensure the stability of external services.
Disclosure of Invention
In view of the above, an object of the present invention is to provide a data processing method, apparatus, device and storage medium that solve the data consistency problem in the normal working mode, the failover working mode and the recovery working mode, and implement seamless, automatic switching between the different working modes, thereby ensuring the stability of external services. The specific scheme is as follows:
in a first aspect, the present application discloses a data processing method, including:
acquiring request traffic for service access when a cluster provides services externally, and judging the current working mode of the cluster;
when the working mode is a first working mode in which both a main cluster and a disaster recovery cluster are in an available state, forwarding the request traffic to the main cluster and to a disaster recovery cluster message queue corresponding to the disaster recovery cluster, so as to process the request traffic based on the main cluster and the disaster recovery cluster message queue;
when the working mode is a second working mode in which the main cluster is in an unavailable state and the disaster recovery cluster is in an available state, judging the request type of the request traffic, and then determining a forwarding path of the request traffic according to the request type, so as to process the request traffic based on the forwarding path;
and when the working mode is a third working mode in which the main cluster has recovered from the unavailable state to the available state, judging whether message accumulation exists in a main cluster message queue corresponding to the main cluster, and processing the request traffic according to the judgment result.
Optionally, when the working mode is a first working mode in which both the main cluster and the disaster recovery cluster are in an available state, the forwarding the request traffic to the main cluster and to a disaster recovery cluster message queue corresponding to the disaster recovery cluster, so as to process the request traffic based on the main cluster and the disaster recovery cluster message queue, includes:
when the working mode is a first working mode in which both the main cluster and the disaster recovery cluster are in an available state, forwarding the request traffic to the main cluster so that the main cluster can process the request traffic;
and forwarding the request flow to a disaster recovery cluster message queue corresponding to the disaster recovery cluster, and monitoring the disaster recovery cluster message queue through the disaster recovery cluster so that the disaster recovery cluster asynchronously consumes the disaster recovery cluster message queue.
Optionally, the data processing method further includes:
and sequentially consuming the messages of the main cluster message queue and the disaster-tolerant cluster message queue through a sequential consumption strategy in a Rockettq or kafka.
Optionally, the determining a request type of the requested traffic, and then determining a forwarding path of the requested traffic according to the request type, so as to process the requested traffic based on the forwarding path, includes:
if the request type is read request traffic, forwarding the request traffic to the disaster recovery cluster so that the disaster recovery cluster can process the request traffic;
if the request type is write request traffic, forwarding the request traffic to the main cluster message queue, and monitoring the main cluster message queue through the disaster recovery cluster message queue, so that the main cluster consumes the main cluster message queue until it is empty and the disaster recovery cluster asynchronously consumes the disaster recovery cluster message queue, stopping consumption when the disaster recovery cluster message queue is empty.
Optionally, the determining whether message accumulation exists in a main cluster message queue corresponding to the main cluster, and processing the request traffic according to the determination result, includes:
if the main cluster message queue has message accumulation, monitoring the main cluster message queue through the main cluster, and consuming the messages in the main cluster message queue in sequence until the main cluster message queue is empty, and stopping consumption;
and executing the step of forwarding the request traffic to the main cluster and to the disaster recovery cluster message queue corresponding to the disaster recovery cluster, so as to process the request traffic based on the main cluster and the disaster recovery cluster message queue;
and if the main cluster message queue does not have message accumulation, directly executing the step of forwarding the request traffic to the main cluster and the disaster recovery cluster message queue corresponding to the disaster recovery cluster so as to process the request traffic based on the main cluster and the disaster recovery cluster message queue.
Optionally, the data processing method further includes:
correspondingly dividing the main cluster and the disaster recovery cluster into a plurality of consumer groups;
and monitoring different partitions in corresponding cluster message queues by using different consumer nodes in the consumer group, and performing batch consumption on messages in the partitions.
Optionally, after the batch consumption is performed on the messages in the partition, the method further includes:
and when the consumption of a batch of messages is finished, sending a signal ACK of the finished consumption to the server of the corresponding cluster message queue.
In a second aspect, the present application discloses a data processing apparatus comprising:
a request flow obtaining module, configured to obtain a request flow related to service access when a cluster provides an external service;
the working mode judging module is used for judging the working mode of the current cluster;
a first working mode module, configured to forward the request traffic to the main cluster and to a disaster recovery cluster message queue corresponding to the disaster recovery cluster when the working mode is a first working mode in which both the main cluster and the disaster recovery cluster are in an available state, so as to process the request traffic based on the main cluster and the disaster recovery cluster message queue;
a second working mode module, configured to, when the working mode is a second working mode in which the main cluster is in an unavailable state and the disaster recovery cluster is in an available state, determine the request type of the request traffic, and then determine a forwarding path of the request traffic according to the request type, so as to process the request traffic based on the forwarding path;
and a third working mode module, configured to, when the working mode is a third working mode in which the main cluster has recovered from the unavailable state to the available state, determine whether message accumulation exists in a main cluster message queue corresponding to the main cluster, and process the request traffic according to the determination result.
In a third aspect, the present application discloses an electronic device comprising a processor and a memory; wherein the memory is used for storing a computer program which is loaded and executed by the processor to implement the data processing method as described above.
In a fourth aspect, the present application discloses a computer readable storage medium for storing a computer program; wherein the computer program realizes the data processing method as described before when executed by a processor.
In the method, request traffic for service access when a cluster provides services externally is acquired, and the current working mode of the cluster is judged; when the working mode is a first working mode in which both the main cluster and the disaster recovery cluster are in an available state, the request traffic is forwarded to the main cluster and to the disaster recovery cluster message queue corresponding to the disaster recovery cluster, so as to process the request traffic based on the main cluster and the disaster recovery cluster message queue; when the working mode is a second working mode in which the main cluster is in an unavailable state and the disaster recovery cluster is in an available state, the request type of the request traffic is judged and the forwarding path of the request traffic is then determined according to the request type, so as to process the request traffic based on the forwarding path; and when the working mode is a third working mode in which the main cluster has recovered from the unavailable state to the available state, whether message accumulation exists in the main cluster message queue corresponding to the main cluster is judged, and the request traffic is processed according to the judgment result. The present application is thus mainly used for management of the main cluster and the disaster recovery cluster. In the normal working mode, that is, the first working mode in which both the main cluster and the disaster recovery cluster are available, the request traffic is forwarded to the main cluster and to the disaster recovery cluster message queue at the same time, and the disaster recovery cluster consumes the disaster recovery cluster message queue, which solves the data consistency problem between the main cluster and the disaster recovery cluster without increasing the performance pressure on the disaster recovery cluster. In the failover working mode, that is, the second working mode in which the main cluster is unavailable and the disaster recovery cluster is available, different forwarding paths of the request traffic are determined according to the request type, which solves the problem of how request traffic is routed to the disaster recovery cluster in this mode, so that the request traffic can be answered normally while the data is also stored for the main cluster in time, achieving data consistency between the main cluster and the disaster recovery cluster. In the recovery working mode, that is, the third working mode in which the main cluster has recovered from the unavailable state to the available state, the request traffic is processed after judging whether message accumulation exists in the main cluster message queue, so that after recovery the data consistency of the main cluster and the disaster recovery cluster can be ensured and the system can switch back smoothly to the normal working mode. In addition, because the messages are processed through the main cluster message queue and the disaster recovery cluster message queue, the performance requirements on the corresponding clusters are not high; during traffic peaks the request traffic is stored in the corresponding message queue, which achieves a peak-shaving effect; and by monitoring the current cluster state, the system performs seamless automatic switching, effectively guaranteeing system stability and the stability of external services.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the provided drawings without creative efforts.
Fig. 1 is a schematic diagram illustrating a management scheme of a master cluster and a disaster recovery cluster in the prior art;
FIG. 2 is a flow chart of a data processing method disclosed herein;
FIG. 3 is a schematic diagram of an overall data processing method disclosed herein;
FIG. 4 is a flow chart of a specific data processing method in a first operating mode disclosed herein;
FIG. 5 is a schematic diagram of a data processing method in a first operating mode according to the present disclosure;
FIG. 6 is a flow chart of a data processing method in a second specific operating mode disclosed herein;
FIG. 7 is a schematic diagram of a data processing method in a second operating mode according to the present disclosure;
FIG. 8 is a flow chart of a data processing method in a third specific operating mode disclosed herein;
FIG. 9 is a schematic diagram of a data processing method in a third operating mode according to the present disclosure;
FIG. 10 is a block diagram of a data processing apparatus according to the present disclosure;
fig. 11 is a block diagram of an electronic device disclosed in the present application.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Currently, when a main cluster and a disaster recovery cluster provide services to the outside, data inconsistency may produce dirty data and affect business logic, and there is no solution for handling the data that was not written into the main cluster during the period when the main cluster was unavailable.
Therefore, the present application provides a data processing scheme that solves the data consistency problem in the normal working mode, the failover working mode and the recovery working mode, and achieves seamless, automatic switching between the different working modes, thereby guaranteeing the stability of external services.
The embodiment of the invention discloses a data processing method, which is shown in figure 2 and comprises the following steps:
step S11: and acquiring request traffic related to service access when the cluster provides services to the outside, and judging the current working mode of the cluster.
In the embodiment of the application, since the cluster provides the service to the outside, when there is a need to access the service provided by the cluster, the request traffic is generated, and at this time, how to process the request traffic is further determined according to the working mode of the current cluster.
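To make the mode-based routing described in steps S11 to S14 concrete, the following is a minimal, illustrative sketch of how the cluster disaster recovery management module could dispatch request traffic. Every class and method name here (TrafficRouter, the monitoring stubs, forwardToDrQueue and so on) is a hypothetical placeholder rather than an API defined by this application.

```java
// Hypothetical routing sketch; stubs stand in for cluster monitoring and MQ access.
public class TrafficRouter {

    enum WorkingMode { NORMAL, FAILOVER, RECOVERY }

    // assumed request abstraction: only the read/write distinction matters for routing
    static class Request {
        final boolean read;
        final String payload;
        Request(boolean read, String payload) { this.read = read; this.payload = payload; }
    }

    WorkingMode currentMode() { return WorkingMode.NORMAL; }          // stub: cluster health probe
    boolean primaryQueueHasBacklog() { return false; }                 // stub: MQ accumulation check
    void forwardToPrimaryCluster(Request r) { /* write/read against the main cluster */ }
    void forwardToDrCluster(Request r)      { /* read against the disaster recovery cluster */ }
    void forwardToPrimaryQueue(Request r)   { /* produce to the main cluster MQ topic */ }
    void forwardToDrQueue(Request r)        { /* produce to the DR cluster MQ topic */ }
    void drainPrimaryQueue()                { /* main cluster consumes its queue until empty */ }

    public void route(Request request) {
        switch (currentMode()) {
            case NORMAL:    // first mode: both clusters available
                forwardToPrimaryCluster(request);
                forwardToDrQueue(request);
                break;
            case FAILOVER:  // second mode: main cluster unavailable, DR cluster available
                if (request.read) {
                    forwardToDrCluster(request);
                } else {
                    forwardToPrimaryQueue(request);   // DR queue later relays from this queue
                }
                break;
            case RECOVERY:  // third mode: main cluster restored to available
                if (primaryQueueHasBacklog()) {
                    drainPrimaryQueue();              // consume leftover messages first
                }
                forwardToPrimaryCluster(request);     // then behave as in the first mode
                forwardToDrQueue(request);
                break;
        }
    }
}
```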
Step S12: when the working mode is a first working mode in which both the main cluster and the disaster recovery cluster are in an available state, forwarding the request traffic to the main cluster and to a disaster recovery cluster message queue corresponding to the disaster recovery cluster, so as to process the request traffic based on the main cluster and the disaster recovery cluster message queue.
In the embodiment of the application, if the cluster is in the normal working mode, that is, both the main cluster and the disaster recovery cluster are available, the request traffic enters the cluster. Because the current working mode is normal and the main cluster is available, the main cluster Message Queue (MQ) corresponding to the main cluster is empty. In this case the request traffic is forwarded to the main cluster, which processes it and feeds back data according to the request, so processing of the request traffic is finished. At the same time, the request traffic is also forwarded to the disaster recovery cluster MQ message queue: the request traffic is converted into an MQ message, the sender of the request acts as the producer of the message queue, and an MQ message for the corresponding topic is sent to the queue according to the information in the request parameters.
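As an illustrative sketch of converting request traffic into an MQ message for the disaster recovery cluster message queue, the snippet below assumes Kafka is used as the MQ; the broker address, the topic name "dr-cluster-mq" and the use of the order number as the record key are assumptions made for the example only.

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import java.util.Properties;

public class DrQueueProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "mq-broker:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            String orderNo = "SO-20220620-0001";   // business-domain key taken from the request parameters
            String body = "{\"orderNo\":\"" + orderNo + "\",\"op\":\"update_logistics\"}";
            // Records sharing a key land in the same partition, so writes for one order stay ordered.
            producer.send(new ProducerRecord<>("dr-cluster-mq", orderNo, body));
        }
    }
}
```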
It should be noted that, during message queue consumption, in order to maximize the consumption speed, the consumers are deployed as multiple nodes in the same consumer group, each consumer node in the group monitors one partition of the queue, and a batch consumption mode with multi-threaded message consumption is used; after a batch of messages has been consumed, a single ACK is sent to the corresponding message queue server. Specifically, the main cluster and the disaster recovery cluster are each given a corresponding consumer group; different consumer nodes in a consumer group monitor different partitions of the corresponding cluster message queue and consume the messages in their partitions in batches. When a batch of messages has been consumed, a consumption-completed ACK signal is sent to the server of the corresponding cluster message queue.
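A sketch of this consumption pattern, again assuming Kafka, is shown below: all consumer nodes share one consumer group so each node is assigned a subset of partitions, a polled batch is processed, and a single acknowledgement (commitSync) is sent per batch. The topic, group id and the stubbed indexing call are assumptions for illustration.

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class DrClusterConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "mq-broker:9092");
        props.put("group.id", "dr-cluster-consumers");   // all DR consumer nodes share this group
        props.put("enable.auto.commit", "false");         // ACK manually after each batch
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("dr-cluster-mq"));
            while (true) {
                ConsumerRecords<String, String> batch = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : batch) {
                    // apply the write to the disaster recovery cluster (stub)
                    // e.g. bulkIndex(record.key(), record.value());
                }
                if (!batch.isEmpty()) {
                    consumer.commitSync();                 // the per-batch ACK to the MQ server
                }
            }
        }
    }
}
```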
Step S13: when the working mode is a second working mode in which the main cluster is in an unavailable state and the disaster recovery cluster is in an available state, judging the request type of the request traffic, and then determining the forwarding path of the request traffic according to the request type, so as to process the request traffic based on the forwarding path.
In the embodiment of the application, if the cluster is in the failover working mode, that is, the main cluster is unavailable and the disaster recovery cluster is available, the request traffic enters the cluster. Because the main cluster is in the unavailable state, the system determines whether the request traffic is a read request or a write request. It can be understood that read request traffic does not involve writing data, so it only needs to be processed by the disaster recovery cluster, and data consistency is not harmed. Therefore, after the request type of the request traffic is judged, different forwarding paths are chosen according to the request type, so that data consistency is maintained for each request type.
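The application does not specify how read and write requests are distinguished; one plausible rule for HTTP-style traffic, used purely as an illustrative assumption, is sketched below.

```java
// Hypothetical classifier: GET/HEAD treated as reads, everything else as writes.
public final class RequestClassifier {
    private RequestClassifier() {}

    public static boolean isReadRequest(String httpMethod) {
        switch (httpMethod.toUpperCase()) {
            case "GET":
            case "HEAD":
                return true;   // no data written, safe to serve from the DR cluster alone
            default:
                return false;  // POST/PUT/DELETE etc. must go through the main cluster MQ queue
        }
    }
}
```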
Step S14: when the working mode is a third working mode in which the main cluster has recovered from the unavailable state to the available state, judging whether message accumulation exists in the main cluster message queue corresponding to the main cluster, and processing the request traffic according to the judgment result.
In the embodiment of the application, if the cluster is in the recovery working mode, that is, the main cluster has recovered from the unavailable state to the available state, the request traffic enters the cluster. Because the main cluster is now detected as available, the system checks whether message accumulation exists in the main cluster MQ message queue. If the queue has accumulated messages, the messages generated by the system in the failover working mode have not all been consumed, the data has not been fully processed, and the data of the main cluster and the disaster recovery cluster is not yet consistent; if there is no accumulation, all messages in the main cluster MQ message queue have been consumed and the system enters the normal working mode.
Fig. 3 is a schematic diagram of the framework of the overall solution of the present application. The disaster recovery system comprises a cluster disaster recovery management module, a main cluster, a disaster recovery cluster, a main cluster MQ message queue and a disaster recovery cluster MQ message queue.
1. Cluster disaster recovery management module. Its main functional sub-modules include main cluster availability monitoring, disaster recovery cluster availability monitoring, MQ message queue monitoring, traffic monitoring and the like. Its main functions are to monitor the availability of the main cluster and the disaster recovery cluster, to switch and manage the traffic seamlessly and automatically according to the cluster states, and to forward the request traffic to the MQ message queues, thereby ensuring the stability of external services.
2. Main cluster and disaster recovery cluster. These are the management objects of the system and store the service-related data. Under normal circumstances an ordinary ES (Elasticsearch) cluster suffices; it is only necessary to ensure that the versions of the main and standby clusters are consistent. The main cluster and the disaster recovery cluster need no additional modification and can be used directly without extra configuration.
3. Main cluster MQ message queue. a. The message producer converts the information carried by the request traffic into MQ messages in request order and stores them in sequence. Traffic enters this queue in two cases: when the main cluster is available but the queue still has message accumulation, the request traffic is forwarded in; and when the main cluster is unavailable, the non-read (write) request traffic is forwarded in. b. There are two consumers of these messages: the main cluster and the disaster recovery cluster MQ message queue. The main cluster monitors the queue and consumes its messages one by one in queue order until the queue is empty, after which it stops consuming. The disaster recovery cluster MQ message queue also monitors this queue and asynchronously consumes its messages in queue order until the queue is empty, after which it stops consuming.
4. Disaster recovery cluster MQ message queue. a. This queue has two producers: the information carried by the request traffic and the main cluster MQ message queue. The information carried by the request traffic is converted into MQ messages in request order and enters the queue in sequence; the disaster recovery cluster MQ message queue also monitors the main cluster MQ message queue and asynchronously consumes its messages in queue order until that queue is empty. b. The message consumer is the disaster recovery cluster, which monitors this queue and consumes its messages in queue order until the queue is empty, after which it stops consuming.
Therefore, the method and the system are mainly used for management of the main cluster and the disaster recovery cluster. In the normal working mode, that is, the first working mode in which both the main cluster and the disaster recovery cluster are available, the request traffic is forwarded to the main cluster and to the disaster recovery cluster message queue at the same time, and the disaster recovery cluster consumes the disaster recovery cluster message queue, which solves the data consistency problem between the main cluster and the disaster recovery cluster without increasing the performance pressure on the disaster recovery cluster. In the failover working mode, that is, the second working mode in which the main cluster is unavailable and the disaster recovery cluster is available, different forwarding paths of the request traffic are determined according to the request type, which solves the problem of how request traffic is routed to the disaster recovery cluster in this mode, so that the request traffic can be answered normally while the data is also stored for the main cluster in time, achieving data consistency between the main cluster and the disaster recovery cluster. In the recovery working mode, that is, the third working mode in which the main cluster has recovered from the unavailable state to the available state, the request traffic is processed after judging whether message accumulation exists in the main cluster message queue, so that after recovery the data consistency of the main cluster and the disaster recovery cluster can be ensured and the system can switch back smoothly to the normal working mode. In addition, because the messages are processed through the main cluster message queue and the disaster recovery cluster message queue, the performance requirements on the corresponding clusters are not high; during traffic peaks the request traffic is stored in the corresponding message queue, which achieves a peak-shaving effect; and by monitoring the current cluster state, the system performs seamless automatic switching, effectively guaranteeing system stability and the stability of external services.
The embodiment of the present application discloses a specific data processing method in a first working mode, and as shown in fig. 4, the method includes:
step S21: and when the working mode is a first working mode in which the main cluster and the disaster recovery cluster are both in an available state, forwarding the request traffic to the main cluster so that the main cluster can process the request traffic.
In the embodiment of the application, because the main cluster is in the available state, the main cluster MQ message queue is empty at this time. The request traffic is forwarded to the main cluster, and the main cluster processes it and feeds back data according to the request, so processing of the request traffic is finished.
Step S22: and forwarding the request traffic to a disaster recovery cluster message queue corresponding to the disaster recovery cluster, and monitoring the disaster recovery cluster message queue through the disaster recovery cluster so that the disaster recovery cluster asynchronously consumes the disaster recovery cluster message queue.
In the embodiment of the application, the request traffic is also forwarded to the disaster recovery cluster MQ message queue. The request traffic is converted into an MQ message: the sender of the request acts as the producer of the message queue and sends an MQ message for the corresponding topic to the queue according to the information in the request parameters. The disaster recovery cluster monitors the queue and asynchronously consumes its messages in message order until the queue is empty, after which no further consumption takes place.
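The following is an illustrative sketch of the disaster recovery cluster listening to its message queue and consuming messages strictly in order, assuming RocketMQ is used; the consumer group, topic and name-server address are assumptions for the example.

```java
import org.apache.rocketmq.client.consumer.DefaultMQPushConsumer;
import org.apache.rocketmq.client.consumer.listener.ConsumeOrderlyStatus;
import org.apache.rocketmq.client.consumer.listener.MessageListenerOrderly;

public class DrQueueOrderlyConsumer {
    public static void main(String[] args) throws Exception {
        DefaultMQPushConsumer consumer = new DefaultMQPushConsumer("dr-cluster-consumers");
        consumer.setNamesrvAddr("mq-nameserver:9876");
        consumer.subscribe("DR_CLUSTER_MQ", "*");
        // MessageListenerOrderly processes each queue's messages strictly in order,
        // matching the "consume according to the message order" behaviour described above.
        consumer.registerMessageListener((MessageListenerOrderly) (msgs, context) -> {
            msgs.forEach(msg -> {
                // apply the write to the disaster recovery cluster (stub)
                // e.g. indexIntoDrCluster(new String(msg.getBody()));
            });
            return ConsumeOrderlyStatus.SUCCESS;
        });
        consumer.start();
    }
}
```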
Fig. 5 is a schematic view of the processing framework in the first working mode in this embodiment. When request traffic enters the cluster, the main cluster is in the available state and there is no message accumulation in the main cluster MQ message queue, so the request traffic is forwarded to the main cluster and to the disaster recovery cluster MQ message queue, and the disaster recovery cluster consumes the messages in the disaster recovery cluster MQ message queue. It can be seen from this process that the request traffic is forwarded to the main cluster and to the disaster recovery cluster at the same time, which ensures the data consistency of the main cluster and the disaster recovery cluster.
In this embodiment of the present application, when the working mode is the first working mode in which both the main cluster and the disaster recovery cluster are in an available state, the request traffic is forwarded to the main cluster so that the main cluster processes it, and the request traffic is also forwarded to the disaster recovery cluster message queue corresponding to the disaster recovery cluster, which is monitored by the disaster recovery cluster so that the disaster recovery cluster consumes it asynchronously. In the normal working mode, with the main cluster available, the request traffic is therefore synchronized to the disaster recovery cluster through the disaster recovery cluster MQ message queue, which solves the data consistency problem between the main cluster and the disaster recovery cluster without increasing the performance pressure on the disaster recovery cluster. At the same time, because the disaster recovery cluster consumes messages asynchronously, the performance requirement on the disaster recovery cluster is not high. During traffic peaks, the traffic is stored in the MQ queue, which achieves a peak-shaving effect and effectively guarantees system stability.
The embodiment of the present application discloses a specific data processing method in the second operating mode, as shown in fig. 6, the method includes:
step S31: and when the working mode is a second working mode that the main cluster is in an unavailable state and the disaster recovery cluster is in an available state, judging the request type of the request flow.
In the embodiment of the present application, there are two cases for the request type of the request traffic: read request traffic and write request traffic. Therefore, when the main cluster is in the unavailable state, in order to ensure that the cluster can continue to provide services externally, the request type of the request traffic must be judged first, and the forwarding path of the request traffic is then determined according to the request type.
Step S32: and if the request type is read request traffic, forwarding the request traffic to the disaster recovery cluster so that the disaster recovery cluster can process the request traffic.
In the embodiment of the application, under the condition of reading the request traffic, the request traffic is forwarded to the disaster recovery cluster, and the disaster recovery cluster feeds back data and the like according to the read request traffic, so that the request traffic is processed. In this case, since the traffic is read request traffic, data writing is not involved, and although only the disaster recovery cluster performs processing, the data of the master cluster and the disaster recovery cluster are consistent.
Step S33: if the request type is write request traffic, forwarding the request traffic to the main cluster message queue, and monitoring the main cluster message queue through the disaster recovery cluster message queue, so that the main cluster consumes the main cluster message queue until it is empty and the disaster recovery cluster asynchronously consumes the disaster recovery cluster message queue, stopping consumption when the disaster recovery cluster message queue is empty.
In the embodiment of the application, in the case of write request traffic, the request traffic is forwarded to the main cluster MQ message queue. The main cluster monitors the main cluster MQ message queue and consumes its messages in queue order until the queue is empty, after which the main cluster stops consuming. The disaster recovery cluster MQ message queue monitors the messages in the main cluster MQ message queue and consumes them asynchronously until the main cluster MQ message queue is empty; the disaster recovery cluster then consumes the messages in the disaster recovery cluster MQ message queue in message order until that queue is empty, after which it stops consuming.
In the embodiment of the application, the messages in the main cluster message queue and the disaster recovery cluster message queue are consumed sequentially through an ordered consumption strategy of RocketMQ or Kafka. It should be noted that, because the main cluster MQ message queue needs to guarantee the ordering of messages belonging to the same service domain (e.g., the same order), either RocketMQ or Kafka can be selected. With RocketMQ, ordered messages can be used: the unique identifier of the service domain, such as the order number, is used as the shardingKey parameter, and the ordered-partition feature of RocketMQ guarantees the ordering of messages for each service entity. With Kafka, the order-preserving policy of the producer partitioner is used on the producer side: a key is defined for each message, which can be the unique identifier of the service domain (such as the order number); under this policy Kafka guarantees that messages with the same key enter the same partition, and Kafka guarantees ordering within a partition, so ordered processing of the messages is achieved using this Kafka feature.
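A minimal sketch of the RocketMQ option described above is given below: the order number is used as the sharding key so that all messages of one business entity are routed to the same queue and therefore stay ordered. The topic, producer group and address names are assumptions for the example.

```java
import org.apache.rocketmq.client.producer.DefaultMQProducer;
import org.apache.rocketmq.client.producer.MessageQueueSelector;
import org.apache.rocketmq.client.producer.SendResult;
import org.apache.rocketmq.common.message.Message;
import org.apache.rocketmq.common.message.MessageQueue;
import java.util.List;

public class OrderedPrimaryQueueProducer {
    public static void main(String[] args) throws Exception {
        DefaultMQProducer producer = new DefaultMQProducer("primary-queue-producers");
        producer.setNamesrvAddr("mq-nameserver:9876");
        producer.start();

        String shardingKey = "SO-20220620-0001";   // unique business-domain id, e.g. the order number
        Message msg = new Message("PRIMARY_CLUSTER_MQ", "write",
                "{\"orderNo\":\"SO-20220620-0001\",\"op\":\"update_logistics\"}".getBytes());

        // Messages with the same sharding key always select the same queue, and RocketMQ
        // preserves ordering within a queue, so writes for one order are applied in order.
        SendResult result = producer.send(msg, new MessageQueueSelector() {
            @Override
            public MessageQueue select(List<MessageQueue> mqs, Message m, Object arg) {
                int index = Math.abs(arg.hashCode()) % mqs.size();
                return mqs.get(index);
            }
        }, shardingKey);

        System.out.println("sent to " + result.getMessageQueue());
        producer.shutdown();
    }
}
```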
Fig. 7 is a schematic view of the processing framework in the second working mode in this embodiment. The request traffic enters the cluster while the main cluster is in the unavailable state, so the system further judges whether the request traffic is read request traffic. If it is read request traffic, the request traffic is forwarded to the disaster recovery cluster, which feeds back data according to the read request, and processing of the request traffic ends. If it is not read request traffic, it is write request traffic: the request traffic is sent to the main cluster MQ message queue, and the main cluster monitors the main cluster MQ message queue and consumes its messages in queue order until the queue is empty, after which the main cluster stops consuming. When the main cluster MQ message queue is empty, the disaster recovery cluster consumes the messages in the disaster recovery cluster MQ message queue in message order until that queue is empty, after which it stops consuming.
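The description above says that the disaster recovery cluster MQ message queue monitors the main cluster MQ message queue; one plausible realization, assumed here with Kafka and illustrative topic and group names, is a small relay that consumes the main cluster queue and republishes each message, with its key, into the disaster recovery cluster queue so that both queues carry the same ordered write stream.

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class PrimaryToDrRelay {
    public static void main(String[] args) {
        Properties consumerProps = new Properties();
        consumerProps.put("bootstrap.servers", "mq-broker:9092");
        consumerProps.put("group.id", "primary-to-dr-relay");
        consumerProps.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        consumerProps.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        Properties producerProps = new Properties();
        producerProps.put("bootstrap.servers", "mq-broker:9092");
        producerProps.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        producerProps.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps);
             KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps)) {
            consumer.subscribe(Collections.singletonList("primary-cluster-mq"));
            while (true) {
                for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofMillis(500))) {
                    // re-key with the same business id so ordering is preserved in the DR queue
                    producer.send(new ProducerRecord<>("dr-cluster-mq", record.key(), record.value()));
                }
            }
        }
    }
}
```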
In this embodiment of the present application, when the working mode is the second working mode in which the main cluster is in an unavailable state and the disaster recovery cluster is in an available state, the request type of the request traffic is judged; if the request type is read request traffic, the request traffic is forwarded to the disaster recovery cluster so that the disaster recovery cluster processes it; if the request type is write request traffic, the request traffic is forwarded to the main cluster message queue, which is monitored by the disaster recovery cluster message queue, so that the main cluster consumes the main cluster message queue until it is empty and the disaster recovery cluster asynchronously consumes the disaster recovery cluster message queue until it is empty. It can be seen that read request traffic is forwarded to the disaster recovery cluster; because read traffic does not involve writing data, the data of the main cluster and the disaster recovery cluster remains consistent even though only the disaster recovery cluster performs the processing. Write request traffic is written in order into the main cluster MQ message queue and the disaster recovery cluster MQ message queue, so that it is consumed in order by the main cluster and the disaster recovery cluster, which guarantees the data consistency of the two clusters. At the same time, because the disaster recovery cluster consumes messages asynchronously, the performance requirement on the disaster recovery cluster is not high, a peak-shaving effect is achieved, and system stability is effectively guaranteed. In the failover working mode, with the main cluster unavailable, read request traffic is sent to the disaster recovery cluster so that the request traffic can be answered normally and the business logic works normally, while write request traffic is delivered to the main cluster and the disaster recovery cluster through the main cluster MQ message queue and the disaster recovery cluster MQ message queue, achieving data consistency between the main cluster and the disaster recovery cluster.
The embodiment of the present application discloses a specific data processing method in the third operating mode, as shown in fig. 8, the method includes:
step S41: and when the working mode is a third working mode which is recovered to an available state from an unavailable state of the main cluster, judging whether message accumulation exists in a main cluster message queue corresponding to the main cluster.
In the embodiment of the present application, because the main cluster has recovered from the unavailable state to the available state, the system enters the third working mode. To ensure data consistency, it must first be determined whether message accumulation exists in the main cluster MQ message queue; if so, the system needs to process the messages in the main cluster MQ message queue first.
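A sketch of this message accumulation check, assuming Kafka, is shown below: the lag of the main cluster's consumer group on the main cluster MQ topic is computed as the difference between the latest offsets and the group's committed offsets. The broker address, topic and group id are illustrative assumptions.

```java
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.ListOffsetsResult;
import org.apache.kafka.clients.admin.OffsetSpec;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

public class PrimaryQueueBacklogCheck {
    public static boolean hasBacklog() throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "mq-broker:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // offsets the main cluster's consumer group has already acknowledged
            Map<TopicPartition, OffsetAndMetadata> committed =
                    admin.listConsumerGroupOffsets("primary-cluster-consumers")
                         .partitionsToOffsetAndMetadata().get();

            // latest offsets of the same partitions
            Map<TopicPartition, OffsetSpec> latestSpec = new HashMap<>();
            committed.keySet().forEach(tp -> latestSpec.put(tp, OffsetSpec.latest()));
            Map<TopicPartition, ListOffsetsResult.ListOffsetsResultInfo> latest =
                    admin.listOffsets(latestSpec).all().get();

            long lag = 0;
            for (TopicPartition tp : committed.keySet()) {
                lag += latest.get(tp).offset() - committed.get(tp).offset();
            }
            return lag > 0;   // any unconsumed messages mean the second-mode writes are not yet applied
        }
    }
}
```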
Step S42: if the main cluster message queue has message accumulation, monitoring the main cluster message queue through the main cluster, and consuming the messages in the main cluster message queue in sequence until the main cluster message queue is empty, and stopping consumption.
In the embodiment of the present application, if there is accumulation in the main cluster MQ message queue, the messages generated by the system in the second working mode have not all been consumed, the data has not been fully processed, and the data of the main cluster and the disaster recovery cluster is not yet consistent. In this case the messages must continue to be processed: the request traffic is forwarded to the main cluster MQ message queue, and the main cluster monitors the main cluster MQ message queue and consumes its messages in message order until the queue is empty, after which the main cluster stops consuming and the system enters the normal working mode, that is, the first working mode.
Step S43: forwarding the request traffic to the main cluster and to the disaster recovery cluster message queue corresponding to the disaster recovery cluster, so as to process the request traffic based on the main cluster and the disaster recovery cluster message queue.
For a more specific processing procedure of the step S43, reference may be made to corresponding contents disclosed in the foregoing embodiments, and details are not repeated here.
Step S44: and if the main cluster message queue does not have message accumulation, directly executing the step of forwarding the request traffic to the main cluster and the disaster recovery cluster message queue corresponding to the disaster recovery cluster so as to process the request traffic based on the main cluster and the disaster recovery cluster message queue.
In the embodiment of the application, if there is no message accumulation, all the messages in the main cluster MQ message queue have been consumed, and the system enters the normal working mode, that is, the first working mode. The request traffic is forwarded to the main cluster, and the main cluster processes it and feeds back data, so the request traffic is processed. In this case the request traffic is also forwarded to the disaster recovery cluster MQ message queue. The disaster recovery cluster MQ message queue monitors the messages in the main cluster MQ message queue and consumes them asynchronously until the main cluster MQ message queue is empty. The disaster recovery cluster then consumes the messages in the disaster recovery cluster MQ message queue in message order until that queue is empty, after which it stops consuming.
Fig. 9 is a schematic view of the processing framework in the third working mode in the embodiment of the present application. The request traffic enters the cluster while the main cluster has recovered to the available state, so the system further judges whether message accumulation exists in the main cluster MQ message queue. If there is no accumulation, the system switches directly to the normal first working mode; if there is accumulation, the messages in the main cluster MQ message queue must be consumed first, and the system switches to the normal first working mode once they have all been consumed.
In this embodiment of the present application, when the working mode is the third working mode in which the main cluster has recovered from the unavailable state to the available state, it is determined whether message accumulation exists in the main cluster message queue corresponding to the main cluster. If there is message accumulation, the main cluster monitors the main cluster message queue and consumes its messages in sequence, stopping when the main cluster message queue is empty; the request traffic is then forwarded to the main cluster and to the disaster recovery cluster message queue corresponding to the disaster recovery cluster, so as to process the request traffic based on the main cluster and the disaster recovery cluster message queue. If there is no message accumulation, the step of forwarding the request traffic to the main cluster and to the disaster recovery cluster message queue corresponding to the disaster recovery cluster is executed directly, so as to process the request traffic based on the main cluster and the disaster recovery cluster message queue. Therefore, after the third, recovery working mode, the data consistency of the main cluster and the disaster recovery cluster can be ensured, so that the system switches back smoothly to the normal working mode. At the same time, because the disaster recovery cluster consumes messages asynchronously, the performance requirement on the disaster recovery cluster is not high, a peak-shaving effect is achieved, and system stability is effectively guaranteed. By using the main cluster MQ message queue and the disaster recovery cluster MQ message queue when the main cluster changes from unavailable back to available, the problem of data written while the main cluster was unavailable is solved, guaranteeing the data consistency of the main cluster and the disaster recovery cluster.
Correspondingly, an embodiment of the present application further discloses a data processing apparatus, as shown in fig. 10, the apparatus includes:
a request traffic obtaining module 11, configured to obtain request traffic related to service access when a cluster provides a service to the outside;
a working mode judging module 12, configured to judge a working mode of the current cluster;
a first working mode module 13, configured to forward the request traffic to the main cluster and to a disaster recovery cluster message queue corresponding to the disaster recovery cluster when the working mode is a first working mode in which both the main cluster and the disaster recovery cluster are in an available state, so as to process the request traffic based on the main cluster and the disaster recovery cluster message queue;
a second working mode module 14, configured to, when the working mode is a second working mode in which the main cluster is in an unavailable state and the disaster recovery cluster is in an available state, determine the request type of the request traffic, and then determine a forwarding path of the request traffic according to the request type, so as to process the request traffic based on the forwarding path;
and a third working mode module 15, configured to, when the working mode is a third working mode in which the main cluster has recovered from the unavailable state to the available state, determine whether message accumulation exists in a main cluster message queue corresponding to the main cluster, and process the request traffic according to the determination result.
For more specific working processes of the modules, reference may be made to corresponding contents disclosed in the foregoing embodiments, and details are not repeated here.
Therefore, according to the above scheme of this embodiment, request traffic for service access when a cluster provides services externally is first acquired, and the current working mode of the cluster is judged; when the working mode is a first working mode in which both the main cluster and the disaster recovery cluster are in an available state, the request traffic is forwarded to the main cluster and to the disaster recovery cluster message queue corresponding to the disaster recovery cluster, so as to process the request traffic based on the main cluster and the disaster recovery cluster message queue; when the working mode is a second working mode in which the main cluster is in an unavailable state and the disaster recovery cluster is in an available state, the request type of the request traffic is judged and the forwarding path of the request traffic is then determined according to the request type, so as to process the request traffic based on the forwarding path; and when the working mode is a third working mode in which the main cluster has recovered from the unavailable state to the available state, whether message accumulation exists in the main cluster message queue corresponding to the main cluster is judged, and the request traffic is processed according to the judgment result. The apparatus is thus mainly used for management of the main cluster and the disaster recovery cluster. In the normal working mode, that is, the first working mode in which both the main cluster and the disaster recovery cluster are available, the request traffic is forwarded to the main cluster and to the disaster recovery cluster message queue at the same time, and the disaster recovery cluster consumes the disaster recovery cluster message queue, which solves the data consistency problem between the main cluster and the disaster recovery cluster without increasing the performance pressure on the disaster recovery cluster. In the failover working mode, that is, the second working mode in which the main cluster is unavailable and the disaster recovery cluster is available, different forwarding paths of the request traffic are determined according to the request type, which solves the problem of how request traffic is routed to the disaster recovery cluster in this mode, so that the request traffic can be answered normally while the data is also stored for the main cluster in time, achieving data consistency between the main cluster and the disaster recovery cluster. In the recovery working mode, that is, the third working mode in which the main cluster has recovered from the unavailable state to the available state, the request traffic is processed after judging whether message accumulation exists in the main cluster message queue, so that after recovery the data consistency of the main cluster and the disaster recovery cluster can be ensured and the system can switch back smoothly to the normal working mode.
In addition, because the messages are processed through the main cluster message queue and the disaster recovery cluster message queue, the performance requirements on the corresponding clusters are not high; during traffic peaks the request traffic is stored in the corresponding message queue, which achieves a peak-shaving effect; and by monitoring the current cluster state, the system performs seamless automatic switching, guaranteeing the stability of external services.
Further, the embodiments of the present application disclose an electronic device. Fig. 11 is a block diagram of an electronic device 20 according to an exemplary embodiment, and its content should not be construed as limiting the scope of the present application.
Fig. 11 is a schematic structural diagram of the electronic device 20 provided in an embodiment of the present application. The electronic device 20 may specifically include: at least one processor 21, at least one memory 22, a power supply 23, a communication interface 24, an input/output interface 25, and a communication bus 26. The memory 22 is used for storing a computer program, and the computer program is loaded and executed by the processor 21 to implement the relevant steps of the data processing method disclosed in any of the foregoing embodiments. In addition, the electronic device 20 in this embodiment may be a computer.
In this embodiment, the power supply 23 is configured to provide a working voltage for each hardware device on the electronic device 20; the communication interface 24 can create a data transmission channel between the electronic device 20 and an external device, and the communication protocol it follows may be any communication protocol applicable to the technical solution of the present application, which is not specifically limited herein; the input/output interface 25 is configured to obtain external input data or to output data to the outside, and its specific interface type may be selected according to specific application requirements, which is not specifically limited herein.
In addition, the memory 22, as a carrier for storing resources, may be a read-only memory, a random access memory, a magnetic disk, an optical disk, or the like; the resources stored thereon may include an operating system 221, a computer program 222, and data 223, and the data 223 may include various kinds of data. The storage may be transient or permanent.
The operating system 221 is used to manage and control each hardware device on the electronic device 20 as well as the computer program 222, and may be Windows Server, Netware, Unix, Linux, or the like. In addition to the computer program that is executed by the electronic device 20 to perform the data processing method disclosed in any of the foregoing embodiments, the computer program 222 may further include computer programs used to perform other specific tasks.
Further, the embodiments of the present application disclose a computer-readable storage medium for storing a computer program, where the computer-readable storage medium includes a random access memory (RAM), a memory, a read-only memory (ROM), an electrically programmable ROM, an electrically erasable programmable ROM, a register, a hard disk, a magnetic disk, an optical disk, or any other form of storage medium known in the art. When executed by a processor, the computer program implements the aforementioned data processing method. For the specific steps of the method, reference may be made to the corresponding content disclosed in the foregoing embodiments, which is not repeated here.
The embodiments are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same or similar parts among the embodiments are referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
The steps of a data process or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in Random Access Memory (RAM), memory, Read Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The data processing method, apparatus, device and storage medium provided by the present invention are described in detail above, and the principle and implementation of the present invention are explained herein by applying specific examples, and the description of the above examples is only used to help understanding the method and core ideas of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (10)

1. A data processing method, comprising:
acquiring request traffic related to service access when a cluster provides services to the outside, and judging the current working mode of the cluster;
when the working mode is a first working mode in which both a main cluster and a disaster recovery cluster are in an available state, forwarding the request traffic to the main cluster and to a disaster recovery cluster message queue corresponding to the disaster recovery cluster, so as to process the request traffic based on the main cluster and the disaster recovery cluster message queue;
when the working mode is a second working mode that the main cluster is in an unavailable state and the disaster recovery cluster is in an available state, judging the request type of the request flow, and then determining the forwarding path of the request flow according to the request type so as to process the request flow based on the forwarding path;
and when the working mode is a third working mode which is recovered to the available state from the unavailable state of the main cluster, judging whether message accumulation exists in a main cluster message queue corresponding to the main cluster, and processing the request flow according to a judgment result.
2. The data processing method according to claim 1, wherein when the working mode is the first working mode in which both the main cluster and the disaster recovery cluster are in an available state, the forwarding the request traffic to the main cluster and to the disaster recovery cluster message queue corresponding to the disaster recovery cluster, so as to process the request traffic based on the main cluster and the disaster recovery cluster message queue, comprises:
when the working mode is a first working mode in which both the main cluster and the disaster recovery cluster are in an available state, forwarding the request traffic to the main cluster so that the main cluster can process the request traffic;
and forwarding the request traffic to a disaster recovery cluster message queue corresponding to the disaster recovery cluster, and monitoring the disaster recovery cluster message queue through the disaster recovery cluster so that the disaster recovery cluster asynchronously consumes the disaster recovery cluster message queue.
3. The data processing method of claim 1, further comprising:
and sequentially consuming the messages of the main cluster message queue and the disaster recovery cluster message queue through a sequential consumption strategy in RocketMQ or Kafka.
4. The data processing method according to claim 1, wherein the determining a request type of the request traffic and then determining a forwarding path of the request traffic according to the request type, so as to process the request traffic based on the forwarding path, comprises:
if the request type is read request traffic, forwarding the request traffic to the disaster recovery cluster so that the disaster recovery cluster can process the request traffic;
if the request type is write request traffic, forwarding the request traffic to the main cluster message queue, and monitoring the main cluster message queue through the disaster recovery cluster message queue, so that the main cluster consumes the main cluster message queue until it is empty, and the disaster recovery cluster asynchronously consumes the disaster recovery cluster message queue and stops consumption when the disaster recovery cluster message queue is empty.
5. The data processing method according to claim 1, wherein the judging whether message accumulation exists in a main cluster message queue corresponding to the main cluster, and processing the request traffic according to a judgment result, comprises:
if message accumulation exists in the main cluster message queue, monitoring the main cluster message queue through the main cluster, sequentially consuming the messages in the main cluster message queue until the main cluster message queue is empty, and then stopping consumption;
and then executing the step of forwarding the request traffic to the main cluster and the disaster recovery cluster message queue corresponding to the disaster recovery cluster, so as to process the request traffic based on the main cluster and the disaster recovery cluster message queue;
and if the main cluster message queue does not have message accumulation, directly executing the step of forwarding the request traffic to the main cluster and the disaster recovery cluster message queue corresponding to the disaster recovery cluster so as to process the request traffic based on the main cluster and the disaster recovery cluster message queue.
6. The data processing method according to any one of claims 1 to 5, further comprising:
correspondingly dividing the main cluster and the disaster recovery cluster into a plurality of consumer groups;
and monitoring different partitions in corresponding cluster message queues by using different consumer nodes in the consumer group, and performing batch consumption on messages in the partitions.
7. The data processing method of claim 6, wherein after the bulk consumption of the messages in the partition, further comprising:
and when the consumption of a batch of messages is finished, sending a consumption-completed acknowledgment (ACK) signal to the server of the corresponding cluster message queue.
8. A data processing apparatus, characterized by comprising:
a request traffic obtaining module, configured to obtain request traffic related to service access when a cluster provides services to the outside;
a working mode judging module, configured to judge the current working mode of the cluster;
a first working mode module, configured to, when the working mode is a first working mode in which both a main cluster and a disaster recovery cluster are in an available state, forward the request traffic to the main cluster and to a disaster recovery cluster message queue corresponding to the disaster recovery cluster, so as to process the request traffic based on the main cluster and the disaster recovery cluster message queue;
a second working mode module, configured to, when the working mode is a second working mode in which the main cluster is in an unavailable state and the disaster recovery cluster is in an available state, determine a request type of the request traffic, and then determine a forwarding path of the request traffic according to the request type, so as to process the request traffic based on the forwarding path;
and a third working mode module, configured to, when the working mode is a third working mode in which the main cluster recovers from the unavailable state to the available state, judge whether message accumulation exists in a main cluster message queue corresponding to the main cluster, and process the request traffic according to a judgment result.
9. An electronic device, comprising a processor and a memory; wherein the memory is for storing a computer program that is loaded and executed by the processor to implement the data processing method of any of claims 1 to 7.
10. A computer-readable storage medium for storing a computer program; wherein the computer program when executed by a processor implements the data processing method of any one of claims 1 to 7.
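Purely as an illustration of the consumption strategy referred to in claims 3, 6 and 7, the following sketch uses the standard Apache Kafka Java client; the bootstrap address, group id, topic name and the applyToCluster method are assumptions made for the example, and an equivalent ordered listener could be written with RocketMQ instead:

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class DrQueueConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // assumed broker address
        props.put("group.id", "dr-cluster-consumer-group");  // one consumer group per cluster
        props.put("enable.auto.commit", "false");             // commit manually after each batch
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("dr-cluster-message-queue")); // assumed topic name
            while (true) {
                // Each consumer node in the group is assigned its own partitions,
                // and messages within a partition are delivered in order.
                ConsumerRecords<String, String> batch = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : batch) {
                    applyToCluster(record.value()); // assumed logic that applies the message
                }
                // Committing the offsets only after the whole batch has been applied
                // plays the role of the consumption-completed ACK.
                if (!batch.isEmpty()) {
                    consumer.commitSync();
                }
            }
        }
    }

    private static void applyToCluster(String message) {
        // Placeholder for applying the message to the disaster recovery cluster.
        System.out.println("applied: " + message);
    }
}

Disabling auto-commit and calling commitSync() only after a whole batch has been applied provides the batch-then-acknowledge behaviour, and giving the main cluster and the disaster recovery cluster their own group.id lets each of them track offsets on the same queue independently, in the spirit of the consumer groups of claim 6.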
CN202210696715.XA 2022-06-20 2022-06-20 Data processing method, device, equipment and storage medium Pending CN115134217A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210696715.XA CN115134217A (en) 2022-06-20 2022-06-20 Data processing method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210696715.XA CN115134217A (en) 2022-06-20 2022-06-20 Data processing method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115134217A true CN115134217A (en) 2022-09-30

Family

ID=83377437

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210696715.XA Pending CN115134217A (en) 2022-06-20 2022-06-20 Data processing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115134217A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180314601A1 (en) * 2017-04-28 2018-11-01 Splunk Inc. Intelligent captain selection for disaster recovery of search head cluster
CN109451072A (en) * 2018-12-29 2019-03-08 广东电网有限责任公司 A kind of message caching system and method based on Kafka
CN111190766A (en) * 2019-12-12 2020-05-22 北京淇瑀信息科技有限公司 HBase database-based cross-machine-room cluster disaster recovery method, device and system
CN112286723A (en) * 2020-09-30 2021-01-29 北京大米科技有限公司 Computer room disaster recovery control method, terminal and storage medium
CN113254274A (en) * 2021-04-21 2021-08-13 北京大米科技有限公司 Message processing method, device, storage medium and server


Similar Documents

Publication Publication Date Title
CN110213371B (en) Message consumption method, device, equipment and computer storage medium
US10860441B2 (en) Method and system for data backup and restoration in cluster system
US8095935B2 (en) Adapting message delivery assignments with hashing and mapping techniques
US10491560B2 (en) Message delivery in messaging networks
CN101557315B (en) Method, device and system for active-standby switch
CN111338773B (en) Distributed timing task scheduling method, scheduling system and server cluster
US7870425B2 (en) De-centralized nodal failover handling
CN111917846A (en) Kafka cluster switching method, device and system, electronic equipment and readable storage medium
CN110795503A (en) Multi-cluster data synchronization method and related device of distributed storage system
US20080288812A1 (en) Cluster system and an error recovery method thereof
CN105407180A (en) Server message pushing method and device
CN103581225A (en) Distributed system node processing task method
EP2723017A1 (en) Method, apparatus and system for implementing distributed auto-incrementing counting
CN104158707A (en) Method and device of detecting and processing brain split in cluster
JP4690987B2 (en) Network data backup system and computer therefor
CN114900449B (en) Resource information management method, system and device
CN109167690A (en) A kind of restoration methods, device and the relevant device of the service of distributed system interior joint
CN110740145A (en) Message consumption method, device, storage medium and electronic equipment
CN114629825A (en) Path detection method, device and node of computing power sensing network
CN112052104A (en) Message queue management method based on multi-computer-room realization and electronic equipment
CN115134217A (en) Data processing method, device, equipment and storage medium
CN114978871B (en) Node switching method and node switching device of service system and electronic equipment
CN115964133A (en) Message management method, device, equipment and storage medium
CN109542841A (en) The method and terminal device of data snapshot are created in cluster
CN114338670A (en) Edge cloud platform and three-level cloud control platform for internet traffic with same

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination