CN112015601A - Method and device for processing data of multiple data centers - Google Patents

Publication number
CN112015601A
Authority
CN
China
Prior art keywords
center
data
data center
service
main
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010776981.4A
Other languages
Chinese (zh)
Other versions
CN112015601B (en)
Inventor
刘铁
高建斌
姜丰
杨燕明
王述振
Current Assignee
China Unionpay Co Ltd
Original Assignee
China Unionpay Co Ltd
Priority date
Filing date
Publication date
Application filed by China Unionpay Co Ltd filed Critical China Unionpay Co Ltd
Priority to CN202010776981.4A priority Critical patent/CN112015601B/en
Publication of CN112015601A publication Critical patent/CN112015601A/en
Application granted granted Critical
Publication of CN112015601B publication Critical patent/CN112015601B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/07 Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F 11/16 Error detection or correction of the data by redundancy in hardware
    • G06F 11/20 Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
    • G06F 11/202 Error detection or correction of the data by redundancy in hardware using active fault-masking, where processing functionality is redundant
    • G06F 11/2023 Failover techniques
    • G06F 11/30 Monitoring
    • G06F 11/3003 Monitoring arrangements specially adapted to the computing system or computing system component being monitored
    • G06F 11/3006 Monitoring arrangements where the computing system is distributed, e.g. networked systems, clusters, multiprocessor systems
    • G06F 11/3055 Monitoring arrangements for monitoring the status of the computing system or of the computing system component, e.g. monitoring if the computing system is on, off, available, not available

Abstract

The invention discloses a method and a device for multi-data-center data processing. In the method, a first data center acquires its own center setting information and health state at the current moment upon confirming that the automatic synchronization function is enabled. When the center setting information indicates a main center and the health state is normal, the first data center sends notification information to a second data center and receives the second data center's response. When the response is a main-center conflict response, the first data center determines, according to the share authorization and/or priority of the second data center, whether it is the main center; if so, it enables the main-center service to process data. Each data center reaches main-center consensus with the other data centers through its own center setting information and health state, ensuring that a user's service data is acquired by exactly one main center and synchronized to the other auxiliary centers, which solves the problem of logical consistency between the two synchronizing ends of each center in complex scenarios such as center switching.

Description

Method and device for processing data of multiple data centers
Technical Field
The invention relates to the field of data synchronization, in particular to a method and a device for processing data of multiple data centers.
Background
The multi-data-center architecture is a common architecture for large-scale applications today: multi-center systems are deployed in multiple locations to keep the system highly available in the face of failures such as natural disasters.
At present, when multiple data centers synchronize data, the synchronization function is mainly realized by embedding multi-center data synchronization logic in the application program. The sending end sends the data to be synchronized to the receiving end through a synchronization request, and after receiving the data the receiving end processes different synchronization requests according to different business requirements. The problem with this approach is that the sender and receiver use separate processing logic and lack a unified processing capability for the synchronization function.
Disclosure of Invention
The embodiment of the invention provides a method and a device for processing data of multiple data centers, which are used for ensuring logic consistency of each data center in complex scenes such as center switching.
In a first aspect, an embodiment of the present invention provides a method for processing data in multiple data centers, including:
a first data center acquires its own center setting information and health state at the current moment upon confirming that the automatic synchronization function is enabled;
when the first data center determines that the center setting information is a main center and the health state is a normal state, sending notification information to a second data center, wherein the notification information comprises that the first data center is the main center; the second data center is any one of multiple data centers except the first data center;
the first data center receives the response information of the second data center; when the response information is determined to be a main-center conflict response, the first data center determines whether it is the main center according to the share authorization and/or priority of the second data center, and if so, enables the main-center service to process data.
In this technical scheme, each data center reaches main-center consensus with the other data centers through its own center setting information and health state, ensuring that a user's service data is acquired by one main center and synchronized to the other auxiliary centers, thereby solving the problem of logical consistency between the two synchronizing ends of each center in complex scenarios such as center switching.
Optionally, the determining whether the first data center is a master center according to the share authorization and/or the priority of the second data center includes:
if the share authorization of the second data center is greater than that of the first data center, the first data center determines that it is a secondary center;
if the share authorization of the second data center is smaller than that of the first data center, the first data center determines that it is the main center;
if the share authorization of the second data center is the same as that of the first data center, the first data center determines whether the priority of the second data center is lower than that of the first data center; if so, the first data center is determined to be the main center, otherwise it is determined to be a secondary center.
In the technical scheme, the data center can be determined as the main center by comparing the share authorization and the priority of each data center.
Optionally, the share authorization of each data center is determined by the health status of the subsystems associated with each data center and the duration of the service provided by each data center.
Optionally, the method further includes:
when the first data center determines that the center setting information is the main center but the health state is the downtime state, or that the center setting information is a secondary center, the first data center sets its center setting information to secondary center and enables the secondary-center service for data processing.
Optionally, the method further includes:
and when the first data center determines that the response information is a confirmation response, starting a main center service to perform data processing.
Optionally, the method further includes:
and if the first data center is determined to be the secondary center according to the share authorization and/or the priority of the second data center, the first data center sets the center setting information of the first data center as the secondary center, and starts the secondary center service for data processing.
Optionally, after the first data center enables the secondary center service, the method further includes:
the first data center receives a synchronization task of a third data center, wherein the synchronization task comprises synchronization data of each service customized by a user; the third data center is a main center;
and the first data center synchronizes the synchronous data and feeds back a synchronous result to the third data center.
Optionally, after the first data center enables the master center service to perform data processing, the method further includes:
the first data center acquires the synchronous data of each service customized by a user;
the first data center sends the customized synchronous data of each service to the second data center; enabling the second data center to execute a synchronization task according to the customized synchronization data of each service;
and the first data center receives and confirms the synchronization result of the second data center.
Optionally, the method further includes:
when the first data center determines that the synchronization result of the second data center is a synchronization failure, the first data center resends the customized synchronization data of each service to the second data center until the number of synchronization failures fed back by the second data center exceeds a preset threshold or manual intervention occurs.
In a second aspect, an embodiment of the present invention provides an apparatus for multi-data-center data processing, including:
an acquisition unit, configured to acquire the apparatus's own center setting information and health state at the current moment when the automatic synchronization function is confirmed to be enabled;
the processing unit is used for sending notification information to a second data center when the center setting information is determined to be a main center and the health state is determined to be a normal state, wherein the notification information comprises that a first data center is the main center; the second data center is any one of multiple data centers except the first data center; and receiving response information of the second data center, determining whether the first data center is a main center or not according to share authorization and/or priority of the second data center when the response information is determined to be a main center conflict response, and if so, starting a main center service to perform data processing.
Optionally, the processing unit is specifically configured to:
if the share authorization of the second data center is greater than that of the first data center, determining that the first data center is a secondary center;
if the share authorization of the second data center is smaller than that of the first data center, determining that the first data center is the main center;
if the share authorization of the second data center is the same as that of the first data center, determining whether the priority of the second data center is lower than that of the first data center; if so, determining that the first data center is the main center, otherwise determining that it is a secondary center.
Optionally, the share authorization of each data center is determined by the health status of the subsystems associated with each data center and the duration of the service provided by each data center.
Optionally, the processing unit is further configured to:
when the center setting information is determined to be the main center but the health state is the downtime state, or the center setting information is determined to be a secondary center, setting the center setting information of the first data center to secondary center, and enabling the secondary-center service for data processing.
Optionally, the processing unit is further configured to:
and when the response information is determined to be the confirmation response, starting the main center service to perform data processing.
Optionally, the processing unit is further configured to:
and if the first data center is determined to be the secondary center according to the share authorization and/or the priority of the second data center, setting the center setting information of the first data center as the secondary center, and starting the secondary center service to perform data processing.
Optionally, the processing unit is further configured to:
after the auxiliary center service is started, receiving a synchronization task of a third data center, wherein the synchronization task comprises synchronization data of each service customized by a user; the third data center is a main center;
and synchronizing the synchronous data and feeding back a synchronous result to the third data center.
Optionally, the processing unit is further configured to:
after the main center service is started for data processing, acquiring the synchronous data of each service customized by a user;
sending the customized synchronous data of each service to the second data center; enabling the second data center to execute a synchronization task according to the customized synchronization data of each service;
and receiving and confirming the synchronization result of the second data center.
Optionally, the processing unit is further configured to:
when the synchronization result of the second data center is determined to be a synchronization failure, resending the customized synchronization data of each service to the second data center until the number of synchronization failures fed back by the second data center exceeds a preset threshold or manual intervention occurs.
In a third aspect, an embodiment of the present invention further provides a computing device, including:
a memory for storing program instructions;
and the processor is used for calling the program instructions stored in the memory and executing the method for processing the data of the multiple data centers according to the obtained program.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable non-volatile storage medium, which includes computer-readable instructions, and when the computer-readable instructions are read and executed by a computer, the computer is caused to execute the above-mentioned method for processing data in multiple data centers.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below are only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic diagram of a system architecture according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a data center according to an embodiment of the present invention;
fig. 3 is a schematic diagram of a data packet format according to an embodiment of the present invention;
FIG. 4 is a flowchart illustrating a method for multi-data center data processing according to an embodiment of the present invention;
FIG. 5 is a flowchart illustrating a method for multi-data center data processing according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of an apparatus for multi-data-center data processing according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention will be described in further detail with reference to the accompanying drawings, and it is apparent that the described embodiments are only a part of the embodiments of the present invention, not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 is a system architecture provided in an embodiment of the present invention. As shown in fig. 1, the system architecture may include n data centers 100. The n data centers 100 may communicate with each other through a network.
Each data center 100 may have the structure shown in fig. 2, comprising a multi-center link management module 101, a multi-center state management module 102, a multi-center synchronization spin module 103, a service data customization processing module 104, and a synchronization task monitoring and redo module 105.
The multi-center link management module 101: and maintaining a connection link among the centers, sending a synchronization request of the main center to the auxiliary center and feeding back a processing result of the auxiliary center to the main center.
The multi-center state management module 102: maintains the center state of each center. In a multi-center architecture, any given service has exactly one main center, and all other centers are secondary centers. This module is responsible for maintaining the uniqueness of the main center and ensuring consensus among all centers on the main center in scenarios such as downtime and drills.
The multi-center synchronization spin module 103 is the core module of each data center. Each data center is provided with the same spin module; it takes the main/secondary-center judgment of the multi-center state management module 102 as input and automatically selects and switches the data center's role in multi-center synchronization.
The service data customization processing module 104 is responsible for organizing and constructing the service data to be synchronized when the main center receives a synchronization trigger request, and for completing the data synchronization of different services in a differentiated way after a secondary center receives a synchronization request. This independent module solves the problem of extending the service data and processing of the synchronization function: a service completes its data synchronization simply by customizing its own synchronization content and processing mode in this module. The format of the data packet synchronized between data centers may be as shown in fig. 3.
Suppose there are four centers, in Shanghai, Beijing, Huangshan and Guizhou, all in working state, with Shanghai as the main center and Beijing, Huangshan and Guizhou as secondary centers. When operation and maintenance personnel execute business operation A in the Shanghai center, the data of function A is packaged by the service data customization module and sent to each secondary center. After receiving the synchronization request, each secondary center parses and synchronizes the data packet sent by the main center according to the requirements of module A.
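This package-and-dispatch flow can be sketched minimally in Python. The packet fields and handler registry below are hypothetical illustrations only; the actual packet format is the one shown in fig. 3.

```python
import json

def build_sync_packet(service_id, operation, payload):
    # Main-center side: the service data customization module assembles a
    # per-service packet (fields here are illustrative, not the fig. 3 format).
    return json.dumps({"service": service_id, "op": operation, "data": payload})

def apply_sync_packet(packet, handlers):
    # Secondary-center side: parse the packet and dispatch it to the
    # handler that the service registered in the customization module.
    msg = json.loads(packet)
    return handlers[msg["service"]](msg["op"], msg["data"])

# Business operation A registers its own synchronization handler.
handlers = {"A": lambda op, data: f"synced {op} for service A: {data}"}
```

Each service only needs to register its own handler; the dispatch logic itself stays service-agnostic, which is the extension point the customization module provides.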
The synchronization task monitoring and redo module 105 is responsible for monitoring the synchronization tasks of the main center and redoing failed synchronization tasks in time to ensure multi-center availability. It is also responsible for judging and handling the invalid tasks generated during center switching.
It should be noted that the structures shown in fig. 1, fig. 2, and fig. 3 are merely examples, and the present invention is not limited thereto.
Based on the above description, fig. 4 shows in detail a flow of a method for multiple data center data processing according to an embodiment of the present invention, where the flow may be performed by an apparatus for multiple data center data processing.
As shown in fig. 4, the process specifically includes:
step 401, the first data center acquires the center setting information and the health status of the first data center at the current moment when the automatic synchronization function is confirmed to be enabled.
Each data center is provided with center setting information and a health state. The first data center is the data center that currently enables the automatic synchronization function; when it does so, it can scan its local settings to acquire its center setting information and health state at the current moment. The center setting information may indicate either main center or secondary center, and the health state may be either normal or downtime.
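As a minimal sketch, the local state scanned in step 401 can be modeled as follows (the Python names and enum values are assumptions, not terms from the patent):

```python
from enum import Enum

class CenterRole(Enum):
    MAIN = "main"            # main center
    SECONDARY = "secondary"  # auxiliary/secondary center

class Health(Enum):
    NORMAL = "normal"
    DOWN = "down"            # downtime state

class DataCenter:
    def __init__(self, name, role, health):
        self.name = name
        self.role = role      # center setting information
        self.health = health  # health state

    def scan_local_settings(self):
        # Step 401: scan local settings to obtain the center setting
        # information and the health state at the current moment.
        return self.role, self.health
```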
Step 402, when the first data center determines that the center setting information is the main center and the health state is the normal state, the first data center sends notification information to a second data center.
When the center setting information is determined to be the main center and the health state is determined to be normal, notification information can be sent to the second data center. The notification information includes the information that the first data center is the main center, and is used to inform the second data center that the first data center is the main center at the current moment. The second data center is any one of the multiple data centers other than the first data center.
Step 403, the first data center receives the response information of the second data center, and when it is determined that the response information is a main center conflict response, determines whether the first data center is a main center according to the share authorization and/or the priority of the second data center, and if so, enables a main center service to perform data processing.
After the second data center receives the notification information from the first data center, it performs consensus confirmation: if the second data center is a secondary center, it sends a confirmation response; if the second data center is itself a main center, it sends a main-center conflict response. The first data center can therefore determine from the response information whether a main-center conflict currently exists.
When the first data center determines that the response information is a confirmation response, the main center service can be directly started to perform data processing.
When the first data center determines that the response information is a main-center conflict response, it needs to determine whether it is the main center according to the share authorization and/or priority of the second data center. Specifically, the determination may be performed in the following ways:
Mode 1
If the share authorization of the second data center is greater than that of the first data center, the first data center determines that it is a secondary center.
Mode 2
If the share authorization of the second data center is smaller than that of the first data center, the first data center determines that it is the main center.
Mode 3
If the share authorization of the second data center is the same as that of the first data center, the first data center determines whether the priority of the second data center is lower than that of the first data center; if so, the first data center is determined to be the main center, otherwise it is determined to be a secondary center.
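The three modes can be sketched as a single arbitration function. This is an illustrative reading of the text; in particular, it assumes that a numerically larger priority value means higher priority:

```python
def arbitrate(first_share, first_priority, second_share, second_priority):
    # Returns the role of the first data center after a main-center
    # conflict response, following modes 1-3.
    if second_share > first_share:   # mode 1: other center outranks us
        return "secondary"
    if second_share < first_share:   # mode 2: we outrank the other center
        return "main"
    # Mode 3: equal share authorization -> fall back to priority.
    return "main" if second_priority < first_priority else "secondary"
```

Because both centers run the same deterministic comparison on the same inputs, they reach the same conclusion about which one is the main center, which is how the conflict is resolved without a third-party arbiter.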
It should be noted that, in the embodiment of the present invention, the share authorization of each data center is determined by the health status of the subsystems associated with each data center and the duration of the service provided by each data center.
For example, when a data center is forcibly set as a master center through a human interactive interface, the share authorization of the data center is forcibly set to 100%.
When a subsystem associated with a data center fails, the data center's share authorization is reduced by 50%; conversely, when the associated subsystem recovers, the share authorization is increased by 50%.
The share authorization may also be determined by multiplying the ratio of a center's service duration to the total service duration of all centers by a threshold. The threshold may be set empirically, for example 50%. For example, if the service durations of 4 centers are 10, 20, 30 and 40, the shares obtained by the 4 centers are 5%, 10%, 15% and 20% respectively.
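Under the stated rules, the share-authorization computation can be sketched as below. The function names and the clamping of the result to [0, 1] are assumptions:

```python
def duration_shares(durations, threshold=0.5):
    # Share from service duration: (own duration / total duration) * threshold.
    # With threshold = 50% this reproduces the example: durations
    # 10, 20, 30, 40 give shares 5%, 10%, 15%, 20%.
    total = sum(durations)
    return [d / total * threshold for d in durations]

def adjust_share(share, subsystem_healthy, forced_main=False):
    # Forcing a center to be the main center sets its share to 100%;
    # a subsystem failure lowers the share by 50%, recovery raises it by 50%.
    if forced_main:
        return 1.0
    share += 0.5 if subsystem_healthy else -0.5
    return min(max(share, 0.0), 1.0)  # clamping to [0, 1] is an assumption
```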
In addition, when the first data center determines that the center setting information is the main center and the health state is the downtime state or the center setting information is the sub-center, the center setting information of the first data center may be set as the sub-center, and the sub-center service is started to perform data processing.
When the first data center is determined to be the secondary center according to the share authorization and/or the priority of the second data center, the center setting information of the first data center can also be set as the secondary center, and the secondary center service is started to perform data processing.
After initiating the secondary center service, the first data center may receive the synchronization tasks of a third data center, which is the primary center. And then synchronizing the synchronous data and feeding back a synchronous result to a third data center.
After the main center service is started, the first data center can acquire the synchronous data of each service customized by a user, and then sends the customized synchronous data of each service to the second data center; so that the second data center executes the synchronization task according to the customized synchronization data of each service. And finally, receiving and confirming the synchronization result of the second data center.
It should be noted that, in the embodiment of the present invention, there is also a synchronization task monitoring and redo mechanism: when the synchronization result of the second data center is determined to be a synchronization failure, the first data center resends the customized synchronization data of each service to the second data center until the number of synchronization failures fed back by the second data center exceeds a preset threshold or manual intervention occurs. The preset threshold may be set empirically. Manual intervention means that a worker decides whether the synchronization task is finished, i.e., an instruction to stop the synchronization task is received.
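A minimal sketch of this redo loop follows; the callback signature and the use of an exception to model the manual stop instruction are assumptions:

```python
def sync_with_redo(send_fn, payload, max_failures=3):
    # Resend the customized sync data until the secondary center reports
    # success, the failure count exceeds the preset threshold, or a manual
    # stop instruction arrives (modeled here as KeyboardInterrupt).
    failures = 0
    while failures <= max_failures:
        try:
            if send_fn(payload):  # secondary center feeds back the result
                return True, failures
            failures += 1
        except KeyboardInterrupt:
            break  # manual intervention: stop the synchronization task
    return False, failures
```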
In order to better explain the flow of the multi-data center data processing provided by the embodiment of the present invention, the workflow of the multi-data center data processing will be described in a specific implementation scenario.
In connection with the structure of the data center shown in fig. 2, the workflow shown in fig. 5 is as follows. Owing to the design of the spin structure, the workflow splits into two different flows, with the middle black dashed line as the switching axis.
The multi-center state management module 102 confirms the states of its own data center and the other data centers and reaches consensus on the states of all data centers. According to its state, the data center confirms its role in multi-data-center synchronization, i.e., main center or secondary center. When the current data center is judged to be the main center, steps 3, 4 and 8 are executed: after a trigger is received, the synchronization data of each service is customized and sent to the secondary centers, and finally the synchronization results of the multiple data centers are summarized and confirmed. When the current data center is judged to be a secondary center, steps 5, 6 and 7 are executed: a synchronization request is received, processing is started, and the processing result is fed back. In addition, the synchronization task monitoring and redo module 105 is independent of both flows; when a failed synchronization task is found, it re-initiates the task and continues monitoring its execution until the retry limit is exceeded or manual intervention occurs.
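The two flows on either side of the dashed line can be sketched as one branching routine; all callback parameters here are hypothetical stand-ins for the modules described above:

```python
def run_sync_cycle(role, customize_fn=None, send_fn=None,
                   receive_fn=None, process_fn=None):
    # Spin-structure branch from fig. 5: the same module runs either the
    # main-center flow (steps 3, 4, 8) or the secondary-center flow
    # (steps 5-7), depending on the role decided by the state manager.
    if role == "main":
        data = customize_fn()       # step 3: customize the sync data
        results = send_fn(data)     # step 4: send to the secondary centers
        return all(results)         # step 8: summarize and confirm results
    request = receive_fn()          # step 5: receive the sync request
    return process_fn(request)      # steps 6-7: process and feed back
```

The point of the single entry function is that every center ships identical code; only the role input, decided by the state management module, selects which branch actually runs.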
In the embodiment of the invention, a first data center acquires its center setting information and health state at the current moment upon confirming that the automatic synchronization function is enabled; when the center setting information is the main center and the health state is normal, it sends notification information to a second data center and receives the second data center's response; when the response is a main-center conflict response, it determines whether it is the main center according to the share authorization and/or priority of the second data center, and if so, enables the main-center service to process data. Each data center reaches main-center consensus with the other data centers through its own center setting information and health state, ensuring that a user's service data is acquired by one main center and synchronized to the other auxiliary centers, which solves the problem of logical consistency between the two synchronizing ends of each center in complex scenarios such as center switching.
Based on the same technical concept, fig. 6 exemplarily illustrates the structure of an apparatus for multi-data-center data processing, which can execute the flow of multi-data-center data processing according to an embodiment of the present invention.
As shown in fig. 6, the apparatus specifically includes:
an obtaining unit 601, configured to obtain the center setting information and the health status of the current time when the automatic synchronization function is confirmed to be enabled;
the processing unit 602 is configured to send notification information to a second data center when it is determined that the center setting information is a main center and the health status is a normal status, where the notification information includes that the first data center is the main center; the second data center is any one of multiple data centers except the first data center; and receiving response information of the second data center, determining whether the first data center is a main center or not according to share authorization and/or priority of the second data center when the response information is determined to be a main center conflict response, and if so, starting a main center service to perform data processing.
Optionally, the processing unit 602 is specifically configured to:
if the share authorization of the second data center is larger than that of the first data center, determining that the first data center is a secondary center;
if the share authorization of the second data center is smaller than the share authorization of the first data center, determining the first data center as a main center;
if the share authorization of the second data center is the same as the share authorization of the first data center, determining whether the priority of the second data center is smaller than the priority of the first data center; if so, determining that the first data center is the main center, otherwise determining that the first data center is a secondary center.
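The three comparison rules above can be sketched as a pure function. The function name and the convention that the first center wins ties only when the peer's priority is strictly smaller are taken from the text; everything else is illustrative:

```python
def arbitrate(first_share, second_share, first_priority, second_priority):
    """Role of the first data center after a main center conflict response."""
    if second_share > first_share:
        return "secondary"   # the peer holds greater share authorization
    if second_share < first_share:
        return "main"
    # Equal share authorization: fall back to priority; the first center
    # is the main center only if the peer's priority is strictly smaller.
    return "main" if second_priority < first_priority else "secondary"
```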
Optionally, the share authorization of each data center is determined by the health status of the subsystems associated with each data center and the duration of the service provided by each data center.
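The patent states only the two inputs to share authorization (subsystem health and duration of service) without giving a formula; a linear weighting such as the one below is purely an illustrative assumption:

```python
def share_authorization(healthy_subsystems: int, service_hours: float) -> float:
    """Hypothetical share-authorization score for one data center."""
    # Weights are assumptions, not values from the disclosure: subsystem
    # health is weighted more heavily than accumulated service time.
    HEALTH_WEIGHT = 10.0
    UPTIME_WEIGHT = 1.0
    return HEALTH_WEIGHT * healthy_subsystems + UPTIME_WEIGHT * service_hours
```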
Optionally, the processing unit 602 is further configured to:
and when the center setting information is determined to be the main center and the health state is the downtime state or the center setting information is determined to be the auxiliary center, setting the center setting information of the first data center as the auxiliary center, and starting the auxiliary center service to perform data processing.
Optionally, the processing unit 602 is further configured to:
and when the response information is determined to be the confirmation response, starting the main center service to perform data processing.
Optionally, the processing unit 602 is further configured to:
and if the first data center is determined to be the secondary center according to the share authorization and/or the priority of the second data center, setting the center setting information of the first data center as the secondary center, and starting the secondary center service to perform data processing.
Optionally, the processing unit 602 is further configured to:
after the auxiliary center service is started, receiving a synchronization task of a third data center, wherein the synchronization task comprises synchronization data of each service customized by a user; the third data center is a main center;
and synchronizing the synchronous data and feeding back a synchronous result to the third data center.
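The secondary-center side of this flow (receive the synchronization task, apply each service's data, feed the result back) might be sketched as follows; `handle_sync_task`, the task field names, and the injected `apply_fn` are all hypothetical:

```python
def handle_sync_task(task, apply_fn):
    """Apply each service's synchronization data and report per-service results.

    task: {"task_id": ..., "services": {service_name: sync_data, ...}}
    apply_fn: callable performing the local write; raises on failure.
    """
    results = {}
    for service, data in task["services"].items():
        try:
            apply_fn(service, data)
            results[service] = "success"
        except Exception:
            results[service] = "failure"
    # This dictionary is what the secondary center feeds back to the main center.
    return {"task_id": task["task_id"], "results": results}
```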
Optionally, the processing unit 602 is further configured to:
after the main center service is started for data processing, acquiring the synchronous data of each service customized by a user;
sending the customized synchronous data of each service to the second data center; enabling the second data center to execute a synchronization task according to the customized synchronization data of each service;
and receiving and confirming the synchronization result of the second data center.
Optionally, the processing unit 602 is further configured to:
and when the synchronization result of the second data center is determined to be a synchronization failure, resending the customized synchronization data of each service to the second data center until the number of synchronization failures fed back by the second data center exceeds a preset threshold or manual intervention occurs.
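The main-center retry loop can be sketched as below; the function name, the injected `send_fn`, and the default threshold are illustrative assumptions:

```python
def synchronize_with_retry(send_fn, sync_data, max_failures=3):
    """Resend the customized synchronization data until the secondary center
    reports success or the failure count exceeds the preset threshold,
    after which manual intervention is expected."""
    failures = 0
    while True:
        if send_fn(sync_data) == "success":
            return "synchronized"
        failures += 1
        if failures >= max_failures:
            return "manual intervention required"
```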
Based on the same technical concept, an embodiment of the present invention further provides a computing device, including:
a memory for storing program instructions;
a processor for calling the program instructions stored in the memory and executing, according to the obtained program, the above method for processing data of multiple data centers.
Based on the same technical concept, embodiments of the present invention also provide a computer-readable non-volatile storage medium, which includes computer-readable instructions, and when the computer-readable instructions are read and executed by a computer, the computer is enabled to execute the method for processing data in multiple data centers.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (20)

1. A method for data processing in multiple data centers, comprising:
the method comprises the steps that a first data center acquires self center setting information and a health state at the current moment when the starting of an automatic synchronization function is confirmed;
when the first data center determines that the center setting information is a main center and the health state is a normal state, sending notification information to a second data center, wherein the notification information comprises that the first data center is the main center; the second data center is any one of multiple data centers except the first data center;
and the first data center receives the response information of the second data center; when the response information is determined to be a main center conflict response, the first data center determines whether it is the main center according to the share authorization and/or the priority of the second data center, and if so, starts the main center service to process data.
2. The method of claim 1, wherein the determining whether the first data center is a primary center based on the grant of shares and/or priority of the second data center comprises:
if the share authorization of the second data center is larger than that of the first data center, the first data center determines that the first data center is a secondary center;
if the share authorization of the second data center is smaller than the share authorization of the first data center, the first data center determines that the first data center is a main center;
if the share authorization of the second data center is the same as the share authorization of the first data center, the first data center determines whether the priority of the second data center is smaller than that of the first data center; if so, the first data center is determined to be the main center, and otherwise the first data center is determined to be a secondary center.
3. The method of claim 1, wherein the grant of shares to each data center is determined by a health status of a subsystem associated with each data center and a duration of time for which each data center is to provide service.
4. The method of claim 1, wherein the method further comprises:
and when the first data center determines that the center setting information is the main center and the health state is the downtime state or the center setting information is the auxiliary center, setting the center setting information of the first data center as the auxiliary center, and starting the auxiliary center service to perform data processing.
5. The method of claim 1, wherein the method further comprises:
and when the first data center determines that the response information is a confirmation response, starting a main center service to perform data processing.
6. The method of claim 1, wherein the method further comprises:
and if the first data center is determined to be the secondary center according to the share authorization and/or the priority of the second data center, the first data center sets the center setting information of the first data center as the secondary center, and starts the secondary center service for data processing.
7. The method of claim 6, after the first data center enables secondary center service, further comprising:
the first data center receives a synchronization task of a third data center, wherein the synchronization task comprises synchronization data of each service customized by a user; the third data center is a main center;
and the first data center synchronizes the synchronous data and feeds back a synchronous result to the third data center.
8. The method of any of claims 1 to 7, after the first data center enables a master center service for data processing, further comprising:
the first data center acquires the synchronous data of each service customized by a user;
the first data center sends the customized synchronous data of each service to the second data center; enabling the second data center to execute a synchronization task according to the customized synchronization data of each service;
and the first data center receives and confirms the synchronization result of the second data center.
9. The method of claim 8, wherein the method further comprises:
and when the first data center determines that the synchronization result of the second data center is a synchronization failure, the first data center resends the customized synchronization data of each service to the second data center until the number of synchronization failures fed back by the second data center exceeds a preset threshold or manual intervention occurs.
10. An apparatus for multi-datacenter data processing, comprising:
the acquisition unit is used for acquiring the self central setting information and the health state at the current moment when the starting of the automatic synchronization function is confirmed;
the processing unit is used for sending notification information to a second data center when the center setting information is determined to be a main center and the health state is determined to be a normal state, wherein the notification information comprises that a first data center is the main center; the second data center is any one of multiple data centers except the first data center; and receiving response information of the second data center, determining whether the first data center is a main center or not according to share authorization and/or priority of the second data center when the response information is determined to be a main center conflict response, and if so, starting a main center service to perform data processing.
11. The apparatus as claimed in claim 10, wherein said processing unit is specifically configured to:
if the share authorization of the second data center is larger than that of the first data center, determining that the first data center is a secondary center;
if the share authorization of the second data center is smaller than the share authorization of the first data center, determining the first data center as a main center;
if the share authorization of the second data center is the same as the share authorization of the first data center, determining whether the priority of the second data center is smaller than the priority of the first data center; if so, determining that the first data center is the main center, otherwise determining that the first data center is a secondary center.
12. The apparatus of claim 10, wherein the grant of shares to each data center is determined by a health status of a subsystem associated with each data center and a duration of time for which each data center is to provide service.
13. The apparatus as recited in claim 10, said processing unit to further:
and when the center setting information is determined to be the main center and the health state is the downtime state or the center setting information is determined to be the auxiliary center, setting the center setting information of the first data center as the auxiliary center, and starting the auxiliary center service to perform data processing.
14. The apparatus as recited in claim 10, said processing unit to further:
and when the response information is determined to be the confirmation response, starting the main center service to perform data processing.
15. The apparatus as recited in claim 10, said processing unit to further:
and if the first data center is determined to be the secondary center according to the share authorization and/or the priority of the second data center, setting the center setting information of the first data center as the secondary center, and starting the secondary center service to perform data processing.
16. The apparatus as recited in claim 15, said processing unit to further:
after the auxiliary center service is started, receiving a synchronization task of a third data center, wherein the synchronization task comprises synchronization data of each service customized by a user; the third data center is a main center;
and synchronizing the synchronous data and feeding back a synchronous result to the third data center.
17. The apparatus of any of claims 10 to 16, wherein the processing unit is further to:
after the main center service is started for data processing, acquiring the synchronous data of each service customized by a user;
sending the customized synchronous data of each service to the second data center; enabling the second data center to execute a synchronization task according to the customized synchronization data of each service;
and receiving and confirming the synchronization result of the second data center.
18. The apparatus as recited in claim 17, said processing unit to further:
and when the synchronization result of the second data center is determined to be a synchronization failure, resending the customized synchronization data of each service to the second data center until the number of synchronization failures fed back by the second data center exceeds a preset threshold or manual intervention occurs.
19. A computing device, comprising:
a memory for storing program instructions;
a processor for calling program instructions stored in said memory to perform the method of any of claims 1 to 9 in accordance with the obtained program.
20. A computer-readable non-transitory storage medium including computer-readable instructions which, when read and executed by a computer, cause the computer to perform the method of any one of claims 1 to 9.
CN202010776981.4A 2020-08-05 2020-08-05 Method and device for processing data of multiple data centers Active CN112015601B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010776981.4A CN112015601B (en) 2020-08-05 2020-08-05 Method and device for processing data of multiple data centers

Publications (2)

Publication Number Publication Date
CN112015601A true CN112015601A (en) 2020-12-01
CN112015601B CN112015601B (en) 2023-08-08

Family

ID=73499214

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010776981.4A Active CN112015601B (en) 2020-08-05 2020-08-05 Method and device for processing data of multiple data centers

Country Status (1)

Country Link
CN (1) CN112015601B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100235431A1 (en) * 2009-03-16 2010-09-16 Microsoft Corporation Datacenter synchronization
US20140067983A1 (en) * 2012-08-29 2014-03-06 International Business Machines Corporation Bi-directional synchronization enabling active-active redundancy for load-balancing switches
CN104794028A (en) * 2014-01-16 2015-07-22 中国移动通信集团浙江有限公司 Disaster tolerance processing method and device, main data center and backup data center
US20160077935A1 (en) * 2014-09-12 2016-03-17 Vmware, Inc. Virtual machine network loss detection and recovery for high availability
US20160092324A1 (en) * 2014-09-26 2016-03-31 Linkedin Corporation Fast single-master failover
CN109560903A (en) * 2019-02-14 2019-04-02 湖南智领通信科技有限公司 A kind of vehicle-mounted command communications system of complete disaster tolerance
CN109995554A (en) * 2017-12-29 2019-07-09 中国移动通信集团吉林有限公司 The control method and cloud dispatch control device of multi-stage data center active-standby switch
CN110557413A (en) * 2018-05-30 2019-12-10 中国人民财产保险股份有限公司 Business service system and method for providing business service
CN111147567A (en) * 2019-12-23 2020-05-12 中国银联股份有限公司 Service calling method, device, equipment and medium
CN111459724A (en) * 2020-03-06 2020-07-28 中国人民财产保险股份有限公司 Node switching method, device, equipment and computer readable storage medium

Also Published As

Publication number Publication date
CN112015601B (en) 2023-08-08

Similar Documents

Publication Publication Date Title
CN111782360A (en) Distributed task scheduling method and device
CN102567438A (en) Method for providing access to data items from a distributed storage system
CN106789197A (en) A kind of cluster election method and system
CN111209110B (en) Task scheduling management method, system and storage medium for realizing load balancing
CN105701159A (en) Data synchronization device and method
CN112910937B (en) Object scheduling method and device in container cluster, server and container cluster
CN112272291A (en) Video storage method, device, management equipment and readable storage medium
CN107948063B (en) Method for establishing aggregation link and access equipment
CN106230622A (en) A kind of cluster implementation method and device
CN106412088B (en) A kind of method of data synchronization and terminal
JP3197279B2 (en) Business takeover system
CN114554593A (en) Data processing method and device
CN110958139B (en) Network control method, orchestrator, controller, and computer-readable storage medium
CN112015601A (en) Method and device for processing data of multiple data centers
CN111881018A (en) Automatic test dynamic scheduling system
CN115616678B (en) Method and device for synchronizing correction of operation parameters of security inspection system
CN107968718A (en) A kind of method, apparatus and equipment for confirming standby usage state
CN117480067A (en) Electric vehicle charge management and client device
CN114726711A (en) Method and system for cooperative processing service between devices
CN115640096A (en) Application management method and device based on kubernets and storage medium
CN110569115B (en) Multi-point deployment process management method and process competing method
CN113268365A (en) Method, device, equipment and storage medium for realizing delay message in distributed system
CN111190707B (en) Data processing method and device
CN109634787B (en) Distributed file system monitor switching method, device, equipment and storage medium
JPH06119182A (en) Information communication network system with down-load control function

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant