CA3162740A1 - Traffic switching methods and devices based on multiple active data centers - Google Patents

Traffic switching methods and devices based on multiple active data centers

Info

Publication number
CA3162740A1
Authority
CA
Canada
Prior art keywords
task
traffic
configuration information
application server
data center
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CA3162740A
Other languages
French (fr)
Inventor
Yao Ge
Tao Yang
Wei Ge
Xin Wang
Renshan LIN
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
10353744 Canada Ltd
Original Assignee
10353744 Canada Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 10353744 Canada Ltd filed Critical 10353744 Canada Ltd
Publication of CA3162740A1 publication Critical patent/CA3162740A1/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/16Error detection or correction of the data by redundancy in hardware
    • G06F11/20Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
    • G06F11/202Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where processing functionality is redundant
    • G06F11/2035Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where processing functionality is redundant without idle spare hardware
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/16Error detection or correction of the data by redundancy in hardware
    • G06F11/20Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
    • G06F11/202Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where processing functionality is redundant
    • G06F11/2023Failover techniques
    • G06F11/203Failover techniques using migration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24Querying
    • G06F16/245Query processing
    • G06F16/2455Query execution
    • G06F16/24552Database cache management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Quality & Reliability (AREA)
  • Databases & Information Systems (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Information Transfer Between Computers (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Telephonic Communication Services (AREA)

Abstract

A traffic switching method and device based on multiple active data centers. The method comprises: an application server performs an operation of obtaining traffic configuration information after receiving a task scheduling instruction (S41), the traffic configuration information being generated by a multi-active switching platform according to a preset rule when the multi-active switching platform determines, according to data transmission status information of each data center, that a data transmission fault has occurred in a data center, the multiple active data centers comprising at least two data centers, and the traffic configuration information being used to indicate the traffic distribution corresponding to each data center; the application server parses the traffic configuration information to obtain the traffic distribution corresponding to its data center (S42); the application server determines, according to the traffic distribution and type information of a current task to be processed, whether the application server has permission to process the current task (S43); and if so, the application server loads the task for processing (S44). The method realizes automatic traffic switching when a fault occurs in the multiple active data centers.

Description

TRAFFIC SWITCHING METHODS AND DEVICES BASED ON MULTIPLE ACTIVE
DATA CENTERS
BACKGROUND OF THE INVENTION
Technical Field
[0001] The present application relates to the field of data processing, and more particularly to traffic switching methods and devices based on multiple active data centers.
Description of Related Art
[0002] A disaster recovery system is a system provided for a computer information system to deal with various data disasters. It ensures the safety of user data when the computer system suffers irresistible natural disasters such as fire, flood, earthquake, and war, or man-made disasters such as computer crime, computer viruses, power outages, network/communication failures, hardware/software errors, and human operational errors, any of which can cause problems such as data transmission interruption and loss of data.
[0003] At present, disaster recovery mostly employs the active standby mode, that is, a disaster recovery backup center is established far from the location where the computer system is run. The disaster recovery backup center does not bear any online business traffic; it only periodically backs up data of the computer system and stores the backed-up data in the disaster recovery backup center. When a disaster occurs and paralyzes the system, running of the system is then restored at the disaster recovery backup center through the backed-up data.
[0004] Because the disaster recovery backup center does not bear the actual online business traffic, there is no assurance that it will be usable should a disaster occur;
moreover, since the backup system must be started manually, higher requirements are placed on the system maintenance personnel, and a manual start is far from quick in responding to the disaster. Such a delay further makes it impossible to record data generated during the downtime.
[0005] To address the deficiencies of the active standby mode, the multi-active strategy has come to the fore as a novel technique for disaster recovery. So-called multi-active means setting up identical databases at plural sites (computer rooms located relatively far from one another) that bear the business traffic at the same time; how the traffic is shared among the sites can be decided according to business attributes such as user IDs and regions. For instance, data processing requests of users with ID1 to ID49 are assigned to the first site for processing, and data processing requests of users with ID50 to ID99 are assigned to the second site for processing. When a failure occurs at the first site, these business processing requests can be switched to the second site relatively quickly (at the minute level) and smoothly; under ideal circumstances, damage to the business is extremely small. Relative to the active standby mode, each site in the multi-active strategy possesses at all times the capability to bear the business traffic, and its stability is therefore reliable.
[0006] Of course, the above traffic switching does not necessarily occur only when a data center fails; traffic switching is sometimes also carried out under other circumstances, for example, when the quantity of tasks at a certain data center greatly increases during a special time period and a portion of the tasks then needs to be assigned elsewhere.
[0007] At present, when it is required to switch traffic under the multi-active strategy, for example when a failure occurs at one of the sites, the failure message is first notified to the maintenance personnel, who then configure the traffic switching information, start the traffic switching process, and switch the traffic between sites.

[0008] However, manual configuration consumes time. Although it is quicker than the active standby mode, in many scenarios this delay is long enough for a system such as an e-commerce platform to produce a great volume of data, and these data cannot be stored and restored.
SUMMARY OF THE INVENTION
[0009] The present application provides traffic switching methods and devices based on multiple active data centers, so as to solve prior-art problems in which delay still exists in traffic switching of multiple active data centers, and data is lost during the time of delay.
[0010] The present application provides the following solutions.
[0011] In the first aspect, there is provided a traffic switching method based on multiple active data centers, and the method comprises:
[0012] performing an operation to obtain traffic configuration information after an application server has received a task scheduling instruction, wherein the traffic configuration information is generated according to a preset rule when a multi-active switching platform judges according to data transmission status information of various data centers that traffic switching is required by a data center, the multiple active data centers include at least two data centers, and the traffic configuration information is employed to indicate traffic distribution to each data center;
[0013] parsing the traffic configuration information by the application server, and obtaining traffic distribution to which the data center in which the application server resides corresponds;
[0014] judging by the application server according to the traffic distribution and type information of a task to be currently processed whether the application server has a permission to process the task to be currently processed; and
[0015] if yes, loading the task by the application server to process the task.
[0016] Preferably, the application server obtains the traffic configuration information through the following steps:
[0017] reading cache by the application server and judging whether the traffic configuration information is present in the cache;
[0018] if not, reading the traffic configuration information from the multi-active switching platform by the application server.
[0019] Preferably, the method further comprises:
[0020] reading, when the application server monitors that change occurs to the traffic configuration information of the multi-active switching platform, the changed traffic configuration information and synchronizing the changed traffic configuration information into the cache.
[0021] Preferably, the step of judging by the application server according to the traffic distribution and type information of a task to be currently processed whether the application server has a permission to process the task to be currently processed includes:
[0022] judging, if the application server judges that the task to be currently processed is an exclusive task, whether the traffic distribution to which the data center in which the application server resides corresponds is empty;
[0023] if not, the application server possesses the permission to process the task to be currently processed.
[0024] Preferably, the traffic distribution includes a set of sub-library numbers with read-write permission to which each data center corresponds, and the step of judging whether the traffic distribution to which the data center in which the application server resides corresponds is empty includes:
[0025] judging whether the set of sub-library numbers with read-write permission to which the data center in which the application server resides corresponds is empty.
[0026] Preferably, the multiple active data centers have a master data center, the traffic configuration information further includes identification of the master data center; and the step of judging by the application server according to the traffic distribution and type information of a task to be currently processed whether the application server has a permission to process the task to be currently processed includes:
[0027] judging, if the application server judges that the task to be currently processed is a competitive task, whether a data center identification to which the application server corresponds is identical with the master data center identification;
[0028] if yes, the application server possesses the permission to process the task to be currently processed.
[0029] Preferably, the traffic distribution includes a set of sub-library numbers with read-write permission to which each data center corresponds, and the step of loading the task by the application server to process the task includes:
[0030] searching by the application server for the task to be currently processed from a task queue of the cache, if the task is enquired out, judging whether the application server has a permission to process the task to be currently processed according to a sub-library number to which the task to be currently processed corresponds and according to a sub-library number with read-write permission of the data center in which the application server resides;
[0031] if yes, determining by the application server a status of the sub-library number to which the task to be currently processed corresponds as being processed and storing the same in task configuration information;
[0032] changing, if the task is processed to completion, by the application server the status of the sub-library number to which the task to be currently processed corresponds to to be processed and storing the same in the task configuration information.
[0033] In the second aspect, there is provided a traffic switching method based on multiple active data centers, and the method comprises:
[0034] obtaining data transmission status information of various data centers by a multi-active switching platform, wherein the multiple active data centers include at least two data centers; and
[0035] judging by the multi-active switching platform according to the status information and a preset condition, and generating traffic configuration information according to a preset rule when it is judged that traffic switching is required, so that an application server obtains the traffic configuration information and loads a task in conjunction with task configuration information as obtained to process the task after having received a task scheduling instruction, wherein the traffic configuration information is employed to indicate traffic distribution to which each data center corresponds.
[0036] Preferably, the step of judging by the multi-active switching platform according to the status information and a preset condition, and generating traffic configuration information according to a preset rule when it is judged that traffic switching is required includes:
[0037] performing traffic distribution, when the multi-active switching platform judges according to the status information that data transmission failure occurs to any data center, according to current traffic of data center(s) to which no failure occurs, a traffic threshold, and a rule to distribute traffic to which a competitive task corresponds to the same and single data center, to generate traffic configuration information that contains traffic distributions to which the various data centers correspond and an identification of a master data center that bears the competitive task.
[0038] Preferably, the method further comprises:
[0039] synchronizing the traffic configuration information into cache by the multi-active switching platform, so that the application server obtains the traffic configuration information from the cache; and
[0040] sending the latest traffic configuration information to the application server when the multi-active switching platform receives a traffic configuration information obtaining request from the application server.
[0041] In the third aspect, there is provided a traffic switching device based on multiple active data centers, and the device comprises:
[0042] a traffic configuration information obtaining unit, for performing an operation to obtain traffic configuration information after having received a task scheduling instruction, wherein the traffic configuration information is generated according to a preset rule when a multi-active switching platform judges according to data transmission status information of various data centers that traffic switching is required by a data center, the multiple active data centers include at least two data centers, and the traffic configuration information is employed to indicate traffic distribution to which each data center corresponds;
[0043] a parsing unit, for parsing the traffic configuration information, and obtaining traffic distribution to which the data center in which the application server resides corresponds;
[0044] a permission judging unit, for judging according to the traffic distribution and type information of a task to be currently processed whether there is a permission to process the task to be currently processed; and
[0045] a task processing unit, for obtaining task configuration information when it is judged that there is a processing permission, and loading a task in conjunction with the traffic distribution to process the task.
[0046] In the fourth aspect, there is provided a traffic switching device based on multiple active data centers, and the device comprises:
[0047] a data transmission status information obtaining unit, for obtaining data transmission status information of various data centers, wherein the multiple active data centers include at least two data centers; and
[0048] a traffic configuration information unit, for judging according to the status information and a preset condition, and generating traffic configuration information according to a preset rule when it is judged that traffic switching is required, so that an application server obtains the traffic configuration information and loads a task in conjunction with task configuration information as obtained to process the task after having received a task scheduling instruction, wherein the traffic configuration information is employed to indicate traffic distribution to which each data center corresponds.
[0049] According to the specific embodiments provided by the present application, the present application has disclosed the following technical effects.
[0050] The technical solutions of the present application make it possible to generate and obtain multi-active traffic configuration information automatically in real time under the scenario of multiple active data centers, and to proactively compensate to obtain multi-active traffic configuration information under the scenario of missing configuration.
[0051] Scheduling of tasks in the present application is capable of recognizing and parsing multi-active traffic configuration information, and supporting automatic switching of exclusive tasks and competitive tasks between computer rooms for the execution of business operation.
[0052] Task configuration and anti-concurrent operation in the present application are based on distributed cache, whereby performance consumption of the database is lowered.
[0053] Of course, it suffices for the product of the present application to achieve one of these effects.
BRIEF DESCRIPTION OF THE DRAWINGS
[0054] In order to describe the embodiments of the present application or the technical solutions in prior-art technology more clearly, accompanying drawings required to be used in the embodiments are briefly introduced below. Apparently, the drawings introduced below are merely partial embodiments of the present application, and it is possible for persons ordinarily skilled in the art to acquire other drawings based on these drawings without spending creative effort in the process.
[0055] Fig. 1 is a view illustrating a system scenario provided by the present application;
[0056] Fig. 2 is a flowchart illustrating processing of an exclusive task provided by the present application;
[0057] Fig. 3 is a flowchart illustrating processing of a competitive task provided by the present application;
[0058] Fig. 4 is a flowchart illustrating the method of Embodiment 1 of the present application;
and
[0059] Fig. 5 is a flowchart illustrating the method of Embodiment 2 of the present application.
DETAILED DESCRIPTION OF THE INVENTION
[0060] The technical solutions in the embodiments of the present application will be clearly and comprehensively described below with reference to the accompanying drawings in the embodiments of the present application. Apparently, the embodiments to be described are merely some, rather than all, of the embodiments of the present application.
All other embodiments obtainable by persons ordinarily skilled in the art on the basis of the embodiments in the present application shall all be covered by the protection scope of the present application.
[0061] To make the present application easily comprehensible, terms appearing in the present application are firstly explained.
[0062] Multi-active switching platform: an administration platform developed for the configuration, administration, and execution of traffic switching among the multiple active data centers. By maintaining the various application systems and component information within the platform, and through configured switching steps and multi-scenario switching tasks such as switching at the master data center level and switching at the non-master data center level, it realizes the execution and administration of both single data center traffic switching and multiple active data center traffic switching, undertakes the traffic switching task after a failure of the multiple active data centers, and ensures that switching is timely, comprehensive, visualizable and controllable.
[0063] Cell: a set of data of the minimally split dimension after the data of the data center is split according to designated data dimensions. On the logical level, a cell can complete the entire business on the data split within the cell. When a user request is assigned to a cell according to the dimension by which the data is split, the subsequent business of that user is completely enclosed within the cell. One cell can be a sub-library.
[0064] Data center LDC: a unit consisting of plural Cells whose businesses can be enclosed within them. To realize disaster recovery, the various data centers of the multiple active data centers, also referred to as computer rooms, are usually located geographically far from one another.
[0065] Exclusive task: the business data processed thereby exists only in a certain Cell, without intersecting with or being shared by other Cells.
[0066] Competitive task: the business data processed thereby is competed for among the various Cells. In order to prevent the business data from being processed separately by different data centers, which would render the data inconsistent, the business data of competitive tasks should be uniformly controlled by a single data center. The data center capable of processing competitive tasks is referred to as the master data center in the present application.
[0067] Traffic configuration information: it is information for indicating traffic distribution of the various data centers, including identifications of the various data centers and their corresponding Cell sets, such as a set of sub-library numbers to which each data center corresponds and which represents the set of sub-library data manipulable by each data center.
[0068] For example, the cache key is LdcInfo, and the value records the various data centers LDC and the cell set each is responsible for. Exemplarily, [{"effectiveLdc": "NJYH","cellList": "0,2,4,6,8,10,12,14"}, {"effectiveLdc": "NJGXYG","cellList": "1,3,5,7,9,11,13,15"}]. If a system business library has 16 sub-libraries, the total traffic can be partitioned into 16 Cells. If the entire traffic is partitioned to the master data center, the values configured for the cellList of the master data center are 0-15 and those of the sub data centers are empty; in the case of two data centers with traffic partitioned in half, the values configured for the cellList of the master data center are the set of even numbers in 0-15, the values configured for the cellList of the sub data center are the set of odd numbers in 0-15, and so on. The numerical values configured in the cellList represent the sub-library numbers with write permission.
[0069] The traffic configuration information further includes the configuration of the master data center: the cache key is MasterLdc (master data center), and the value is the English acronym of the master data center. For example, if the master data center is the Yuhua computer room of Nanjing, the value is NJYH. This configuration is used by competitive tasks to judge whether the current server belongs to the master data center.
[0070] Environment variable: a variable named ldc is configured in the server environment variables, and its value is the English acronym of the data center in which the current server is deployed. For example, if the server is deployed in the Yuhua computer room of Nanjing, NJYH is configured.
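By way of illustration only, the following Python sketch shows how an application server might parse a traffic configuration of the shape described above and compare its own environment variable with the master data center identification. The key names (LdcInfo, MasterLdc, ldc) are taken from the examples above; the function name and the fallback default are assumptions made for the sketch.

    import json
    import os

    def parse_traffic_config(ldc_info_json: str):
        """Turn the LdcInfo value into a mapping of data center -> set of sub-library numbers."""
        distribution = {}
        for entry in json.loads(ldc_info_json):
            cell_list = entry["cellList"].strip()
            cells = {int(n) for n in cell_list.split(",")} if cell_list else set()
            distribution[entry["effectiveLdc"]] = cells
        return distribution

    # Example values mirroring the configuration described above.
    ldc_info = ('[{"effectiveLdc": "NJYH", "cellList": "0,2,4,6,8,10,12,14"},'
                ' {"effectiveLdc": "NJGXYG", "cellList": "1,3,5,7,9,11,13,15"}]')
    master_ldc = "NJYH"

    distribution = parse_traffic_config(ldc_info)
    current_ldc = os.environ.get("ldc", "NJYH")   # data center of the current server
    print(distribution[current_ldc])              # sub-libraries this server may read and write
    print(current_ldc == master_ldc)              # True only for servers of the master data center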
[0071] As shown in Fig. 1, the system of the present application includes multiple active data centers (Fig. 1 shows three data centers, namely machine rooms); each data center includes a multi-active switching platform, a task scheduling platform, an application cluster, and a Redis distributed cache cluster. According to whether they process competitive tasks, the data centers can be divided into the master data center (master machine room) and sub data center(s) (sub machine room(s)).
[0072] The multi-active switching platform is used to generate traffic configuration information, and it is possible in the present application for the multi-active switching platform of the master data center to generate traffic configuration information and thereafter synchronize the same to the multi-active switching platforms of the sub data centers. On monitoring that there is new traffic configuration information, the application server of the application cluster reads the traffic configuration information and synchronizes the traffic configuration information to the redis distributed cache cluster. The task scheduling platform is used to schedule tasks by dispatching task scheduling instructions to various application servers of the application cluster to process the tasks. The application server reads the traffic configuration information from the redis distributed cache cluster according to a task scheduling instruction, if reading fails, the application server reads the traffic configuration information directly from the multi-active switching platform and stores the same in the redis distributed cache cluster, to facilitate quick reading next time. The application server will subsequently read task configuration information from the redis distributed cache cluster, and execute relevant task processing according to the traffic configuration information and the task configuration information.
This step will be described in detail later.
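The cache-first read with fallback to the multi-active switching platform can be sketched as follows. The Redis key LdcInfo comes from the earlier example, whereas the platform client object and its fetch_traffic_config() method are hypothetical placeholders for whatever interface the switching platform actually exposes.

    import json
    import redis

    cache = redis.Redis(host="localhost", port=6379, decode_responses=True)

    def get_traffic_config(platform_client):
        """Read the traffic configuration from the distributed cache, falling back to the platform."""
        raw = cache.get("LdcInfo")                         # try the Redis distributed cache first
        if raw is None:
            raw = platform_client.fetch_traffic_config()   # hypothetical call to the switching platform
            cache.set("LdcInfo", raw)                      # write back so the next read is quick
        return json.loads(raw)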
[0073] It is mentioned above that the multi-active switching platform automatically generates the traffic configuration information, and this is the first problem to be solved by the present application. The multi-active switching platform is utilized in the present application to monitor the data transmission statuses of the various data centers, such as the data transmission rate. When it judges from the monitoring that data transmission of any data center has failed, or that any other event triggering traffic switching has occurred, the traffic configuration information is automatically generated according to a preset rule.
[0074] Taking as an example traffic configuration information directed to the sets of sub-libraries on which each data center may perform read-write operations, the preset rule can be to distribute the sub-libraries of the failed data center to the other data centers as evenly as possible while the original traffic distributions of the other data centers remain unchanged; it can also be to distribute the sub-libraries of the failed data center to the data center currently carrying the least traffic; or it can be to redistribute the entire traffic among the remaining data centers.
[0075] Moreover, the distribution can further take into account the current statuses of the remaining data centers that have not failed; for instance, if the business volume of certain events at certain data centers has abruptly increased, traffic should, as far as possible, not be distributed to those data centers.
[0076] In addition, given the aforementioned competitive tasks, if the failed data center is the data center responsible for the competitive tasks, namely the master data center, a new master data center for the competitive tasks further needs to be designated in the traffic configuration information.
[0077] In short, the traffic configuring rule can be set in advance at the traffic switching platform, so that the platform automatically generates the traffic configuration information according to this rule and the data transmission statuses of the various data centers.
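One possible preset rule, the even redistribution of a failed data center's sub-libraries while keeping the survivors' original cells, might look like the following sketch. The data shapes and the choice of the first surviving center as the new master are assumptions for illustration, not a prescription of the platform's actual rule.

    def redistribute_on_failure(distribution, master_ldc, failed_ldc):
        """Spread the failed data center's sub-libraries evenly over the surviving centers."""
        orphaned = sorted(distribution[failed_ldc])
        survivors = [ldc for ldc in distribution if ldc != failed_ldc]
        new_distribution = {ldc: set(cells) for ldc, cells in distribution.items()}
        new_distribution[failed_ldc] = set()              # the failed center keeps no traffic
        for i, cell in enumerate(orphaned):               # round-robin the orphaned sub-libraries
            new_distribution[survivors[i % len(survivors)]].add(cell)
        # If the failed center was the master, a surviving center must take over competitive tasks.
        new_master = master_ldc if master_ldc != failed_ldc else survivors[0]
        return new_distribution, new_master

    config = {"NJYH": {0, 2, 4, 6}, "NJGXYG": {1, 3, 5, 7}}
    print(redistribute_on_failure(config, "NJYH", "NJYH"))
    # e.g. ({'NJYH': set(), 'NJGXYG': {0, 1, 2, 3, 4, 5, 6, 7}}, 'NJGXYG')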

[0078] What follows concerns how the application servers of the various data centers recognize the traffic configuration information and execute task processing based on the recognition result.
[0079] A preliminary judgment of whether the application server can execute the current task can first be made based on whether the traffic distribution of the data center to which the application server belongs is empty.
[0080] Each application server is configured, when it is set up, with information on the data center to which it belongs; such information is stored in the environment variable of the application server. For instance, if the environment variable value of an application server is "Beijing Haidian", then the application server belongs to the data center named "Beijing Haidian".
[0081] The application server parses the traffic configuration information and can thus obtain traffic distributions to which the various data centers correspond, such as sets of sub-library numbers with read-write permission to which the various data centers correspond.
[0082] For instance, a database has 16 sub-libraries altogether, numbered respectively as 1 to 16.
The traffic configuration information is parsed to determine that the master data center corresponds to sub-libraries 1-7, that the first sub data center corresponds to sub-libraries 8-12, and that the second sub data center corresponds to sub-libraries 13-16.
If the application server belongs to the first sub data center, the application server has read-write permission with respect to sub-libraries 8-12, that is, it can bear traffic tasks relevant to sub-libraries 8-12.
[0083] If the traffic configuration information is parsed to determine that the traffic distribution of the data center to which the application server belongs is empty, namely not corresponding to any sub-library, this indicates that the application server does not have read-write permission with respect to any sub-library, cannot execute any task, and directly exits the process at this time.
[0084] It is previously mentioned that tasks are divided into exclusive tasks and competitive tasks. As regards exclusive tasks, as shown in Fig. 2, whether the current task is operable is judged by parsing the traffic configuration of the current data center in the multi-active traffic configuration information: the task is operable if the traffic configuration, such as the set of sub-library numbers, has a value, and it is not operable if that set is empty.
[0085] Competitive tasks are processed by the designated master data center. Therefore, when the task to be currently processed is a competitive task, it is further required to judge whether the application server is a server of the master data center. As shown in Fig. 3, for this purpose an identification of the master data center is further provided in the traffic configuration information. The application server obtains its own data center identification from the environment variable and compares it with the identification of the master data center: if the two are consistent, the application server is a server of the master data center and can be used to execute the competitive task; if the two are inconsistent, the application server is not a server of the master data center, cannot be used to execute the competitive task, and can directly exit the process when the current task is a competitive task.
[0086] The application server can thus judge in advance, from the task type, the traffic distribution of its data center, and the identification of the master data center, whether the current task can be executed, and directly exits if it cannot.
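Combining the two task types, the preliminary permission check described above can be sketched as follows; the task_type string values and the function name are illustrative assumptions.

    def has_processing_permission(task_type, current_ldc, distribution, master_ldc):
        """Preliminary judgment of whether this server may process the current task."""
        my_cells = distribution.get(current_ldc, set())
        if not my_cells:                  # empty traffic distribution: exit without processing anything
            return False
        if task_type == "exclusive":      # exclusive task: a non-empty cell set is sufficient
            return True
        if task_type == "competitive":    # competitive task: only servers of the master data center
            return current_ldc == master_ldc
        return False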
[0087] In the case it is preliminarily judged that there is permission to execute the current task, the application server further obtains the task configuration, and loads and executes the specific task.
[0088] Specifically, the application server takes JOB QUEUE: task name as the KEY to obtain a task from the queue head of the Redis cached task queue. If no task is obtained, JOB TASKPENDING: task name is taken as the KEY to obtain the task configuration information of the sub-libraries with write permission from the Redis totally scheduled task cache, and the entries are loaded one by one to the queue tail of the Redis cached task queue. If the task configuration information cannot be found in the Redis total cache according to the sub-library numbers and the task name, the database is read: the task is queried from the public library, loaded into the Redis totally scheduled task cache, and further synchronized to the Redis cached task queue. The task configuration information contains the sub-library number to which the task corresponds; when the task configuration information of sub-libraries with write permission is obtained from the Redis totally scheduled task cache, it can be intersected with the sub-library numbers permitted to the application server, and only the tasks whose sub-library numbers fall in that intersection are loaded.
[0089] If a task is obtained from the queue head of the Redis queue according to the KEY JOB QUEUE: task name, it is then judged whether the obtained task is currently within the operable range, to guard against traffic switching at the computer room after the task was loaded into the queue. If the task is within the operable range, the task status in the task configuration cache is updated from "to be processed" to "being processed"; when the update succeeds, the business data in the sub-library is processed, and the task status is updated back to "to be processed" on completion of the processing. If the update fails, or the task status is already "being processed", or the currently obtained task is not in the operable range, the next task is obtained from the queue head of the Redis queue, until the messages in the Redis queue have all been consumed. An exclusive task is judged to be in the operable range according to the sub-library number of the task and the CellList configuration, and a competitive task is judged to be in the operable range according to the master computer room LDC and the LDC in the environment variable of the current server.
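A minimal sketch of this consumption loop follows. The Redis key spellings (JOB_QUEUE:..., JOB_TASKPENDING:...) mirror the keys named in the text but their exact format is assumed, as are the status strings and the process_business() callback.

    import redis

    r = redis.Redis(decode_responses=True)

    def consume_tasks(task_name, my_cells, process_business):
        """Drain the Redis cached task queue, processing only tasks in the operable range."""
        queue_key = f"JOB_QUEUE:{task_name}"
        pending_key = f"JOB_TASKPENDING:{task_name}"
        while True:
            raw = r.lpop(queue_key)                    # take the next task from the queue head
            if raw is None:
                break                                  # all messages in the queue have been consumed
            sub_library = int(raw)
            if sub_library not in my_cells:            # operable-range check for an exclusive task
                continue
            # Check the cached status; the shared-lock scheme described later guards this update.
            if r.hget(pending_key, sub_library) != "to be processed":
                continue
            r.hset(pending_key, sub_library, "being processed")
            process_business(sub_library)              # run the business logic for this sub-library
            r.hset(pending_key, sub_library, "to be processed")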
[0090] The problem of concurrent operation can also occur in the above process. In view thereof, the present application provides the following method, based on the Redis cache, to avoid concurrent operation; the method specifically comprises the following:
[0091] the application server takes JOB QUEUE: task name as the KEY to obtain a task configuration from the queue head of the Redis task queue, and judges whether a task configuration is obtained.
[0092] If the task configuration is obtained:
[0093] It is judged whether the task configuration is currently operable, to guard against switching at the computer room after it was loaded into the task queue. In the case of an exclusive task, operability of the current task is judged by parsing the CellList configuration of the current computer room LDC in the multi-active configuration; in the case of a competitive task, it is judged by comparing the LDC configuration of the master computer room in the multi-active configuration with the LDC configuration in the environment variable of the current server.
[0094] 1. If not operable, processing of the current task is terminated, and the next piece of task configuration is obtained from the task queue to continue execution.
[0095] 2. If operable, task name + sub-library number are taken as KEY to set up a Redis shared lock, and timeout is current system time + timeout constant value (millisecond), specifically:
[0096] 2.1 If setup of the shared lock fails, processing of the current task is terminated, and the next piece of task configuration is obtained from the task queue to continue execution.
[0097] 2.2 If setup of the shared lock succeeds, the cache to which the task configuration corresponds is obtained from the totally scheduled task cache:
[0098] 2.21 If the cache to which the task configuration corresponds is not obtained from the totally scheduled task cache, the task configuration of the sub-library is queried in the public library and, if found, loaded to the total task cache.
[0099] 2.22 The task status is judged in the total task cache:
[0100] 2.221 If the status is "to be processed", the status is updated to "being processed". If the update fails, the shared lock is released to terminate processing of the current task configuration, and the next piece of task configuration is obtained from the task queue to continue execution. If the update succeeds, the shared lock is released, and the specific business logic to which the task corresponds is executed; when the business logic is executed to completion, the task status is changed back to "to be processed", processing of the current task configuration is terminated, and the next piece of task configuration is obtained from the task queue to continue execution.
[0101] 2.222 If the status is "being processed", the shared lock is released, processing of the current task configuration is terminated, and the next piece of task configuration is obtained from the task queue to continue execution.
[0102] If the task configuration is not obtained:
[0103] It is judged whether it is required to load the task configuration (if the task is obtained from the queue for the first time and the task configuration is empty, the task configuration should be loaded; if task configuration has previously been acquired and was empty at the last obtainment, it is not loaded, to prevent the task from being continually scheduled and executed without ending). If loading is not required, the process exits; if loading is required, the following steps are executed:
[0104] 1. JOB TASK LOAD LOCK: task name is taken as the KEY, and the current system time + an invalidation time constant (in milliseconds) is taken as the Value to perform a setnx operation on Redis to add a shared lock, so as to prevent concurrent scheduling from repeatedly loading the tasks to be processed into the Redis task queue (a code sketch of this locking scheme follows this procedure).

[0105] 1.1 If setup of the shared lock fails, the shared lock is checked as to whether it has been invalidated, to prevent abnormal unlocking from leaving the task permanently in the locked status; the shared lock is not invalidated if its value is greater than the current system time, and is invalidated if its value is smaller than the current system time:
[0106] 1.11. If the shared lock is not invalidated, exiting is effected;
[0107] 1.12. If the shared lock is invalidated, the current value of the shared lock is first obtained; a GetSet operation of Redis is then performed on the shared lock, the new value being the current system time + the invalidation time constant (in milliseconds), and the value obtained beforehand is compared with the value returned by GetSet. If the two values are unequal, a concurrent operation has taken place and the process exits directly. If the two values are equal, the task configuration can be loaded, and JOB TASKPENDING: task name is taken as the KEY to obtain the task total configuration cache from Redis:
[0108] 1.2 It is judged whether task configuration is obtained from the cache: if it is not obtained, the task scheduling sheet of the public library is queried according to the task name, and the task configuration is loaded to the total task configuration cache.
[0109] 1.3 Any task configuration whose status is being processed is screened out.
[0110] 1.4 It is judged whether the current task is a competitive task or an exclusive task (the differentiation is made according to the functional business; the task type is determined and hard-coded before the code is written):
[0111] 1.41. In the case of an exclusive task, the intersection between the CellList and the sub-library numbers of the tasks to be processed is calculated, the calculated intersection is pushed to the Redis queue whose KEY is JOB QUEUE: task name, and the shared lock whose KEY is JOB TASK LOAD LOCK: task name is released;
[0112] 1.42. In the case of a competitive task, all the task configurations to be processed are pushed to the Redis queue whose KEY is JOB QUEUE: task name, and the shared lock whose KEY is JOB TASK LOAD LOCK: task name is released.
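The setnx/GetSet loading lock in steps 1 to 1.12 above can be sketched as follows; the key spelling, the invalidation constant, and the helper names are assumptions for the purpose of illustration.

    import time
    import redis

    r = redis.Redis(decode_responses=True)
    INVALIDATION_MS = 60_000                              # assumed invalidation time constant

    def try_acquire_load_lock(task_name):
        """Acquire the task-loading shared lock, taking over an invalidated lock via GETSET."""
        lock_key = f"JOB_TASK_LOAD_LOCK:{task_name}"
        now_ms = int(time.time() * 1000)
        if r.setnx(lock_key, now_ms + INVALIDATION_MS):
            return True                                   # lock set normally
        current = r.get(lock_key)
        if current is not None and int(current) > now_ms:
            return False                                  # lock held and not yet invalidated
        # The lock looks invalidated: take it over and confirm no concurrent taker beat us to it.
        previous = r.getset(lock_key, now_ms + INVALIDATION_MS)
        return previous == current                        # equal values mean we won the takeover

    def release_load_lock(task_name):
        r.delete(f"JOB_TASK_LOAD_LOCK:{task_name}")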
[0113] Seen as such, in the present application traffic switching of the multiple active data centers is changed from the original manual modification of a configuration file to automatic recognition of the switching platform's instruction and real-time switching, whereby availability of the system is enhanced, and the business congestion time and the huge financial losses caused by failure-driven traffic switching are reduced.
[0114] Reading and writing of task configurations and prevention of concurrent operations are based on the Redis cache, whereby performance consumption of the database is greatly reduced, the upper limit of the concurrent quantity of tasks is enlarged, and execution speed of tasks is accelerated.
[0115] Enquiry of tasks to be processed is based on the Redis queue, whereby the number of times for which the system traverses the totally scheduled task cache configurations is greatly reduced, the number of times for which Redis is accessed is greatly reduced, and execution speed of tasks is accelerated.
[0116] Embodiment 1
[0117] In summary, Embodiment 1 of the present application provides a traffic switching method based on multiple active data centers, as shown in Fig. 4, the method comprises the following.
[0118] S41 - performing an operation to obtain traffic configuration information after an application server has received a task scheduling instruction, wherein the traffic configuration information is generated according to a preset rule when a multi-active switching platform judges according to data transmission status information of various data centers that data transmission failure occurs to a data center, the multiple active data centers include at least two data centers, and the traffic configuration information is employed to indicate traffic distribution to which each data center corresponds.
[0119] The application server obtains the traffic configuration information through the following steps:
[0120] reading cache by the application server and judging whether the traffic configuration information is present in the cache;
[0121] if not, reading the traffic configuration information from the multi-active switching platform by the application server.
[0122] In addition, when the application server monitors that change occurs to the traffic configuration information of the multi-active switching platform, the changed traffic configuration information will be read and synchronized into the cache.
[0123] S42 ¨ parsing the traffic configuration information by the application server, and obtaining traffic distribution to which the data center in which the application server resides corresponds.
[0124] S43 ¨ judging by the application server according to the traffic distribution and type information of a task to be currently processed whether the application server has a permission to process the task to be currently processed.
[0125] This step specifically includes: judging, if the application server judges that the task to be currently processed is an exclusive task, whether the traffic distribution to which the data center in which the application server resides corresponds is empty;
[0126] if not, the application server possesses the permission to process the task to be currently processed.
[0127] The traffic distribution can include a set of sub-library numbers with read-write permission to which each data center corresponds;
[0128] the step of judging whether the traffic distribution to which the data center in which the application server resides corresponds is empty includes:
[0129] judging whether the set of sub-library numbers with read-write permission to which the data center in which the application server resides corresponds is empty.
[0130] S44 - if yes, loading the task by the application server to process the task.
[0131] In a preferred embodiment, the multiple active data centers have a master data center, and the traffic configuration information further includes identification of the master data center;
[0132] the step of judging by the application server according to the traffic distribution and type information of a task to be currently processed whether the application server has a permission to process the task to be currently processed includes:
[0133] judging, if the application server judges that the task to be currently processed is a competitive task, whether a data center identification to which the application server corresponds is identical with the master data center identification;
[0134] if yes, the application server possesses the permission to process the task to be currently processed.
[0135] In a preferred embodiment, the traffic distribution includes a set of sub-library numbers with read-write permission to which each data center corresponds, and the step of loading the task by the application server to process the task includes:
[0136] searching by the application server for the task to be currently processed from a task queue of the cache, if the task is enquired out, judging whether the application server has a permission to process the task to be currently processed according to a sub-library number to which the task to be currently processed corresponds and according to a sub-library number with read-write permission of the data center in which the application server resides;
[0137] if yes, determining by the application server a status of the sub-library number to which the task to be currently processed corresponds as being processed and storing the same in task configuration information;
[0138] changing, if the task is processed to completion, by the application server the status of the sub-library number to which the task to be currently processed corresponds to to be processed and storing the same in the task configuration information.
[0139] Embodiment 2
[0140] Corresponding to the aforementioned application server, Embodiment 2 of the present application provides a traffic switching method based on multiple active data centers, as shown in Fig. 5, the method comprises:
[0141] S51 - obtaining data transmission status information of various data centers by a multi-active switching platform, wherein the multiple active data centers include at least two data centers; and
[0142] S52 -judging by the multi-active switching platform according to the status information and a preset condition, and generating traffic configuration information according to a preset rule when it is judged that traffic switching is required, so that an application server obtains the traffic configuration information and loads a task in conjunction with task configuration information as obtained to process the task after having received a task scheduling instruction, wherein the traffic configuration information is employed to indicate traffic distribution to which each data center corresponds.
[0143] The step of judging by the multi-active switching platform according to the status information and a preset condition, and generating traffic configuration information according to a preset rule when it is judged that traffic switching is required includes:
[0144] performing traffic distribution, when the multi-active switching platform judges according to the status information that data transmission failure occurs to any data center, according to current traffic of data center(s) to which no failure occurs, a traffic threshold, and a rule to distribute traffic to which a competitive task corresponds to the same and single data center, to generate traffic configuration information that contains traffic distributions to which the various data centers correspond and an identification of a master data center that bears the competitive task.

[0145] Preferably, the method further comprises:
[0146] synchronizing the traffic configuration information into cache by the multi-active switching platform, so that the application server obtains the traffic configuration information from the cache; and
[0147] sending the latest traffic configuration information to the application server when the multi-active switching platform receives a traffic configuration information obtaining request from the application server.
[0148] Embodiment 3
[0149] Corresponding to the above Embodiment 1, Embodiment 3 of the present application provides a traffic switching device based on multiple active data centers, and the device comprises:
[0150] a traffic configuration information obtaining unit, for performing an operation to obtain traffic configuration information after having received a task scheduling instruction, wherein the traffic configuration information is generated according to a preset rule when a multi-active switching platform judges according to data transmission status information of various data centers that traffic switching is required by a data center, the multiple active data centers include at least two data centers, and the traffic configuration information is employed to indicate traffic distribution to which each data center corresponds; preferably, the traffic configuration information obtaining unit is specifically employed for reading cache and judging whether the traffic configuration information is present in the cache; if not, reading the traffic configuration information from the multi-active switching platform;
[0151] a parsing unit, for parsing the traffic configuration information, and obtaining traffic distribution to which the data center in which the application server resides corresponds;
[0152] a permission judging unit, for judging according to the traffic distribution and type information of a task to be currently processed whether there is a permission to process the task to be currently processed; preferably, the permission judging unit is specifically employed for judging, when it is judged that the task to be currently processed is an exclusive task, whether the traffic distribution to which the data center in which the application server resides corresponds is empty; if not, determining that there is the permission to process the task to be currently processed; and
[0153] a task processing unit, for obtaining task configuration information when it is judged that there is a processing permission, and loading a task in conjunction with the traffic distribution to process the task.
[0154] Embodiment 4
[0155] Corresponding to the above Embodiment 2, Embodiment 4 of the present application provides a traffic switching device based on multiple active data centers, and the device comprises:
[0156] a data transmission status information obtaining unit, for obtaining data transmission status information of various data centers, wherein the multiple active data centers include at least two data centers; and
[0157] a traffic configuration information unit, for judging according to the status information and a preset condition, and generating traffic configuration information according to a preset rule when it is judged that traffic switching is required, so that an application server obtains the traffic configuration information and loads a task in conjunction with task configuration information as obtained to process the task after having received a task scheduling instruction, wherein the traffic configuration information is employed to indicate traffic distribution to which each data center corresponds.
[0158] Preferably, the traffic configuration information unit is specifically employed for performing traffic distribution, when it is judged according to the status information that data transmission failure occurs to any data center, according to current traffic of data center(s) to which no failure occurs, a traffic threshold, and a rule to distribute traffic to which a competitive task corresponds to the same and single data center, to generate traffic configuration information that contains traffic distributions to which the various data centers correspond and an identification of a master data center that bears the competitive task.
[0159] Preferably, the device further comprises:
[0160] a traffic configuration information synchronizing unit, for synchronizing the traffic configuration information into cache, so that the application server obtains the traffic configuration information from the cache; and
[0161] a traffic configuration information sending unit, for sending the latest traffic configuration information to the application server when a traffic configuration information obtaining request is received from the application server.
[0162] As can be known from the description of the aforementioned embodiments, persons skilled in the art will clearly understand that the present application can be realized through software plus a general hardware platform. Based on such understanding, the technical solutions of the present application, or the contributions made thereby over the prior art, can essentially be embodied in the form of a software product, and such a computer software product can be stored in a storage medium, such as a ROM/RAM, a magnetic disk, an optical disk, etc., and includes a plurality of instructions enabling a computer device (such as a personal computer, a cloud server, or a network device) to execute the methods described in the various embodiments or in some sections of the embodiments of the present application.
[0163] The various embodiments are described progressively in this Description; identical or similar sections among the various embodiments can be inferred from one another, and each embodiment stresses what is different from the other embodiments.
Particularly, with respect to the system or system embodiment, since it is essentially similar to the method embodiment, its description is relatively simple, and the relevant sections thereof can be inferred from the corresponding sections of the method embodiment. The system or system embodiment as described above is merely exemplary in nature; units described therein as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, that is to say, they can be located at a single site or distributed over a plurality of network units. Some or all of the modules can be selected based on practical requirements to realize the objectives of the solutions of the embodiments, which is understandable and implementable by persons ordinarily skilled in the art without creative effort.
[0164] The traffic switching methods and devices provided by the present application are described in detail above. Specific examples are employed herein to elucidate the principles and modes of execution of the present application, and the descriptions of the above embodiments are merely meant to help understand the methods and core conceptions of the present application; at the same time, persons ordinarily skilled in the art may make variations to both the specific modes of execution and the scope of application based on the conceptions of the present application. In summary, the contents of this Description shall not be understood as restricting the present application.


Claims (10)

What is claimed is:
1. A traffic switching method based on multiple active data centers, characterized in that the method comprises:
performing an operation to obtain traffic configuration information after an application server has received a task scheduling instruction, wherein the traffic configuration information is generated according to a preset rule by a multi-active switching platform when it judges according to data transmission status information of various data centers that traffic switching is required by a data center, the multiple active data centers include at least two data centers, and the traffic configuration information is employed to indicate traffic distribution to which each data center corresponds;
parsing the traffic configuration information by the application server, and obtaining traffic distribution to which the data center in which the application server resides corresponds;
judging by the application server according to the traffic distribution and type information of a task to be currently processed whether the application server has a permission to process the task to be currently processed; and if yes, loading the task by the application server to process the task.
2. The method according to Claim 1, characterized in that the application server obtains the traffic configuration information through the following steps:
reading cache by the application server and judging whether the traffic configuration information is present in the cache;
if not, reading the traffic configuration information from the multi-active switching platform by the application server; and reading, when the application server monitors that a change occurs to the traffic configuration information of the multi-active switching platform, the changed traffic configuration information and synchronizing the changed traffic configuration information into the cache.
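Claim 2 does not fix how a change to the traffic configuration is detected. The following Java sketch, which is illustrative only and forms no part of the claims, assumes a polling watcher driven by a version number exposed by the multi-active switching platform, combined with the cache-first read recited above.

    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class ConfigWatcher {

        public interface Platform { long version(); String fetchConfig(); }
        public interface Cache { String get(String key); void put(String key, String value); }

        private static final String CONFIG_KEY = "traffic-config";
        private final Platform platform;
        private final Cache cache;
        private volatile long lastSeenVersion = -1;

        public ConfigWatcher(Platform platform, Cache cache) {
            this.platform = platform;
            this.cache = cache;
        }

        // Cache-first read with a fallback to the multi-active switching platform.
        public String obtainConfig() {
            String cached = cache.get(CONFIG_KEY);
            if (cached != null) {
                return cached;
            }
            String fresh = platform.fetchConfig();
            cache.put(CONFIG_KEY, fresh);
            return fresh;
        }

        // Periodically poll for changes and synchronize the changed configuration into the cache.
        public void startMonitoring(ScheduledExecutorService scheduler) {
            scheduler.scheduleAtFixedRate(() -> {
                long v = platform.version();
                if (v != lastSeenVersion) {
                    lastSeenVersion = v;
                    cache.put(CONFIG_KEY, platform.fetchConfig());
                }
            }, 0L, 5L, TimeUnit.SECONDS);
        }
    }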
3. The method according to Claim 1, characterized in that the step of judging by the application server according to the traffic distribution and type information of a task to be currently processed whether the application server has a permission to process the task to be currently processed includes:
judging, if the application server judges that the task to be currently processed is an exclusive task, whether the traffic distribution to which the data center in which the application server resides corresponds is empty;
if not, the application server possesses the permission to process the task to be currently processed.
4. The method according to Claim 1, characterized in that the multiple active data centers have a master data center, that the traffic configuration information further includes an identification of the master data center; and that the step of judging by the application server according to the traffic distribution and type information of a task to be currently processed whether the application server has a permission to process the task to be currently processed includes:
judging, if the application server judges that the task to be currently processed is a competitive task, whether a data center identification to which the application server corresponds is identical with the master data center identification;
if yes, the application server possesses the permission to process the task to be currently processed.
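Illustratively, and forming no part of the claims, the check recited in Claim 4 reduces to comparing the local data center identification with the master data center identification carried in the traffic configuration information; the method name below is hypothetical.

    public final class CompetitivePermission {

        private CompetitivePermission() {}

        // A competitive task may be processed only by the master data center.
        public static boolean canProcessCompetitiveTask(String localDcId, String masterDcId) {
            return localDcId != null && localDcId.equals(masterDcId);
        }
    }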
5. The method according to Claim 1, characterized in that the traffic distribution includes a set of sub-library numbers with read-write permission to which each data center corresponds, and that the step of loading the task by the application server to process the task includes:
searching, by the application server, a task queue of the cache for the task to be currently processed; if the task is found, judging whether the application server has a permission to process the task to be currently processed according to a sub-library number to which the task to be currently processed corresponds and according to a sub-library number with read-write permission of the data center in which the application server resides;
if yes, determining by the application server a status of the sub-library number to which the task to be currently processed corresponds as "being processed" and storing the same in task configuration information;
changing, if the task is processed to completion, by the application server, the status of the sub-library number to which the task to be currently processed corresponds to "to be processed", and storing the same in the task configuration information.
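The following Java sketch, illustrative only and forming no part of the claims, walks through the sub-library based processing recited in Claim 5; the interfaces TaskQueue and TaskConfigStore are assumptions, and the status strings mirror the literal wording of the claim.

    import java.util.Optional;
    import java.util.Set;

    public class SubLibraryTaskProcessor {

        public record QueuedTask(String taskId, String subLibraryNo) {}

        public interface TaskQueue { Optional<QueuedTask> find(String taskId); }
        public interface TaskConfigStore { void setStatus(String subLibraryNo, String status); }

        // Sub-library numbers for which the local data center holds read-write permission.
        private final Set<String> writableSubLibraries;
        private final TaskQueue cachedTaskQueue;
        private final TaskConfigStore taskConfig;

        public SubLibraryTaskProcessor(Set<String> writableSubLibraries,
                                       TaskQueue cachedTaskQueue, TaskConfigStore taskConfig) {
            this.writableSubLibraries = writableSubLibraries;
            this.cachedTaskQueue = cachedTaskQueue;
            this.taskConfig = taskConfig;
        }

        public void process(String taskId, Runnable taskBody) {
            // Search the task queue of the cache for the task to be currently processed.
            Optional<QueuedTask> found = cachedTaskQueue.find(taskId);
            if (found.isEmpty()) {
                return;
            }
            String subLibraryNo = found.get().subLibraryNo();

            // Permission check: the task's sub-library number must be among those the
            // local data center may read and write.
            if (!writableSubLibraries.contains(subLibraryNo)) {
                return;
            }

            // Mark the sub-library number as "being processed" in the task configuration information.
            taskConfig.setStatus(subLibraryNo, "being processed");
            taskBody.run();

            // On completion, change the status to "to be processed" and store it,
            // following the literal wording of the claim.
            taskConfig.setStatus(subLibraryNo, "to be processed");
        }
    }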
6. A traffic switching method based on multiple active data centers, characterized in that the method comprises:
obtaining data transmission status information of various data centers by a multi-active switching platform, wherein the multiple active data centers include at least two data centers; and judging by the multi-active switching platform according to the status information and a preset condition, and generating traffic configuration information according to a preset rule when it is judged that traffic switching is required, so that an application server obtains the traffic configuration information and loads a task in conjunction with task configuration information as obtained to process the task after having received a task scheduling instruction, wherein the traffic configuration information is employed to indicate traffic distribution to which each data center corresponds.
7. The method according to Claim 6, characterized in that the step of judging by the multi-active switching platform according to the status information and a preset condition, and generating traffic configuration information according to a preset rule when it is judged that traffic switching is required includes:
performing traffic distribution, when the multi-active switching platform judges according to the status information that a data transmission failure occurs in any data center, according to current traffic of the data center(s) in which no failure occurs, a traffic threshold, and a rule to distribute traffic to which a competitive task corresponds to a single data center, to generate traffic configuration information that contains the traffic distributions to which the various data centers correspond and an identification of a master data center that bears the competitive task.
8. The method according to Claim 6, characterized in further comprising:
synchronizing the traffic configuration information into cache by the multi-active switching platform, so that the application server obtains the traffic configuration information from the cache; and sending the latest traffic configuration information to the application server when the multi-active switching platform receives a traffic configuration information obtaining request from the application server.
9. A traffic switching device based on multiple active data centers, characterized in that the device comprises:
a traffic configuration information obtaining unit, for performing an operation to obtain traffic configuration information after having received a task scheduling instruction, wherein the traffic configuration information is generated according to a preset rule by a multi-active switching platform when it judges according to data transmission status information of various data centers that traffic switching is required by a data center, the multiple active data centers include at least two data centers, and the traffic configuration information is employed to indicate traffic distribution to which each data center corresponds;
a parsing unit, for parsing the traffic configuration information, and obtaining traffic distribution of the data center in which the application server resides;
a permission judging unit, for judging according to the traffic distribution and type information of a task to be currently processed whether there is a permission to process the task to be currently processed; and a task processing unit, for obtaining task configuration information when it is judged that there is a processing permission, and loading a task in conjunction with the traffic distribution to process the task.
10. A traffic switching device based on multiple active data centers, characterized in that the device comprises:
a data transmission status information obtaining unit, for obtaining data transmission status information of various data centers, wherein the multiple active data centers include at least two data centers; and a traffic configuration information unit, for judging according to the status information and a preset condition, and generating traffic configuration information according to a preset rule when it is judged that traffic switching is required, so that an application server obtains the traffic configuration information and loads a task in conjunction with task configuration information as obtained to process the task after having received a task scheduling instruction, wherein the traffic configuration information is employed to indicate traffic distribution to which each data center corresponds.
CA3162740A 2019-11-26 2020-06-19 Traffic switching methods and devices based on multiple active data centers Pending CA3162740A1 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
CN201911174942.0 2019-11-26
CN201911174942.0A CN110990200B (en) 2019-11-26 2019-11-26 Flow switching method and device based on multiple active data centers
PCT/CN2020/097003 WO2021103499A1 (en) 2019-11-26 2020-06-19 Multi-active data center-based traffic switching method and device

Publications (1)

Publication Number Publication Date
CA3162740A1 true CA3162740A1 (en) 2021-06-03

Family

ID=70086988

Family Applications (1)

Application Number Title Priority Date Filing Date
CA3162740A Pending CA3162740A1 (en) 2019-11-26 2020-06-19 Traffic switching methods and devices based on multiple active data centers

Country Status (3)

Country Link
CN (1) CN110990200B (en)
CA (1) CA3162740A1 (en)
WO (1) WO2021103499A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114331576A (en) * 2021-12-30 2022-04-12 福建博思软件股份有限公司 Electronic ticket number rapid ticket taking method based on high concurrency scene and storage medium
CN114465960A (en) * 2022-02-07 2022-05-10 北京沃东天骏信息技术有限公司 Flow switching method and device and storage medium

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110990200B (en) * 2019-11-26 2022-07-05 苏宁云计算有限公司 Flow switching method and device based on multiple active data centers
CN113300966B (en) * 2020-07-27 2024-05-28 阿里巴巴集团控股有限公司 Flow control method, device, system and electronic equipment
CN112751782B (en) * 2020-12-29 2022-09-30 微医云(杭州)控股有限公司 Flow switching method, device, equipment and medium based on multi-activity data center
CN113590314A (en) * 2021-07-13 2021-11-02 上海一谈网络科技有限公司 Network request data processing method and system
CN117453150B (en) * 2023-12-25 2024-04-05 杭州阿启视科技有限公司 Method for implementing multiple instances of video storage scheduling service

Family Cites Families (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7870277B2 (en) * 2007-03-12 2011-01-11 Citrix Systems, Inc. Systems and methods for using object oriented expressions to configure application security policies
US8370835B2 (en) * 2009-03-12 2013-02-05 Arend Erich Dittmer Method for dynamically generating a configuration for a virtual machine with a virtual hard disk in an external storage device
US9185166B2 (en) * 2012-02-28 2015-11-10 International Business Machines Corporation Disjoint multi-pathing for a data center network
CN103888378B (en) * 2014-04-09 2017-08-25 北京京东尚科信息技术有限公司 A kind of data exchange system and method based on caching mechanism
US9565129B2 (en) * 2014-09-30 2017-02-07 International Business Machines Corporation Resource provisioning planning for enterprise migration and automated application discovery
CN104407964B (en) * 2014-12-08 2017-10-27 国家电网公司 A kind of centralized monitoring system and method based on data center
CN104506614B (en) * 2014-12-22 2018-07-31 国家电网公司 A kind of design method at the more live data centers of distribution based on cloud computing
CN106980625B (en) * 2016-01-18 2020-08-04 阿里巴巴集团控股有限公司 Data synchronization method, device and system
CN107231221B (en) * 2016-03-25 2020-10-23 阿里巴巴集团控股有限公司 Method, device and system for controlling service flow among data centers
CN106506588A (en) * 2016-09-23 2017-03-15 北京许继电气有限公司 How polycentric data center's dual-active method and system
CN109819004B (en) * 2017-11-22 2021-11-02 中国人寿保险股份有限公司 Method and system for deploying multi-activity data centers
CN108089923A (en) * 2017-12-15 2018-05-29 中国民航信息网络股份有限公司 User's access area division methods and device based on weighted Voronoi diagrams figure
CN109542659A (en) * 2018-11-14 2019-03-29 深圳前海微众银行股份有限公司 Using more activating methods, equipment, data center's cluster and readable storage medium storing program for executing
CN109660466A (en) * 2019-02-26 2019-04-19 浪潮软件集团有限公司 A kind of more live load balance realizing methods towards cloud data center tenant
CN110166524B (en) * 2019-04-12 2023-04-07 未鲲(上海)科技服务有限公司 Data center switching method, device, equipment and storage medium
CN110225138B (en) * 2019-06-25 2021-12-14 深圳前海微众银行股份有限公司 Distributed architecture
CN110990200B (en) * 2019-11-26 2022-07-05 苏宁云计算有限公司 Flow switching method and device based on multiple active data centers

Also Published As

Publication number Publication date
CN110990200A (en) 2020-04-10
WO2021103499A1 (en) 2021-06-03
CN110990200B (en) 2022-07-05

Similar Documents

Publication Publication Date Title
CA3162740A1 (en) Traffic switching methods and devices based on multiple active data centers
CN109729129B (en) Configuration modification method of storage cluster system, storage cluster and computer system
Enes et al. State-machine replication for planet-scale systems
KR101547719B1 (en) Maintaining data integrity in data servers across data centers
CN109842651B (en) Uninterrupted service load balancing method and system
CN111327467A (en) Server system, disaster recovery backup method thereof and related equipment
CN113515499B (en) Database service method and system
US20150261784A1 (en) Dynamically Varying the Number of Database Replicas
US20140019801A1 (en) Multiple hyperswap replication sessions
CN106487486B (en) Service processing method and data center system
CN101136728A (en) Cluster system and method for backing up a replica in a cluster system
US7730029B2 (en) System and method of fault tolerant reconciliation for control card redundancy
CN113282564B (en) Data storage method, system, node and storage medium
CN112190924A (en) Data disaster tolerance method, device and computer readable medium
CN111988347B (en) Data processing method of board hopping machine system and board hopping machine system
CN111240901B (en) Node dynamic expansion system, method and equipment of distributed block storage system
CN114238495A (en) Method and device for switching main cluster and standby cluster of database, computer equipment and storage medium
CN110377664B (en) Data synchronization method, device, server and storage medium
CN114328033A (en) Method and device for keeping service configuration consistency of high-availability equipment group
US20230004465A1 (en) Distributed database system and data disaster backup drilling method
CN113986450A (en) Virtual machine backup method and device
CN116389233B (en) Container cloud management platform active-standby switching system, method and device and computer equipment
CN116233245A (en) Remote multi-activity system, information processing method thereof and configuration server
CN114610545A (en) Method, system, device and medium for reducing single point of failure of private cloud computing
CN112800028A (en) Fault self-recovery method and device for MySQL group replication

Legal Events

Date Code Title Description
EEER Examination request

Effective date: 20220916
