WO2022088861A1 - Database fault handling method and apparatus

Database fault handling method and apparatus

Info

Publication number
WO2022088861A1
WO2022088861A1 PCT/CN2021/113235 CN2021113235W WO2022088861A1 WO 2022088861 A1 WO2022088861 A1 WO 2022088861A1 CN 2021113235 W CN2021113235 W CN 2021113235W WO 2022088861 A1 WO2022088861 A1 WO 2022088861A1
Authority
WO
WIPO (PCT)
Prior art keywords
database
upstream
downstream
data
service
Prior art date
Application number
PCT/CN2021/113235
Other languages
English (en)
Chinese (zh)
Inventor
朱绍辉
董俊峰
强群力
刘超千
赵彤
周欢
陈瑛绮
余星
韦鹏程
孟令银
王鹏
陈飞
Original Assignee
网联清算有限公司
Priority date
Filing date
Publication date
Application filed by 网联清算有限公司 filed Critical 网联清算有限公司
Publication of WO2022088861A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/21Design, administration or maintenance of databases
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/14Error detection or correction of the data by redundancy in operation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/14Error detection or correction of the data by redundancy in operation
    • G06F11/1402Saving, restoring, recovering or retrying
    • G06F11/1446Point-in-time backing up or restoration of persistent data
    • G06F11/1458Management of the backup or restore process
    • G06F11/1464Management of the backup or restore process for networked environments
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/14Error detection or correction of the data by redundancy in operation
    • G06F11/1402Saving, restoring, recovering or retrying
    • G06F11/1446Point-in-time backing up or restoration of persistent data
    • G06F11/1458Management of the backup or restore process
    • G06F11/1469Backup restoration techniques

Definitions

  • the present application relates to the field of network technologies, and in particular, to a method and apparatus for processing database faults.
  • In the related art, when a standby database fails, the database administrator needs to manually modify the synchronization relationship of the downstream database, verify the data consistency between the master and the slave, and remove the faulty database from the architecture in order to establish a new topology, so the operation efficiency is low.
  • the present application aims to solve one of the technical problems in the related art at least to a certain extent.
  • An object of the present application is to propose a database fault processing method, so as to automatically restore the link according to the connection between the upstream service database and the downstream service database when the target standby database fails, thereby ensuring that the entire link operates normally.
  • the second objective of the present application is to provide a database fault processing device.
  • the third object of the present application is to propose a computer device.
  • a fourth object of the present application is to propose a non-transitory computer-readable storage medium.
  • A first aspect of the embodiments of the present application proposes a database fault handling method, including: detecting that the target standby database is faulty, and acquiring the server address identifier corresponding to the target standby database; determining, according to the server address identifier, the upstream node address identifier and the downstream node address identifier corresponding to the target standby database; detecting whether the upstream service database corresponding to the upstream node address identifier is normal and whether the downstream service database corresponding to the downstream node address identifier is normal; and, if both the upstream service database and the downstream service database are normal, performing a link recovery configuration operation on the upstream service database and the downstream service database.
  • a database fault processing device including:
  • an acquisition module configured to detect that the target standby database is faulty, and acquire the server address identifier corresponding to the target standby database
  • a determination module, configured to determine, according to the server address identifier, the upstream node address identifier and the downstream node address identifier corresponding to the target standby database; a detection module, configured to detect whether the upstream service database corresponding to the upstream node address identifier is normal, and to detect whether the downstream service database corresponding to the downstream node address identifier is normal; and a repair module, configured to perform a link recovery configuration operation on the upstream service database and the downstream service database when both are normal.
  • A third aspect of the present application provides a computer device, comprising: a processor; and a memory for storing instructions executable by the processor; wherein the processor is configured to implement the database fault handling method described in the foregoing embodiments.
  • A fourth aspect of the present application provides a non-transitory computer-readable storage medium; when instructions in the storage medium are executed by a processor of a computer device, the computer device can execute the database fault handling method described in the foregoing embodiments.
  • When it is detected that the target standby database is faulty, the server address identifier corresponding to the target standby database is obtained; then, according to the server address identifier, the upstream node address identifier and the downstream node address identifier corresponding to the target standby database are determined; finally, it is detected whether the upstream service database corresponding to the upstream node address identifier is normal and whether the downstream service database corresponding to the downstream node address identifier is normal. If both the upstream service database and the downstream service database are normal, the link recovery configuration operation is performed on the upstream service database and the downstream service database.
  • In this way, when the target standby database fails, the link is automatically restored according to the connection between the upstream service database and the downstream service database, ensuring that the entire link remains normal and avoiding downstream data-transmission failures caused by the interruption of the intermediate standby database.
  • FIG. 1 is a schematic flowchart of a database fault processing method provided by an embodiment of the present application
  • FIG. 2 is a schematic flowchart of a method for detecting a fault in a target standby database provided by an embodiment of the present application
  • FIG. 3 is a schematic flowchart of a method for detecting whether an upstream service database and a downstream service database are faulty according to an embodiment of the present application
  • FIG. 4 is a flowchart of a method for synchronously restoring data to a target standby database according to an embodiment of the present application
  • FIG. 5 is a flowchart of another method for performing data synchronization restoration on a target standby database provided by an embodiment of the present application.
  • FIG. 6 is a schematic structural diagram of a database fault processing apparatus provided by an embodiment of the present application.
  • FIG. 1 is a schematic flowchart of a database fault processing method provided by an embodiment of the present application.
  • An embodiment of the present application provides a database fault processing method in order to actively detect and repair database data and to actively restore the upstream and downstream links in real time when the target standby database fails. The method includes the following steps:
  • Step 101 detecting that the target standby database is faulty, and obtaining a server address identifier corresponding to the target standby database;
  • the failure of the target standby database can be understood as any one of different types of failures of the target standby database caused by various reasons, including but not limited to network failure, data loss, data overflow, and the like.
  • each target standby database is in an uninterrupted monitoring state.
  • The corresponding address identifier can be obtained by requesting it from the target standby database server, or by querying the server address identifier list corresponding to the target standby database.
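  • As a minimal illustration of Step 101, the sketch below assumes a hypothetical in-memory registry that maps each standby database name to its server address identifier; the names and values are illustrative and not taken from the embodiment.

```python
# Hypothetical registry mapping standby database names to server address identifiers.
SERVER_ADDRESS_REGISTRY = {
    "standby_db_01": "10.0.1.11:3306",  # illustrative values
    "standby_db_02": "10.0.1.12:3306",
}

def get_server_address(standby_db_name: str) -> str:
    """Look up the server address identifier for a faulty target standby database."""
    try:
        return SERVER_ADDRESS_REGISTRY[standby_db_name]
    except KeyError:
        raise LookupError(f"no server address registered for {standby_db_name}")
```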
  • Step 201 obtaining the first target data carrying the data identifier that the primary database is going to transmit to the target standby database, and generating the first code according to the first target data;
  • the first target data may be understood as specified data or data segment, and may also be understood as a specified program function segment or the like.
  • the data identifier can be understood as an identifier uniquely corresponding to the specified first target data, such as the address, serial number, or name of exclusive data or function of the first target data.
  • The first code can be understood as the unique encoded data corresponding to the first target data, generated by encrypting, transforming, or mapping the first target data; the first target data can also be recovered from the first code through the reverse operation.
  • The first target data carrying the data identifier that the primary database is about to transmit to the target standby database is collected in real time or at a specified period, and the first target data is then used to generate the corresponding first code according to the specified processing rule.
  • Step 202 obtaining the second target data from the target standby database according to the data identifier, and generating the second code according to the second target data;
  • The data identifier of the first target data is parsed from the first target data or the first code, the second target data corresponding to that data identifier is requested from the target standby database, and the second target data is then used to generate the corresponding second code according to the specified processing rule.
  • the processing rule used for generating the second code may be the same as or different from the processing rule used for generating the first code.
  • Step 203 calculating the first code and the second code according to a preset algorithm; if the calculation result is the preset first identifier, determining that the target standby database failure is an application failure, and if the calculation result is the preset second identifier, determining that the failure is a server failure.
  • the preset algorithm can be understood as a neural network model trained in advance, the input data of the neural network model is the first code and the second code, and the output data is a type of calculation result that can determine the fault type of the target standby database.
  • an application failure can be understood as an operation failure of data, programs, or algorithmic processes stored in the server.
  • the preset algorithm may also be a digital logic algorithm such as exclusive OR or the like.
  • The first code and the second code are calculated according to the preset algorithm to obtain the calculation result, and the calculation result is matched against the preset first identifier and the preset second identifier. If the calculation result matches the first identifier, it is determined that the failure of the target standby database is an application failure; if the calculation result matches the second identifier, it is determined that the failure is a server failure.
  • The first identifier and the second identifier respectively indicate an application failure and a server failure of the target standby database. The specific content of the first identifier and the second identifier is related to the preset algorithm; for example, when the preset algorithm is a digital logic operation, the first identifier may be "001", the second identifier may be "010", and so on.
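  • The embodiment leaves the encoding rule and the preset algorithm open (a trained neural network, or a digital logic operation such as exclusive OR). The sketch below is one plausible reading that substitutes a hash for the unspecified reversible encoding and a direct comparison of the two codes; a missing copy on the standby is read as a server failure and a mismatch as an application failure. All names and identifier values are illustrative.

```python
import hashlib
from typing import Optional

APPLICATION_FAILURE = "001"  # illustrative value for the preset first identifier
SERVER_FAILURE = "010"       # illustrative value for the preset second identifier
NO_FAULT = "000"

def encode(data: bytes) -> str:
    """Generate a 'code' for a data segment; a hash stands in for the
    embodiment's unspecified encryption/deformation/mapping rule."""
    return hashlib.sha256(data).hexdigest()

def classify_fault(first_target_data: bytes,
                   second_target_data: Optional[bytes]) -> str:
    """Compare the code of the primary's data with the code of the standby's copy."""
    if second_target_data is None:
        return SERVER_FAILURE            # the standby could not return the data at all
    first_code = encode(first_target_data)
    second_code = encode(second_target_data)
    return NO_FAULT if first_code == second_code else APPLICATION_FAILURE
```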
  • The system may also send first test data to all target standby databases at a specified period, obtain the second test data returned by each database based on the first test data, and compare each second test data with the first test data to determine whether any database has failed and the type of failure.
  • Specifically, the system sends the first test data to each database at a preset period and then obtains, within a specified period of time, the second test data that each database feeds back for the first test data, where the correspondence between the two may be determined based on the timestamp or signature of the first test data. If a target standby database does not return the second test data within the specified time, it is determined to have a server failure; if the second test data it returns differs from the first test data, it is determined to have an application failure.
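  • A minimal sketch of this periodic probe, assuming each database object exposes hypothetical send_test() and receive_test() methods and that a reply can be matched to its probe by signature; the timeout value is illustrative.

```python
import time
import uuid

PROBE_TIMEOUT_SECONDS = 5.0  # illustrative window for receiving the second test data

def probe_standby_databases(databases):
    """Send the same first test data to every target standby database and
    classify each one by its reply; `databases` maps a name to an object with
    send_test(payload) and receive_test(signature, timeout) methods."""
    signature = uuid.uuid4().hex                     # lets replies be matched to this probe
    first_test_data = f"probe:{signature}:{time.time()}"
    faults = {}
    for db in databases.values():
        db.send_test(first_test_data)
    for name, db in databases.items():
        second_test_data = db.receive_test(signature, timeout=PROBE_TIMEOUT_SECONDS)
        if second_test_data is None:
            faults[name] = "server failure"          # no reply within the specified time
        elif second_test_data != first_test_data:
            faults[name] = "application failure"     # reply differs from what was sent
    return faults
```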
  • Step 102 determining, according to the server address identifier, the upstream node address identifier and the downstream node address identifier corresponding to the target standby database;
  • The sequence numbers of the node identifiers in the business chain can be stored in advance. After the server address identifier is obtained, the previous and next sequence numbers are determined from the sequence number of that server address identifier, and the corresponding upstream node address identifier and downstream node address identifier are determined from those two sequence numbers.
  • Alternatively, a network topology graph can be constructed among the nodes according to their business communication relationships, with topology connections between nodes established on the basis of those business relationships.
  • The nodes in the network topology graph may be represented in the form of node address identifiers, or by other information that uniquely identifies a node, such as a node code.
  • The preset network topology graph is queried to obtain the information uniquely identifying the upstream node and the downstream node corresponding to the faulty target standby database.
  • If that information is already in the form of node address identifiers, the corresponding upstream node address identifier and downstream node address identifier can be obtained directly.
  • Otherwise, the correspondence between that information and the node address identifiers is looked up to obtain the upstream node address identifier and the downstream node address identifier.
  • the upstream node address identifier and the downstream node address identifier in this embodiment may correspond to the physical address of the node and so on.
  • In this embodiment, each downstream node backs up the business data of its upstream node. Once the business data of the upstream node has been backed up to the downstream node, even if the upstream node fails, the downstream node can provide the relevant services in its place because it stores the upstream node's business data.
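  • A minimal sketch of Step 102 under the assumption that the business chain is stored in advance as an ordered list of node address identifiers (the sequence numbers mentioned above); the chain contents are illustrative.

```python
# Hypothetical business chain stored as an ordered list of node address identifiers.
BUSINESS_CHAIN = ["node-A", "node-B", "node-C", "node-D"]

def find_neighbors(server_address_id: str):
    """Return (upstream_id, downstream_id) for the node owning the faulty standby
    database; None on either side means the node is the first or last node of the link."""
    idx = BUSINESS_CHAIN.index(server_address_id)
    upstream = BUSINESS_CHAIN[idx - 1] if idx > 0 else None
    downstream = BUSINESS_CHAIN[idx + 1] if idx < len(BUSINESS_CHAIN) - 1 else None
    return upstream, downstream
```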
  • Step 103 detecting whether the corresponding upstream service database is normal according to the upstream node address identifier, and detecting whether the corresponding downstream service database is normal according to the downstream node address identifier;
  • The upstream node address identifier and the downstream node address identifier in this embodiment may correspond to the physical addresses of the nodes. Therefore, in this embodiment, whether the corresponding upstream service database is normal is detected according to the upstream node address identifier, and whether the corresponding downstream service database is normal is detected according to the downstream node address identifier. It is easy to understand that the failure of the current target standby database may be its own failure or may be caused by a failure of an upstream or downstream node; the upstream and downstream service databases therefore need to be checked in order to locate whether only the standby database itself is faulty.
  • the monitoring page can be understood as the front-end representation of the monitoring program.
  • Step 301 Acquire an upstream server corresponding to an address identifier of an upstream node, and acquire a downstream server corresponding to an address identifier of a downstream node;
  • Step 302 query the preset service link topology, obtain the upstream service database corresponding to the backup service database from the upstream server, and obtain the downstream service database corresponding to the backup service database from the downstream server;
  • Specifically, the upstream server corresponding to the upstream node address identifier and the downstream server corresponding to the downstream node address identifier are acquired. Since the services supported by a server need to interact with the services of its database, the preset service link topology is queried: the upstream service database corresponding to the faulty standby database is obtained from the upstream server, and the downstream service database corresponding to the faulty standby database is obtained from the downstream server.
  • Step 303 Detect the running state of the upstream service database according to the preset first monitoring page, and detect the running state of the downstream service database according to the preset second monitoring page.
  • the front-end display of the monitoring page may include multiple display modules for displaying the running status of the upstream business database and the downstream business database, wherein each module is used to display a different running status.
  • the data of the running status of each database in the system can be displayed on this page, including the target standby database that has failed.
  • A monitoring program for monitoring the upstream service database is preset, and a first monitoring page corresponding to that detection program is set. The detection program corresponding to the first monitoring page monitors the different functions of the upstream business database, and the first monitoring page is used to display whether each running state detected by that program is normal, and the like.
  • Similarly, according to the second monitoring page corresponding to the downstream business database, it is detected whether the running state of the downstream business database is normal. It is understandable that if the operating status of both the upstream and downstream business databases is normal, the fault of the current standby database is mainly its own fault, and the upstream and downstream business databases need no repair, only continued checking; if the running status of either business database is abnormal, the administrator can be reminded to intervene, for example by sending a short message, sounding an alarm, or making the monitoring page flash.
  • When the target standby database of the current node fails, the corresponding server sends a fault alarm to the upstream server and the downstream server, and the upstream server and the downstream server then begin to actively monitor the running status of their respective business databases. If a business database runs abnormally, the administrator can be reminded to intervene, for example by sending a short message, sounding an alarm, or making the monitoring page flash; if the business databases run normally, monitoring is maintained until the server corresponding to the target standby database of the current node sends the upstream server and the downstream server a message that it has returned to normal operation, at which point monitoring stops.
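  • A minimal sketch of the check in Step 103, assuming hypothetical upstream_db and downstream_db objects that expose an is_running() health check backing the monitoring pages, and a notify_admin callable standing in for the short-message or alarm hook.

```python
def check_link_neighbors(upstream_db, downstream_db, notify_admin):
    """Check both neighbors of the faulty standby database and alert the
    administrator if either business database is abnormal."""
    upstream_ok = upstream_db.is_running()
    downstream_ok = downstream_db.is_running()
    if upstream_ok and downstream_ok:
        return True                  # the fault is confined to the standby database itself
    if not upstream_ok:
        notify_admin("upstream service database abnormal")
    if not downstream_ok:
        notify_admin("downstream service database abnormal")
    return False
```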
  • Step 104 if both the upstream service database and the downstream service database are normal, perform a link recovery configuration operation on the upstream service database and the downstream service database.
  • the target standby database data is recovered using the primary database data.
  • Step 401 obtaining the failure time period of the target standby database
  • the server and the database will generate an operation log when any step of the operation is completed, and the operation log will record the operation time, operation object, and operation method of any step of the operation.
  • the failure time period of the target standby database can be understood as the time period after the failure of the target standby database is found through the operation log, or the time period when it is detected that the information received, processed, and sent by the target standby database does not meet the format requirements.
  • the time period during which the target standby database fails is obtained by retrieving the content of the operation log.
  • Step 402 sending a secondary synchronization instruction carrying the fault time period to the primary database corresponding to the target standby database;
  • Step 403 Acquire information corresponding to the failure time period sent by the primary database, and perform data synchronization restoration on the target standby database according to the information.
  • The secondary synchronization instruction can be understood as a type of instruction sent by the target standby database to the primary database. After this instruction is executed, the corresponding data is retrieved according to the failure time period and the data identifier, and sent to the target standby database corresponding to the standby database address identifier carried in the secondary synchronization instruction.
  • Specifically, the target standby database sends a secondary synchronization instruction carrying information such as the failure time period to the corresponding primary database; the primary database, according to the information carried in the instruction, determines the data that needs to be delivered to the target standby database and delivers it. After receiving the delivered data, the target standby database repairs the corresponding data that needs to be repaired.
  • the above embodiment is based on the premise that the primary database corresponding to the target standby database does not fail.
  • In addition, the data logs of the upstream business database, the downstream business database, and the target standby database can also be used to restore the target standby database. For example, if the data sent to the target standby database within the failure time period can be found in the data log of the upstream business database, that data can be resent to the target standby database.
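  • A minimal sketch of Steps 401-403 under the premise that the primary database is healthy; `primary`, `standby`, and `operation_log` are hypothetical objects (the log yields the failure time period, the primary answers a secondary synchronization instruction with the records written during that period, and the standby re-applies them).

```python
from dataclasses import dataclass

@dataclass
class SecondarySyncInstruction:
    standby_address_id: str
    fault_start: float   # epoch seconds; illustrative representation of the failure period
    fault_end: float

def restore_from_primary(primary, standby, operation_log):
    """Fetch the data written during the failure time period from the primary
    database and re-apply it to the target standby database."""
    fault_start, fault_end = operation_log.fault_period(standby.address_id)   # Step 401
    instruction = SecondarySyncInstruction(standby.address_id, fault_start, fault_end)
    missing_records = primary.handle_secondary_sync(instruction)              # Step 402
    for record in missing_records:                                            # Step 403
        standby.apply(record)   # data synchronization restoration
```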
  • The data of the failed target standby database can also be repaired by using the business data of the upstream business database that is working normally.
  • Step 501 obtaining upstream business data of an upstream business database
  • The target standby database performs backup processing according to the business data obtained from the upstream business database. Therefore, in order to determine whether the server corresponding to the target standby database successfully receives data from the upstream business database, the upstream business data of the upstream business database is obtained, where the upstream business data includes the data sent from the upstream business database to the node corresponding to the target standby database.
  • Step 502 compare the upstream service data with the service data of the target standby database
  • Normally, the data sent by the upstream business database to the current node is present in the backup data. Therefore, whether the service link between the upstream node and the node corresponding to the target standby database is normal can be determined by comparing the upstream business data with the business data of the target standby database.
  • Step 503 if the comparison results are inconsistent, clear the downstream service data, and copy the upstream service data.
  • Step 504 connect the upstream service database and the downstream service database according to the upstream node address identifier and the downstream node address identifier.
  • If the comparison result is inconsistent, it indicates that the backup link between the upstream node and the node corresponding to the target standby database is abnormal, and this abnormality will inevitably affect the data that the node corresponding to the target standby database sends to the downstream service database. Therefore, when the comparison results are inconsistent, the target standby database and the downstream service database are backed up again according to the upstream service data.
  • Specifically, the upstream service data is re-acquired. Since the corresponding upstream service data needs to be sent to the target standby database for backup, the upstream service data is copied and the downstream service data is cleared at this point; because each downstream node backs up the data of its upstream node, after the downstream service data is cleared the downstream node backs up the data of the upstream node again, realizing link recovery.
  • Finally, the upstream service database and the downstream service database are connected according to the upstream node address identifier and the downstream node address identifier, which triggers the upstream service data to be resent from the upstream node to the corresponding downstream node, thereby restoring the data backup link. In this way, even if the target standby database in the middle fails, data backup can be completed quickly.
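  • A minimal sketch of Steps 501-504 under the assumption that each database object exposes hypothetical business_data(), clear(), load(), and connect_to() operations; it restores the backup link when the standby's copy of the upstream business data has diverged.

```python
def restore_link_from_upstream(upstream_db, standby_db, downstream_db):
    """Rebuild the backup link when the standby's data no longer matches the
    upstream business data."""
    upstream_data = upstream_db.business_data()        # Step 501: obtain upstream business data
    if upstream_data == standby_db.business_data():    # Step 502: compare with the standby's data
        return False                                   # link already consistent
    downstream_db.clear()                              # Step 503: clear the downstream service data
    standby_db.load(list(upstream_data))               #           and copy the upstream service data
    upstream_db.connect_to(standby_db)                 # Step 504: reconnect upstream and downstream
    standby_db.connect_to(downstream_db)               #           so backup resumes along the link
    return True
```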
  • The above applies when the server address identifier has both a corresponding upstream node address identifier and a corresponding downstream node address identifier, that is, when the target standby database corresponding to the server address identifier is neither the first node nor the last node in the link but an intermediate node.
  • The database fault processing method of the embodiment of the present disclosure can connect the upstream service database and the downstream service database according to the upstream node address identifier and the downstream node address identifier, so that even when an intermediate node fails, data can still be backed up from the upstream node to the downstream node.
  • When the server address identifier only has a corresponding upstream node address identifier and no downstream node address identifier, that is, when the faulty node is the last node of the service link, the other upstream nodes have already backed up the relevant data, so the relevant services of the last node can be transferred to any of the other upstream nodes for execution.
  • When the server address identifier only has a corresponding downstream node address identifier and no upstream node address identifier, that is, when the faulty node is the first node of the service link, the other downstream nodes have already backed up the relevant data, so the relevant services of the first node can be transferred to any of the other downstream nodes for execution.
  • With the database fault processing method of the embodiment of the present application, when it is detected that the target standby database is faulty, the server address identifier corresponding to the target standby database is obtained; then the upstream node address identifier and the downstream node address identifier corresponding to the target standby database are determined according to the server address identifier; finally, it is detected whether the upstream service database corresponding to the upstream node address identifier is normal and whether the downstream service database corresponding to the downstream node address identifier is normal, and if both are normal, the link recovery configuration operation is performed on the upstream service database and the downstream service database.
  • In this way, when the target standby database fails, the link is automatically restored according to the connection between the upstream service database and the downstream service database, ensuring that the entire link remains normal and avoiding failure of backing up data to the downstream due to interruption of the intermediate standby database.
  • the present application also proposes a database fault processing apparatus.
  • FIG. 6 is a schematic structural diagram of a database fault processing apparatus provided by an embodiment of the present application.
  • the database fault processing apparatus includes: an acquisition module 601 , a determination module 602 , a detection module 603 , and a repair module 604 .
  • the obtaining module 601 is used for detecting the failure of the target standby database, and obtaining the server address identifier corresponding to the target standby database;
  • a determination module 602 configured to determine the upstream node address identifier and the downstream node address identifier corresponding to the target standby database according to the server address identifier;
  • a detection module 603 configured to detect whether the upstream service database corresponding to the upstream node address identifier is normal, and to detect whether the downstream service database corresponding to the downstream node address identifier is normal;
  • the repair module 604 is configured to perform a link recovery configuration operation on the upstream service database and the downstream service database when both the upstream service database and the downstream service database are normal.
  • the obtaining module 601 is specifically used for:
  • If the calculation result is the preset first identifier, it is determined that the failure of the target standby database is an application failure; if the calculation result is the preset second identifier, it is determined that the failure of the target standby database is a server failure.
  • the detection module 603 is specifically used for:
  • the running state of the downstream service database is detected according to the preset second monitoring page.
  • the repair module 604 is specifically used for:
  • the upstream service database and the downstream service database are connected according to the upstream node address identification and the downstream node address identification.
  • With the database fault processing apparatus of the embodiment of the present application, when it is detected that the target standby database is faulty, the server address identifier corresponding to the target standby database is obtained; then the upstream node address identifier and the downstream node address identifier corresponding to the target standby database are determined according to the server address identifier; finally, it is detected whether the upstream service database corresponding to the upstream node address identifier is normal and whether the downstream service database corresponding to the downstream node address identifier is normal, and if both are normal, the link recovery configuration operation is performed on the upstream service database and the downstream service database.
  • In this way, when the target standby database fails, the link is automatically restored according to the connection between the upstream service database and the downstream service database, ensuring that the entire link remains normal and avoiding failure of backing up data to the downstream due to interruption of the intermediate standby database.
  • the present application further provides a computer device, including: a processor, and a memory for storing instructions executable by the processor.
  • the processor is configured to implement the above-mentioned database fault handling method.
  • the present application also proposes a non-transitory computer-readable storage medium, when the instructions in the storage medium are executed by the computer device processor, the computer device can execute a database fault processing method.
  • first and second are only used for descriptive purposes, and should not be construed as indicating or implying relative importance or implying the number of indicated technical features. Thus, a feature delimited with “first”, “second” may expressly or implicitly include at least one of that feature.
  • plurality means at least two, such as two, three, etc., unless expressly and specifically defined otherwise.
  • Unless otherwise expressly specified and limited, the terms "installed", "connected", "coupled", "fixed" and the like should be understood in a broad sense; for example, the connection may be a fixed connection, a detachable connection, or an integral connection; it may be a mechanical connection or an electrical connection; and it may be a direct connection, an indirect connection through an intermediate medium, an internal connection between two elements, or an interaction relationship between two elements.
  • a first feature "on” or “under” a second feature may be in direct contact with the first and second features, or the first and second features indirectly through an intermediary touch.
  • a first feature is “above”, “above” and “above” a second feature but the first feature is directly above or obliquely above the second feature, or simply means that the first feature is level higher than the second feature.
  • the first feature being “below”, “below” and “below” the second feature may mean that the first feature is directly below or obliquely below the second feature, or simply means that the first feature has a lower level than the second feature.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Quality & Reliability (AREA)
  • Data Mining & Analysis (AREA)
  • Hardware Redundancy (AREA)

Abstract

The invention concerns a database fault handling method and apparatus. The database fault handling method comprises the steps of: when it is detected that a fault has occurred in a target standby database, acquiring a server address identifier corresponding to the target standby database (101); according to the server address identifier, determining an upstream node address identifier and a downstream node address identifier corresponding to the target standby database (102); detecting whether an upstream service database corresponding to the upstream node address identifier and a downstream service database corresponding to the downstream node address identifier are normal (103); and, if both the upstream service database and the downstream service database are normal, performing a link recovery configuration operation on the upstream service database and the downstream service database (104). In this way, when a fault occurs in a target standby database, a link is automatically recovered according to the communication between an upstream service database and a downstream service database, which ensures the normality of the entire link by avoiding the failure of downstream data-backup transmission caused by the interruption of an intermediate standby database.
PCT/CN2021/113235 2020-10-27 2021-08-18 Procédé et appareil de gestion d'anomalies de bases de données WO2022088861A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011167074.6 2020-10-27
CN202011167074.6A CN114490565A (zh) 2020-10-27 2020-10-27 数据库故障处理方法和装置

Publications (1)

Publication Number Publication Date
WO2022088861A1 true WO2022088861A1 (fr) 2022-05-05

Family

ID=81381826

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/113235 WO2022088861A1 (fr) 2020-10-27 2021-08-18 Procédé et appareil de gestion d'anomalies de bases de données

Country Status (2)

Country Link
CN (1) CN114490565A (fr)
WO (1) WO2022088861A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115514625A (zh) * 2022-09-23 2022-12-23 深信服科技股份有限公司 数据库集群管理方法、装置及系统

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115001952B (zh) * 2022-05-25 2023-09-19 中移互联网有限公司 一种业务接口的故障定位方法及装置
CN116418600B (zh) * 2023-06-09 2023-08-15 安徽华云安科技有限公司 节点安全运维方法、装置、设备以及存储介质

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2000076126A2 (fr) * 1999-06-07 2000-12-14 Nortel Networks Limited Systeme et procede d'evitement de boucle dans une commutation par etiquette multiprotocole
CN1933423A (zh) * 2006-09-08 2007-03-21 华为技术有限公司 一种光网络lsp发生异常删除的恢复方法和装置
CN101192986A (zh) * 2006-11-28 2008-06-04 中兴通讯股份有限公司 一种自动交换光网络组播业务组播树的恢复方法
CN101945035A (zh) * 2009-07-10 2011-01-12 中兴通讯股份有限公司 基于路径计算元的跨域路径恢复方法和装置
CN105335245A (zh) * 2014-07-31 2016-02-17 华为技术有限公司 故障存储方法和装置、故障查找方法和装置
CN108897806A (zh) * 2018-06-15 2018-11-27 东软集团股份有限公司 数据一致性比对方法、装置、存储介质及电子设备

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2000076126A2 (fr) * 1999-06-07 2000-12-14 Nortel Networks Limited Systeme et procede d'evitement de boucle dans une commutation par etiquette multiprotocole
CN1933423A (zh) * 2006-09-08 2007-03-21 华为技术有限公司 一种光网络lsp发生异常删除的恢复方法和装置
CN101192986A (zh) * 2006-11-28 2008-06-04 中兴通讯股份有限公司 一种自动交换光网络组播业务组播树的恢复方法
CN101945035A (zh) * 2009-07-10 2011-01-12 中兴通讯股份有限公司 基于路径计算元的跨域路径恢复方法和装置
CN105335245A (zh) * 2014-07-31 2016-02-17 华为技术有限公司 故障存储方法和装置、故障查找方法和装置
CN108897806A (zh) * 2018-06-15 2018-11-27 东软集团股份有限公司 数据一致性比对方法、装置、存储介质及电子设备

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115514625A (zh) * 2022-09-23 2022-12-23 深信服科技股份有限公司 数据库集群管理方法、装置及系统

Also Published As

Publication number Publication date
CN114490565A (zh) 2022-05-13

Similar Documents

Publication Publication Date Title
WO2022088861A1 (fr) Procédé et appareil de gestion d'anomalies de bases de données
CN107291787B (zh) 主备数据库切换方法和装置
CN105933407B (zh) 一种实现Redis集群高可用的方法及系统
WO2016173179A1 (fr) Procédé et dispositif pour commutation de base de données primaire et de base de données secondaire
WO2021027481A1 (fr) Procédé de traitement de défaillance, appareil, dispositif informatique, support de stockage et système de stockage
WO2021136422A1 (fr) Procédé de gestion d'état, procédé de commutation de serveur d'application maître et de sauvegarde et dispositif électronique
US9164864B1 (en) Minimizing false negative and duplicate health monitoring alerts in a dual master shared nothing database appliance
CN110532278B (zh) 声明式的MySQL数据库系统高可用方法
CN113360579A (zh) 数据库高可用处理方法、装置、电子设备及存储介质
CN108243031B (zh) 一种双机热备的实现方法及装置
US10860411B2 (en) Automatically detecting time-of-fault bugs in cloud systems
CN109189854B (zh) 提供持续业务的方法及节点设备
CN113986450A (zh) 一种虚拟机备份方法及装置
CN112069018B (zh) 一种数据库高可用方法及系统
JP2006185108A (ja) ストレージシステムのデータを管理する管理計算機及びデータ管理方法
CN116185697B (zh) 容器集群管理方法、装置、系统、电子设备及存储介质
CN112948484A (zh) 分布式数据库系统和数据灾备演练方法
CN113596195B (zh) 公共ip地址管理方法、装置、主节点及存储介质
CN115686368A (zh) 区块链网络的节点的存储扩容的方法、系统、装置和介质
CN114328033A (zh) 保持高可用设备组业务配置一致性的方法及装置
CN110569303B (zh) 一种适用于多种云环境的MySQL应用层高可用系统及方法
JPH07183891A (ja) 計算機システム
US8713359B1 (en) Autonomous primary-mirror synchronized reset
JP3335779B2 (ja) プラント性能監視システム
CN117608919A (zh) 容灾方法、装置、计算机设备和存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21884593

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21884593

Country of ref document: EP

Kind code of ref document: A1