CN117873790A - Database cross-resource-pool disaster recovery system based on ELB - Google Patents

Info

Publication number
CN117873790A
Authority
CN
China
Prior art keywords
database
elb
dbproxy
cluster
disaster recovery
Prior art date
Legal status
Pending
Application number
CN202311648663.XA
Other languages
Chinese (zh)
Inventor
赵梦月
魏兴国
曾祥洲
袁艺文
朱碧青
尹志华
叶小朋
范郑乐
吕崇新
Current Assignee
Tianyi Cloud Technology Co Ltd
Original Assignee
Tianyi Cloud Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Tianyi Cloud Technology Co Ltd filed Critical Tianyi Cloud Technology Co Ltd
Priority to CN202311648663.XA priority Critical patent/CN117873790A/en
Publication of CN117873790A publication Critical patent/CN117873790A/en
Pending legal-status Critical Current

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention discloses an ELB-based cross-resource-pool database disaster recovery system, which comprises an ELB module, a DBproxy cluster, a database cluster, an agent module and a zookeeper module. A cross-resource-pool database middleware cluster is responsible for horizontally scaling the database: the database is extended from a single database to multiple databases, and the middleware routes each data access request to one of them through a routing rule. The system provides and supports a multi-AZ network architecture, solving the service unavailability caused by a single AZ, and provides a database switching selection principle realized by database weight proportioning. Underlying database switching is realized through the unified scheduling of a zookeeper module deployed across resource pools, so that when the local system encounters a disaster it can switch in real time to a remote standby center, ensuring uninterrupted service operation, avoiding data loss and improving data security.

Description

Database cross-resource-pool disaster recovery system based on ELB
Technical Field
The invention relates to the technical field of computers, and in particular to an ELB-based cross-resource-pool database disaster recovery system.
Background
After the resource pool fault occurs, most of the existing disaster recovery schemes are as follows:
1. Database cold backup: the database is backed up once a day and the backup is saved on tape or optical disc;
2. Dual-machine local hot standby: a shared disk array uses RAID redundancy, i.e. one piece of data is stored multiple times on different disks in the array, ensuring that a failed disk does not affect data reads and writes.
These schemes are simple, easily affect the normal operation of services on the public cloud, and data is easily lost. To ensure that the local system switches in real time to a remote standby center when it encounters a fault, so that services run uninterrupted, an ELB-based cross-resource-pool database disaster recovery system is provided.
Disclosure of Invention
The invention aims to provide an ELB-based database disaster recovery system across resource pools, so as to solve the problems in the background technology.
In order to achieve the above purpose, the present invention provides the following technical solutions: an ELB-based database disaster recovery system across resource pools, comprising:
the ELB module is used for providing load balancing service and is also responsible for health detection of the database clusters across resource pools;
a DBproxy cluster, wherein the DBproxy cluster is responsible for horizontally expanding a database;
the database cluster is responsible for carrying service traffic and synchronizing service traffic data in the cluster;
the agent module is responsible for monitoring the state of the database cluster;
and the zookeeper module is responsible for task scheduling across the resource pool by monitoring node information.
Preferably, the ELB module includes:
the load balancer receives incoming traffic from a client;
the listener is used for checking connection requests from the client and forwarding the requests;
and the back-end server, which receives requests from the listener and can undergo health checks.
Preferably, in the DBProxy cluster, the DBProxy database middleware is independently deployed across 3 AZs, can read and write data on zookeeper, and the multi-AZ database instances are connected through a VIP.
Preferably, the database clusters support same-city cross-AZ deployment and cross-AZ VIP drifting: when one AZ fails, the VIP drifts to a database in another AZ so that normal service continues. By proportioning different weight values in the one-master-two-slave architecture of the database, when the Master on AZ2 fails, the system switches to AZ1 according to the weight values.
Preferably, the agent module monitors state information in the database, and reports the state information to the AZ cluster in real time when the database node is unavailable.
Preferably, when the zookeeper cluster receives abnormal-node information reported by the agent, a new owner is elected according to the scheduling principle; even when the AZ1 resource pool fails, a zookeeper on AZ2 or AZ3 is elected as the new owner according to the availability mechanism, so the judgment of the scheduling program is unaffected.
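The owner election described above can be sketched as follows. This is an illustrative simulation, not the patent's implementation: it mimics ZooKeeper's convention that the surviving node holding the lowest ephemeral sequence number becomes the new owner; the AZ names and sequence numbers are assumptions.

```python
# Illustrative sketch only: ZooKeeper-style election where the surviving
# node with the lowest sequence number becomes the new owner.
def elect_owner(sequence_numbers, failed_azs):
    """sequence_numbers: AZ name -> ephemeral sequence number.
    failed_azs: set of AZ names whose nodes are unreachable."""
    survivors = {az: seq for az, seq in sequence_numbers.items()
                 if az not in failed_azs}
    if not survivors:
        raise RuntimeError("quorum lost: no surviving zookeeper node")
    # Lowest sequence number wins, mirroring ZooKeeper's ephemeral
    # sequential znode convention.
    return min(survivors, key=survivors.get)

nodes = {"AZ1": 1, "AZ2": 2, "AZ3": 3}
print(elect_owner(nodes, set()))     # AZ1 under normal conditions
print(elect_owner(nodes, {"AZ1"}))   # AZ1 resource pool fails -> AZ2
```

Because the surviving zookeepers on AZ2 or AZ3 can always produce an owner, the scheduling program's judgment is unaffected by the loss of one AZ.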
Preferably, 3AZ refers to deploying the application and its data in three availability zones respectively in a cloud computing environment, and the 3AZ framework comprises the following:
a decentralized architecture that makes each availability zone an independent data center and deploys a different copy of the application in each AZ;
a network design that ensures the stability of network connections among the multiple availability zones;
a data storage design that stores data across the multiple availability zones;
a fault-tolerant design that enables the system to automatically detect and handle availability zone faults.
Preferably, the ELB module serves as the unified entry for the dbproxy instances in each AZ resource pool and performs IP convergence; the user's service traffic request is sent to one of the dbproxy instances through load balancing.
Preferably, the dbproxy provides the following functions:
connection management: the dbproxy establishes connections with clients and maintains a group of connections with the database servers;
request routing: the dbproxy routes requests to the database according to service requirements;
read/write separation: the dbproxy forwards a request to multiple database servers and merges the results before returning them to the client;
load balancing: the dbproxy selects a lightly loaded database server to process a request according to the load of the database servers;
and fault recovery: the dbproxy can automatically switch to a normal database server after a fault and continue to use it.
Preferably, the DBProxy cluster extends the database from a single database to multiple databases, and the database middleware routes each data access request to one of the databases through a routing rule.
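A routing rule of the kind described above could, for example, map each request to exactly one database by a stable hash of its sharding key. This is a hypothetical illustration; the database names and the use of CRC32 are assumptions of this sketch, not taken from the patent.

```python
# Hypothetical illustration of a middleware routing rule: each data
# access request is mapped to exactly one of several databases by a
# stable hash of its sharding key. Names are invented here.
import zlib

DATABASES = ["db0", "db1", "db2"]

def route(shard_key: str) -> str:
    """Route a request to one database via a stable CRC32 hash."""
    return DATABASES[zlib.crc32(shard_key.encode("utf-8")) % len(DATABASES)]
```

The same key always routes to the same database, which is what lets the middleware spread data over multiple databases while keeping each row's location deterministic.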
The invention has the technical effects and advantages that:
Through the ELB-based cross-resource-pool database disaster recovery system, a cross-resource-pool database middleware cluster is responsible for horizontally scaling the database: the database is extended from a single database to multiple databases, and the middleware routes each data access request to one of them through a routing rule. The system provides and supports a multi-AZ network architecture, solving the service unavailability caused by a single AZ, and provides a database switching selection principle realized by database weight proportioning. Underlying database switching is realized through the unified scheduling of a zookeeper module deployed across resource pools, so that when the local system encounters a disaster it can switch in real time to a remote standby center, ensuring uninterrupted service operation, avoiding data loss and improving data security.
Drawings
FIG. 1 is a module diagram of the ELB-based database disaster recovery system.
FIG. 2 is an implementation diagram of the ELB-based database disaster recovery system.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
As shown in FIGS. 1-2, the invention provides an ELB-based cross-resource-pool database disaster recovery system. Embodiment one comprises:
the ELB module, which provides the load balancing service of the central node and is also responsible for health detection of the database clusters across resource pools, and comprises:
a load balancer, which accepts incoming traffic from clients and can forward requests to back-end servers in one or more availability zones;
listeners, which check connection requests from clients and forward them; one or more listeners can be added to the ELB module, and a checked request is forwarded to a back-end server according to the defined allocation strategy;
back-end servers, which receive requests from the listeners and can undergo health checks. Back-end servers are bound to listeners one by one, and a listener forwards requests to one or more back-end servers using the defined protocol and port. The health-check function can be enabled to check the running condition configured for each back-end server: when an abnormal back-end server is found, the ELB module automatically distributes new requests to the other healthy back-end servers, and when the abnormal back-end server returns to normal operation, the ELB module automatically returns it to service;
The ELB module serves as the unified entry for the dbproxy instances in each AZ resource pool and performs IP convergence; the user's service traffic request is sent to one of the dbproxy instances through load balancing.
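The ELB behaviour described above can be sketched as follows. This is an assumed minimal model, not the patent's code: one unified entry spreads requests round-robin over the currently healthy dbproxy instances, and backends are taken out of and returned to rotation by health checks. The backend names are invented.

```python
# Minimal sketch of the ELB entry: round-robin over healthy backends only.
import itertools

class Elb:
    def __init__(self, backends):
        self.backends = list(backends)        # dbproxy addresses
        self.healthy = set(self.backends)     # maintained by health checks
        self._rr = itertools.cycle(self.backends)

    def mark_down(self, backend):
        self.healthy.discard(backend)         # failed a health check

    def mark_up(self, backend):
        self.healthy.add(backend)             # returned to normal operation

    def forward(self, request):
        # New requests are only ever handed to a healthy backend.
        for _ in range(len(self.backends)):
            backend = next(self._rr)
            if backend in self.healthy:
                return backend
        raise RuntimeError("no healthy dbproxy backend")

elb = Elb(["dbproxy-az1", "dbproxy-az2", "dbproxy-az3"])
elb.mark_down("dbproxy-az1")
print(elb.forward("SELECT 1"))   # never routed to dbproxy-az1
```

This mirrors the health-check behaviour above: an abnormal backend stops receiving new requests, and is automatically reused once it recovers.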
The dbproxy provides the following functions:
connection management: the dbproxy establishes connections with clients and maintains a group of connections with the database servers; when a client sends a database request, the dbproxy selects an appropriate database server and forwards the request to it;
request routing: the dbproxy routes requests to the database according to service requirements; the database server a request is forwarded to can be determined by the table name, primary key value or other conditions of the request;
read/write separation: the dbproxy forwards a request to multiple database servers and merges the results before returning them to the client; read/write separation improves the read performance of the database while making full use of the computing capacity of multiple database servers;
load balancing: the dbproxy selects a lightly loaded database server to process a request according to the load of the database servers, which helps balance the load across database servers and avoids the performance degradation caused by overloading a single server;
fault recovery: the dbproxy can automatically switch to a normal database server, and can synchronize the data recovered on the failed server to the other servers, so that data consistency is achieved.
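The read/write separation function above can be illustrated with a small sketch. This is not the patented middleware: writes go to the master while reads rotate over the slave replicas, and the server names and the crude statement classification are assumptions of the sketch.

```python
# Illustrative read/write separation: writes to the master, reads
# rotated over slave replicas. Names and classification are assumed.
import itertools

class DbProxy:
    def __init__(self, master, slaves):
        self.master = master
        self.slaves = list(slaves)
        self._reads = itertools.cycle(self.slaves)

    def dispatch(self, sql: str) -> str:
        # Crude statement classification, sufficient for the sketch.
        if sql.lstrip().upper().startswith("SELECT"):
            return next(self._reads)          # read -> next slave replica
        return self.master                    # write -> the master

proxy = DbProxy("db-az1", ["db-az2", "db-az3"])
print(proxy.dispatch("SELECT * FROM t"))            # a slave replica
print(proxy.dispatch("INSERT INTO t VALUES (1)"))   # db-az1
```

Rotating reads over the replicas is what lets the proxy exploit the computing capacity of several database servers at once, as the list above describes.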
The DBproxy cluster is responsible for horizontally scaling the database: a single database is extended to multiple databases within the DBproxy cluster, and the database middleware routes each data access request to one of the databases through a routing rule. In the DBproxy cluster, the DBproxy database middleware is independently deployed across 3 AZs, can read and write data on zookeeper, and the multi-AZ database instances are connected through a VIP. 3AZ means that, in a cloud computing environment, the application and its data are deployed in three availability zones respectively, and the 3AZ framework comprises the following:
a decentralized architecture, which makes each availability zone an independent data center and deploys a different copy of the application in each AZ, each AZ having an independent power supply system and network;
a network design, which ensures the stability of network connections among the multiple availability zones so that traffic can be switched quickly to the standby availability zone when needed; the network connection is made over a virtual private network;
a data storage design, which stores data across the multiple availability zones so that replication and backup of the data are ensured; the data is stored through a distributed file system;
a fault-tolerant design, which enables the system to automatically detect and handle availability zone faults: when one availability zone fails, the system automatically forwards requests to a standby availability zone, using a load balancer to realize request forwarding and load balancing;
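The fault-tolerant design can be reduced to a small sketch: probe each availability zone and re-point requests at the first standby AZ that still answers. The probe function and AZ names here are assumptions of this illustration, not the patent's mechanism.

```python
# Hedged sketch of AZ fault tolerance: serve from the first AZ, in
# preference order, whose liveness probe still succeeds.
def pick_serving_az(azs, is_alive):
    """azs: preference-ordered AZ list; is_alive: AZ -> bool probe."""
    for az in azs:
        if is_alive(az):
            return az
    raise RuntimeError("all availability zones are down")

alive = {"AZ1": False, "AZ2": True, "AZ3": True}
print(pick_serving_az(["AZ1", "AZ2", "AZ3"], alive.__getitem__))  # AZ2
```

In the real design the "probe" would be the ELB health check, and the re-pointing would be done by the load balancer rather than by the caller.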
The database cluster is responsible for carrying service traffic and synchronizing the service traffic data within the cluster. The database clusters support same-city cross-AZ deployment and cross-AZ VIP drifting: when one AZ fails, the VIP drifts to a database in another AZ and normal service continues. By proportioning different weight values, when the Master on AZ2 fails, the system switches to AZ1 according to the weight values;
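The weight-proportioning switch principle can be sketched as follows. The weight values here are illustrative, not taken from the patent: when the current master's AZ fails, the surviving replica with the highest configured weight is promoted.

```python
# Sketch of weight-proportioned master switching; weights are assumed.
WEIGHTS = {"AZ1": 100, "AZ2": 200, "AZ3": 50}   # illustrative proportions

def promote(current_master, failed_azs):
    """Pick the surviving replica with the highest configured weight."""
    candidates = {az: w for az, w in WEIGHTS.items()
                  if az != current_master and az not in failed_azs}
    if not candidates:
        raise RuntimeError("no replica available for promotion")
    return max(candidates, key=candidates.get)

# Master on AZ2 fails: the switch follows the weights to AZ1.
print(promote("AZ2", {"AZ2"}))   # AZ1
```

With AZ1 weighted above AZ3, the failure of the AZ2 master deterministically promotes AZ1, matching the switching behaviour described above.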
The agent module is responsible for monitoring the state of the database cluster: it monitors the state information within the database and reports it to the AZ cluster in real time when a database node becomes unavailable;
The zookeeper module is responsible for task scheduling across resource pools by monitoring node information: when the zookeeper cluster receives abnormal-node information reported by the agent, a new owner is elected according to the scheduling principle, and even when the AZ1 resource pool fails, a zookeeper on AZ2 or AZ3 can be elected as the new owner by the zookeeper module's availability mechanism, so the judgment of the scheduling program is unaffected.
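The flow tying the agent's reports to the zookeeper module's scheduling can be sketched as follows. This is an illustrative simulation, not the patented code: when an agent reports a database node as unavailable, the scheduler re-elects the serving owner from the remaining available nodes.

```python
# Illustrative agent-report -> re-election flow (names are assumed).
class Scheduler:
    def __init__(self, nodes):
        self.available = set(nodes)
        self.owner = min(self.available)      # deterministic initial choice

    def report_down(self, node):
        """Called by an agent when its database node becomes unavailable."""
        self.available.discard(node)
        if not self.available:
            raise RuntimeError("no database node available")
        if self.owner not in self.available:
            self.owner = min(self.available)  # re-elect per scheduling rule
        return self.owner

sched = Scheduler(["AZ1", "AZ2", "AZ3"])
print(sched.owner)                 # AZ1
print(sched.report_down("AZ1"))    # AZ2 takes over
```

A report about a non-owner node leaves the current owner in place; only the loss of the owning node triggers a switch, which keeps scheduling stable.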
Embodiment two comprises:
the ELB module, which provides the load balancing service of the central node and is also responsible for health detection of the database clusters across resource pools, and comprises:
a load balancer, which accepts incoming traffic from clients and can forward requests to back-end servers in one or more availability zones;
listeners, which check connection requests from clients and forward them; one or more listeners can be added to the ELB module, and a checked request is forwarded to a back-end server according to the defined allocation strategy;
back-end servers, which receive requests from the listeners and can undergo health checks. Back-end servers are bound to listeners one by one, and a listener forwards requests to one or more back-end servers using the defined protocol and port. The health-check function can be enabled to check the running condition configured for each back-end server: when an abnormal back-end server is found, the ELB module automatically distributes new requests to the other healthy back-end servers, and when the abnormal back-end server returns to normal operation, the ELB module automatically returns it to service;
The ELB module serves as the unified entry for the dbproxy instances in each AZ resource pool and performs IP convergence; the user's service traffic request is sent to one of the dbproxy instances through load balancing.
The dbproxy provides the following functions:
connection management: the dbproxy establishes connections with clients and maintains a group of connections with the database servers; when a client sends a database request, the dbproxy selects an appropriate database server and forwards the request to it;
request routing: the dbproxy routes requests to the database according to service requirements; the database server a request is forwarded to can be determined by the table name, primary key value or other conditions of the request;
read/write separation: the dbproxy forwards a request to multiple database servers and merges the results before returning them to the client; read/write separation improves the read performance of the database while making full use of the computing capacity of multiple database servers;
load balancing: the dbproxy selects a lightly loaded database server to process a request according to the load of the database servers, which helps balance the load across database servers and avoids the performance degradation caused by overloading a single server;
fault recovery: the dbproxy can automatically switch to a normal database server, and can synchronize the data recovered on the failed server to the other servers, so that data consistency is achieved.
The DBproxy cluster is responsible for horizontally scaling the database: a single database is extended to multiple databases within the DBproxy cluster, and the database middleware routes each data access request to one of the databases through a routing rule. In the DBproxy cluster, the DBproxy database middleware is independently deployed across 3 AZs, can read and write data on zookeeper, and the multi-AZ database instances are connected through a VIP. 3AZ means that, in a cloud computing environment, the application and its data are deployed in three availability zones respectively, and the 3AZ framework comprises the following:
a decentralized architecture, which makes each availability zone an independent data center and deploys a different copy of the application in each AZ, each AZ having an independent power supply system and network;
a network design, which ensures the stability of network connections among the multiple availability zones so that traffic can be switched quickly to the standby availability zone when needed; the network connection is realized over a dedicated network;
a data storage design, which stores data across the multiple availability zones so that replication and backup of the data are ensured; the data is stored through a database replication system;
a fault-tolerant design, which enables the system to automatically detect and handle availability zone faults: when one availability zone fails, the system automatically forwards requests to a standby availability zone, using a load balancer to realize request forwarding and load balancing;
The database cluster is responsible for carrying service traffic and synchronizing the service traffic data within the cluster. The database clusters support same-city cross-AZ deployment and cross-AZ VIP drifting: when one AZ fails, the VIP drifts to a database in another AZ and normal service continues. By proportioning different weight values, when the Master on AZ2 fails, the system switches to AZ1 according to the weight values;
The agent module is responsible for monitoring the state of the database cluster: it monitors the state information within the database and reports it to the AZ cluster in real time when a database node becomes unavailable;
The zookeeper module is responsible for task scheduling across resource pools by monitoring node information: when the zookeeper cluster receives abnormal-node information reported by the agent, a new owner is elected according to the scheduling principle, and even when the AZ1 resource pool fails, a zookeeper on AZ2 or AZ3 can be elected as the new owner by the zookeeper module's availability mechanism, so the judgment of the scheduling program is unaffected.
In summary, the ELB performs health detection on the distributed database clusters, realizing same-city cross-AZ disaster tolerance. The bottom layer adopts a master-slave architecture, with the database deployed across 3 AZs of different resource pools. Under normal conditions, service traffic is accessed through AZ1 while the databases on AZ2 and AZ3 synchronize the data on AZ1 in real time; when AZ1 becomes unavailable, service traffic is switched to AZ2 or AZ3 according to the proportioned weights, ensuring service reliability. After the switch completes, service traffic accesses the disaster recovery application on AZ2 or AZ3 and service continues uninterrupted. The multi-AZ network architecture eliminates the service unavailability of a single AZ, the database switching selection principle is realized by proportioning database weights, and multi-copy storage of data across resource pools guarantees data security.
Finally, it should be noted that: the foregoing description is only illustrative of the preferred embodiments of the present invention, and although the present invention has been described in detail with reference to the foregoing embodiments, it will be apparent to those skilled in the art that modifications may be made to the embodiments described, or equivalents may be substituted for elements thereof, and any modifications, equivalents, improvements or changes may be made without departing from the spirit and principles of the present invention.

Claims (10)

1. An ELB-based database disaster recovery system across resource pools, comprising:
the ELB module is used for providing load balancing service and is also responsible for health detection of the database clusters across resource pools;
a DBproxy cluster, wherein the DBproxy cluster is responsible for horizontally expanding a database;
the database cluster is responsible for carrying service traffic and synchronizing service traffic data in the cluster;
the agent module is responsible for monitoring the state of the database cluster;
and the zookeeper module is responsible for task scheduling across the resource pool by monitoring node information.
2. The ELB-based database cross-resource pool disaster recovery system of claim 1, wherein the ELB module comprises:
the load balancer receives incoming traffic from a client;
the listener is used for checking connection requests from the client and forwarding the requests;
and the back-end server, which receives requests from the listener and can undergo health checks.
3. The ELB-based database cross-resource-pool disaster recovery system of claim 1, wherein in the DBProxy cluster, the DBProxy database middleware is independently deployed across 3 AZs, can read and write data on zookeeper, and the multi-AZ database instances are connected through a VIP.
4. The ELB-based database cross-resource-pool disaster recovery system of claim 1, wherein the database clusters support same-city cross-AZ deployment and cross-AZ VIP drifting; when one AZ fails, the VIP drifts to a database in another AZ so that normal service continues, and by proportioning different weight values in the one-master-two-slave architecture of the database, when the Master on AZ2 fails, the system switches to AZ1 according to the weight values.
5. The ELB-based database disaster recovery system of claim 1, wherein the agent module monitors status information within the database and reports the status information to the AZ cluster in real time when a database node is unavailable.
6. The ELB-based database cross-resource-pool disaster recovery system of claim 1, wherein when the zookeeper cluster receives abnormal-node information reported by the agent, a new owner is elected according to the scheduling principle, and when the AZ1 resource pool fails, a zookeeper on AZ2 or AZ3 is elected as the new owner according to the availability mechanism, so that the judgment of the scheduling program is unaffected.
7. The ELB-based database cross-resource-pool disaster recovery system of claim 3, wherein 3AZ refers to deploying the application and its data in three availability zones respectively in a cloud computing environment, and the 3AZ framework comprises the following:
a decentralized architecture that makes each availability zone an independent data center and deploys a different copy of the application in each AZ;
a network design that ensures the stability of network connections among the multiple availability zones;
a data storage design that stores data across the multiple availability zones;
a fault-tolerant design that enables the system to automatically detect and handle availability zone faults.
8. The ELB-based database cross-resource-pool disaster recovery system of claim 1, wherein the ELB module serves as the unified entry for the dbproxy instances in each AZ resource pool and performs IP convergence, and the user's service traffic request is sent to one of the dbproxy instances through load balancing.
9. The ELB-based database cross-resource-pool disaster recovery system of claim 8, wherein the dbproxy provides the following functions:
connection management: the dbproxy establishes connections with clients and maintains a group of connections with the database servers;
request routing: the dbproxy routes requests to the database according to service requirements;
read/write separation: the dbproxy forwards a request to multiple database servers and merges the results before returning them to the client;
load balancing: the dbproxy selects a lightly loaded database server to process a request according to the load of the database servers;
and fault recovery: the dbproxy can automatically switch to a normal database server after a fault and continue to use it.
10. The ELB-based database cross-resource-pool disaster recovery system of claim 1, wherein the DBProxy cluster extends the database from a single database to multiple databases, and the database middleware routes each data access request to one of the databases through a routing rule.
CN202311648663.XA 2023-12-05 2023-12-05 Database cross-resource-pool disaster recovery system based on ELB Pending CN117873790A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311648663.XA CN117873790A (en) 2023-12-05 2023-12-05 Database cross-resource-pool disaster recovery system based on ELB

Publications (1)

Publication Number Publication Date
CN117873790A true CN117873790A (en) 2024-04-12

Family

ID=90595565

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311648663.XA Pending CN117873790A (en) 2023-12-05 2023-12-05 Database cross-resource-pool disaster recovery system based on ELB

Country Status (1)

Country Link
CN (1) CN117873790A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination