CN115510167A - Distributed database system and electronic equipment - Google Patents

Distributed database system and electronic equipment

Info

Publication number
CN115510167A
CN115510167A (application CN202211472755.2A)
Authority
CN
China
Prior art keywords
pod
database
master
slave
distributed database
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211472755.2A
Other languages
Chinese (zh)
Other versions
CN115510167B (en)
Inventor
Inventor not disclosed (不公告发明人)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anchao Cloud Software Co Ltd
Original Assignee
Anchao Cloud Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Anchao Cloud Software Co Ltd filed Critical Anchao Cloud Software Co Ltd
Priority to CN202211472755.2A priority Critical patent/CN115510167B/en
Publication of CN115510167A publication Critical patent/CN115510167A/en
Application granted granted Critical
Publication of CN115510167B publication Critical patent/CN115510167B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/27Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • G06F2009/4557Distribution of virtual machine instances; Migration and load balancing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention provides a distributed database system and an electronic device. The distributed database system responds to database access requests initiated by users and comprises: a master controller, and a StatefulSet object, a PDB object, and a proxy server controlled by the master controller. The StatefulSet object generates a plurality of Pods to form a Pod cluster; the PDB object limits the number of Pods in the Pod cluster to a threshold range; a reference relationship is formed between the StatefulSet object and the PDB object, the threshold range is obtained based on the reference relationship, and the number of Pods is adjusted to be within the threshold range. The proxy server forwards management requests and service requests to the Pod cluster. The invention maintains and manages the distributed database without relying on an external system, while avoiding the series of problems caused by disorder among environment-controlled objects.

Description

Distributed database system and electronic equipment
Technical Field
The invention relates to the technical field of databases, in particular to a distributed database system and electronic equipment.
Background
"Orchestration" refers primarily to the process by which a user, through certain tools or configurations, defines, configures, creates, and deletes a set of virtual machines and associated resources, with the cloud computing platform executing this work according to the specified logic. "Container orchestration" refers to tooling that defines the specification for organizing and managing containers, a typical example being Kubernetes. Kubernetes is a container orchestration tool that can serve as the infrastructure for building application systems. In an application system built this way, the applications and the application data they contain are in a split state and cannot be managed uniformly; meanwhile, a communication network must be established between the applications and the application data to enable communication. Unifying applications and application data through database migration is therefore an urgent need.
During database migration, fault tolerance is generally achieved by means of a distributed database. "Fault-tolerant" technology means that the system is guaranteed to function properly when some component fails or malfunctions, i.e., components can be removed and the system should continue to operate as intended. A distributed database provides services externally in a master-slave mode, and the master-slave replication function provided by the database enables multiple backups of the data. However, a database cluster formed by one master and multiple slaves requires managing and maintaining multiple servers simultaneously (i.e., the independent servers on which the separate databases reside). Moreover, when multiple servers cooperate, other problems typical of distributed databases may arise, such as network failure or split brain. Therefore, there is a need to converge the logic of coordinating the operation of multiple servers, the databases, and data replication in a simple and consistent manner.
The maintenance of distributed databases disclosed in the prior art relies on an external system (e.g., a monitoring system); this approach establishes a communication network between the external system and the system in which the distributed database resides, which increases the probability of system errors. Another approach uses cloud-native technology but does not implement database replication via group replication, and therefore suffers from insufficient performance; meanwhile, its monitoring indices must be derived from operation and maintenance experience, so it also suffers from insufficient universality.
In view of the above, there is a need for an improved distributed database system in the prior art to solve the above problems.
Disclosure of Invention
The invention aims to disclose a distributed database system and an electronic device, to overcome the prior-art defects that maintenance and management of a distributed database depend on an external system and suffer from insufficient performance and insufficient universality.
In order to achieve one of the above objects, the present invention provides a distributed database system, which responds to a database access request initiated by a user;
the distributed database system includes: a master controller, and a StatefulSet object, a PDB object, and a proxy server controlled by the master controller;
the StatefulSet object generates a plurality of Pods to form a Pod cluster; the PDB object limits the number of Pods in the Pod cluster to a threshold range; a reference relationship is formed between the StatefulSet object and the PDB object, the threshold range is obtained based on the reference relationship, and the number of Pods is adjusted to be within the threshold range; the proxy server forwards management requests and service requests to the Pod cluster;
the proxy server deploys a service proxy, and the main controller receives a service request and forwards the service request to the service proxy so as to forward the service request to the Pod cluster through the service proxy;
the service agent includes: the master agent responds to the data writing request and the data reading request forwarded by the master controller, and the slave agent only responds to the data reading request forwarded by the master controller, wherein the master agent and the slave agent are only exposed to the master controller;
wherein the database container is deployed independently within the Pod.
As a further improvement of the present invention, the distributed database system is constructed based on Kubernetes deploying a deletion policy, so that deletion operations are executed synchronously on the StatefulSet object and the PDB object based on the deletion policy.
As a further improvement of the present invention, when the PDB object receives a voluntary disruption event occurring in the distributed database system, a maintenance policy is generated by the PDB object so that maintenance is performed based on the maintenance policy; the voluntary disruption event includes one or any combination of: a Pod maintenance or upgrade event, a node deletion event, or a Pod deletion event.
As a further improvement of the present invention, the proxy server deploys a management agent and exposes it only to the master controller; the management agent receives management requests, creates a unique access name for each Pod via the domain name system, and monitors the Pods to obtain and record the status information of each Pod.
As a further improvement of the present invention, an init container and a tag controller are deployed in each Pod. The init container initializes the database container in the same Pod into a state of independently providing database services; a master database container and a plurality of slave database containers are determined among the database containers respectively deployed in the Pods based on a master-slave determination policy; and the tag controller reads the metadata information of the database container in its own Pod so as to identify, based on that metadata information, the role information of the Pod in which the database container is located.
As a further improvement of the present invention, the master agent determines a master database container according to the role information and forwards the data write request and the data read request to the master database container determined by the master-slave determination policy, and the slave agent determines a slave database container according to the role information and forwards the data read request to the slave database container determined based on the master-slave determination policy.
As a further improvement of the present invention, if a master database container currently providing database services is down, one slave database container is selected from the plurality of slave database containers as the master database container based on the master-slave determination policy.
As a further improvement of the present invention, the distributed database system operates independently in a computing device, which is a physical machine or a virtual machine cluster.
Based on the same inventive concept, the present invention also provides an electronic device, comprising:
a processor, a memory, and a computer program stored in the memory and configured to be executed by the processor;
the processor, when executing the computer program, executes logic contained in a distributed database system as claimed in any one of the above inventions.
Compared with the prior art, the invention has the beneficial effects that:
by the invention, the PDB object is introduced, so that high availability of the Pod cluster is achieved, the number of Pods in the Pod cluster is guaranteed to remain within the threshold range, and the capability of Kubernetes to manage database containers is expanded. Meanwhile, by establishing the reference relationship between the StatefulSet object and the PDB object, the StatefulSet object is prevented from being deleted arbitrarily, avoiding the series of problems caused by disorder among environment-controlled objects, such as the generation of a large amount of leftover resources and residual files, thereby effectively preventing the waste of resources in the distributed database system. Containerization of the database solidifies the tedious maintenance steps of the distributed database and lowers the maintenance-skill requirements on operation and maintenance personnel. The management agent monitors the Pods to obtain the status information of each Pod without relying on an external system, which reduces the probability of errors caused by maintenance operations performed on the distributed database system.
Drawings
FIG. 1 is an overall topology of a distributed database system shown in the present invention;
FIG. 2 is a topology diagram of a proxy server;
FIG. 3 is a topology diagram of a service broker;
fig. 4 is a topology diagram of an electronic device according to the present invention.
Detailed Description
The present invention is described in detail with reference to the embodiments shown in the drawings, but it should be understood that these embodiments are not intended to limit the present invention, and those skilled in the art should understand that functional, methodological, or structural equivalents or substitutions made by these embodiments are within the scope of the present invention.
Referring to fig. 1 to fig. 3, an embodiment of a distributed database system is shown.
The application scenario of the distributed database system disclosed by the invention is that a set of high-availability clusters are deployed after database containerization to realize the management of the database container, so that human errors and repeated labor are reduced through systematic operation, the resource use is centralized for management, and server resources are effectively utilized, so that the defects of dependence on an external system, insufficient performance and insufficient universality existing in the maintenance and management of the distributed database in the prior art are overcome. The distributed database system shown in the present invention operates independently in a computing device (e.g., the computer system 1 shown in fig. 1) that is a physical machine or a cluster of virtual machines. In the present embodiment, the distributed database system is exemplarily illustrated as being deployed in the computer system 1.
Referring to fig. 1, a distributed database system is responsive to a database access request initiated by a user; the database access request refers to any one of a data write request and a data read request. A distributed database system comprising: a master controller 10, a stateful object 21 controlled by the master controller 10, a PDB object 22, and a proxy server 2.
The StatefulSet object 21 generates a plurality of Pods to form a Pod cluster; the PDB object 22 limits the number of Pods in the Pod cluster to a threshold range; a reference relationship is formed between the StatefulSet object 21 and the PDB object 22; the threshold range is obtained based on the reference relationship, and the number of Pods in the Pod cluster is adjusted to fall within it. The proxy server 2 forwards management requests and service requests (i.e., data write requests and data read requests) to the Pod cluster. A database container is deployed independently within each Pod. Here, the PDB object 22 is a PodDisruptionBudget; the StatefulSet object 21, as its name ("stateful set") implies, is used to manage stateful services, e.g., MySQL or MongoDB clusters.
Specifically, after a plurality of independent databases are containerized, independent database containers are formed (namely MySQL-1 to MySQL-n, where n is a positive integer greater than or equal to 2). These database containers are mutually independent and can each independently provide database services to users. Meanwhile, a master database container and a plurality of slave database containers can be determined among the database containers based on a master-slave determination policy. The master database container provides data write and data read services externally, while the slave database containers provide data read services externally. Of course, the master and slave database containers may be designated manually by a user; preferably, however, they are determined among the independent database containers based on the master-slave determination policy, so that the systematized operation requires no manual user intervention, reducing human error and repeated labor.
The StatefulSet object 21 creates a plurality of Pods (i.e., Pod-1 to Pod-n, where n is a positive integer greater than or equal to 2) to form a Pod cluster, and deploys the database containers into the Pods independently, i.e., one database container is deployed in each Pod. For example, as shown in FIG. 1, MySQL-1 is deployed to Pod-1, and by analogy, MySQL-n is deployed to Pod-n.
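As a minimal sketch of such a deployment (the names, image, and replica count here are illustrative assumptions, not taken from the patent), a StatefulSet of this kind could be declared as follows; each of the n replicas runs exactly one database container:

```yaml
# Hypothetical sketch: a StatefulSet whose replicas each run one
# independent MySQL container (names and image are illustrative only).
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  serviceName: mysql-governing   # headless Service providing per-Pod DNS names
  replicas: 3                    # n Pods: mysql-0 .. mysql-2
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql            # exactly one database container per Pod
          image: mysql:8.0
          ports:
            - containerPort: 3306
```

With this manifest, the StatefulSet gives each Pod a stable, ordered name (mysql-0, mysql-1, …), matching the patent's requirement that only one Pod with a given name exists in the cluster.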
A master database container and a plurality of slave database containers are determined among the database containers (namely MySQL-1 to MySQL-n) based on a master-slave determination policy. The master-slave determination policy may be configured by custom logic issued by a user, or may be pre-configured by the distributed database system and chosen by the user; this embodiment places no limit on it. The specific content of the policy may be, for example, that the first-generated database container serves as the master, or that the most highly configured database container serves as the master, as long as it guarantees exactly one master database container and several slave database containers in the distributed database system. When the master database container is removed or crashes and can no longer provide data write or data read services, one slave database container is selected from the plurality of slave database containers to serve as the new master based on the master-slave determination policy, ensuring that data write and data read services are provided externally without interruption, thereby preserving the user experience.
Referring to fig. 2, the proxy server 2 deploys a management agent 23 and a service agent 24. The management agent 23 is exposed only to the master controller 10, i.e., it is controlled only by the master controller 10. Specifically, the management agent 23 receives management requests issued by users and creates a unique access name for each Pod via the domain name system, thereby uniquely locating each Pod. Initialization scripts run inside the init containers deployed in the Pods; each script accesses the database container deployed in its own Pod, initializing and starting it into a state of independently providing database services. Once initialized and started, a database container can normally provide database services externally (e.g., the aforementioned data write and data read services). For example, init container-1 deployed in Pod-1 internally runs an initialization script and accesses MySQL-1 to initialize and start it, and so on. After a master database container and a plurality of slave database containers have been determined among the database containers deployed in the Pods based on the master-slave determination policy, the tag controller deployed in each Pod reads the metadata information of the database container in its own Pod so as to identify, based on that metadata information, the role information of the Pod in which the database container is located. For example, in Pod-1, tag controller-1 reads MySQL-1's metadata table-1 to determine whether MySQL-1 is the master or a slave database container. If MySQL-1 is the master database container, Pod-1's role information is marked as master; if MySQL-1 is a slave database container, Pod-1's role information is marked as slave, and so on.
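The init-container and tag-controller arrangement described above might look like the following Pod-template fragment (all names, the image, and the script path are hypothetical illustrations, not prescribed by the patent):

```yaml
# Hypothetical Pod-template fragment: an init container runs an
# initialization script against the database container in the same Pod
# before the main containers start; a sidecar reads the database's
# metadata and labels the Pod with its role (master/slave).
spec:
  template:
    spec:
      initContainers:
        - name: init-mysql
          image: mysql:8.0
          command: ["bash", "/scripts/init.sh"]   # initialize & bootstrap the database
          volumeMounts:
            - name: scripts
              mountPath: /scripts
      containers:
        - name: mysql
          image: mysql:8.0
        - name: label-controller                   # hypothetical sidecar: reads DB metadata,
          image: example/label-controller:latest   # then patches the Pod's "role" label
```

Because init containers always run to completion before the main containers start, this ordering guarantees the database is initialized before it begins serving.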
It should be noted that the management agent 23 is a Governing Service controlled by the master controller 10, configured to receive management requests issued by users, create a unique access name for each Pod via the domain name system, and monitor the Pods to obtain and record the status information of each Pod. Since, in the default environment, a DNS record for a Pod is generated only when the Pod enters the ready state, "publishNotReadyAddresses=true" is set to remove this restriction for the sake of stable management, ensuring that all Pods (i.e., both Pods that have entered the ready state and Pods that have not) generate their corresponding records. The init container is one of the three kinds of containers in a Pod and is used to run initialization tasks that guarantee the running environment inside the container; the database container is therefore initialized and started through the init container to ensure that it can normally provide database services (i.e., data write or data read services) externally. The tag controller is a LabelController deployed in the Pod, used to read the metadata information of the database container deployed in the same Pod and thereby identify the Pod's role information. In the Kubernetes environment, the semantics provided by the StatefulSet object 21 ensure that only one Pod with a given name exists in the Pod cluster; that is, if downtime occurs, the StatefulSet object 21 does not perform failover. Kubernetes will simply reschedule and regenerate the Pod, but the master-slave relationship between the original database containers may become disordered. Therefore, a tag controller is deployed in each Pod to identify the Pod's role information and inform the outside of the Pod's business role, thereby preventing disorder of the master-slave relationship.
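A governing Service of the kind described could be sketched as below (the Service name and port are illustrative assumptions): it is headless (clusterIP: None), so each Pod receives a unique DNS name of the form pod-name.service-name.namespace.svc, and publishNotReadyAddresses ensures records exist even for Pods that are not yet Ready:

```yaml
# Sketch of the "governing service" (names are hypothetical): headless so
# each Pod gets a unique DNS record, with publishNotReadyAddresses: true
# so records are created even for Pods that have not reached Ready state.
apiVersion: v1
kind: Service
metadata:
  name: mysql-governing
spec:
  clusterIP: None                 # headless: per-Pod DNS instead of a virtual IP
  publishNotReadyAddresses: true  # cancel the ready-only restriction noted above
  selector:
    app: mysql
  ports:
    - port: 3306
```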
Referring to fig. 3, the service agent 24 includes a master agent 241 and a slave agent 242. The master agent 241 and the slave agent 242 are exposed only to the master controller 10, i.e., are controlled by the master controller 10. The master controller 10 receives a service request and forwards it to the service agent 24, which forwards it on to the Pod cluster. The master agent 241 responds to data write requests and data read requests forwarded by the master controller 10, while the slave agent 242 responds to data read requests forwarded by the master controller 10. After the tag controllers are deployed in the Pods and have identified the role information of each Pod, the master agent 241 determines the master database container according to the role information (i.e., it determines the master Pod from the role information and thereby the master database container deployed in it) and forwards data write and data read requests to the master database container, which provides the data write and data read services; similarly, the slave agent 242 determines a slave database container according to the role information and forwards data read requests to it, the slave database container providing the data read service. The master agent 241 is a Primary Service, and the slave agent 242 is a Standby Service.
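One plausible way to realize the Primary/Standby split (label key and values here are hypothetical, matching the role labels written by the tag controller in this sketch) is a pair of Services whose selectors differ only in the role label:

```yaml
# Sketch: the Primary Service routes reads and writes to the single Pod
# labeled role: master; a parallel Standby Service with "role: slave"
# in its selector would route reads to the slave Pods.
apiVersion: v1
kind: Service
metadata:
  name: mysql-primary
spec:
  selector:
    app: mysql
    role: master        # only the Pod the tag controller marked as master
  ports:
    - port: 3306
```

Because the Service endpoints track the labels continuously, relabeling a Pod after failover automatically redirects write traffic to the new master with no client-side change.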
Referring to fig. 1, a reference relationship is formed between the StatefulSet object 21 and the PDB object 22. The StatefulSet object 21 creates Pods to form a Pod cluster. Upon receiving a voluntary disruption event from the distributed database system, the PDB object 22 generates a maintenance policy so that maintenance is performed based on it. Since the Pod cluster needs to remain highly available, the number of nodes must remain as stable as possible; if downtime occurs, a set of mechanisms is required to handle the situation, and voluntary disruptions are therefore handled by the PDB object 22. Voluntary disruption events include one or any combination of: a Pod maintenance or upgrade event, a node deletion event, or a Pod deletion event. Examples include draining a node for repair or upgrade, draining a node from a cluster to shrink it, removing a Pod from a node so that other Pods can use the node, deleting the controller that manages a Pod, updating a Pod template causing Pod restarts, accidentally deleting a Pod, and the like.
The PDB object 22 limits the number of Pods in the Pod cluster to a threshold range. The PDB object 22 includes two key parameters, ".spec.minAvailable" and ".spec.maxUnavailable". The parameter ".spec.minAvailable" denotes the minimum number or proportion of Pods that must remain available during a voluntary disruption, and the parameter ".spec.maxUnavailable" denotes the maximum number or proportion of Pods that may be unavailable during a voluntary disruption. Therefore, by establishing a reference relationship between the StatefulSet object 21 and the PDB object 22, the StatefulSet object 21 obtains the threshold range based on the reference relationship and adjusts the number of Pods to be within it (for example, if the number of Pods is below the threshold range, a Pod is created and a database container is independently deployed in it), thus ensuring high availability of the database containers contained in the Pod cluster and improving the capability of Kubernetes to manage database containers. In addition, once the reference relationship is formed between the StatefulSet object 21 and the PDB object 22, it also governs Kubernetes deletion operations, preventing arbitrary deletion and the series of problems caused by disorder among environment-controlled objects. After the StatefulSet object 21 is deleted, the PDB object 22 limiting the number of Pods no longer has any effect, so it is also deleted through the reference relationship; likewise, the StatefulSet object 21 is prevented from being deleted alone, preventing the generation of a large amount of leftover resources and residual files. The reference relationship between the StatefulSet object 21 and the PDB object 22 is established as an OwnerReference relationship.
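A hedged sketch of such a PDB follows (names and the minAvailable value are illustrative, and the uid shown is a placeholder that a controller would fill in at runtime): minAvailable bounds evictions during voluntary disruptions, while ownerReferences ties the PDB's lifecycle to the StatefulSet so that deleting the owner garbage-collects the PDB as well:

```yaml
# Sketch of the PodDisruptionBudget (hypothetical names/values):
# .spec.minAvailable keeps the Pod count inside the threshold range
# during voluntary disruptions; ownerReferences links the PDB to the
# StatefulSet so the two are deleted together.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: mysql-pdb
  ownerReferences:
    - apiVersion: apps/v1
      kind: StatefulSet
      name: mysql
      uid: 00000000-0000-0000-0000-000000000000   # placeholder; set by the controller
spec:
  minAvailable: 2        # at least 2 Pods must survive any voluntary disruption
  selector:
    matchLabels:
      app: mysql
```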
Therefore, the distributed database system is constructed based on Kubernetes deploying a deletion policy, so that deletion operations are executed synchronously on the StatefulSet object 21 and the PDB object 22 through their reference relationship, based on the deletion policy.
Through the distributed database system disclosed by the invention, high availability of the Pod cluster is achieved by referencing the PDB object 22; the number of Pods in the Pod cluster is guaranteed to remain within the threshold range, ensuring high availability of the database containers contained in the Pod cluster, while expanding the capability of Kubernetes to manage database containers. In addition, by establishing the reference relationship between the StatefulSet object 21 and the PDB object 22, the StatefulSet object 21 is prevented from being deleted arbitrarily, avoiding the series of problems caused by disorder among environment-controlled objects, such as the generation of a large amount of leftover resources and residual files, thereby effectively preventing the waste of resources in the distributed database system. Containerization of the database solidifies the maintenance steps of the distributed database and lowers the maintenance-skill requirements on operation and maintenance personnel. By monitoring the Pods through the management agent 23 to obtain the status information of each Pod, no reliance on an external system is required, which reduces the probability of errors caused by maintenance operations performed on the distributed database system.
Based on the technical solutions disclosed in the foregoing embodiments of the distributed database system, this embodiment also discloses a specific implementation of an electronic device 500.
Referring to fig. 4, the present embodiment discloses an electronic device 500, including: a processor 51, a memory 52, and a computer program stored in the memory 52 and configured to be executed by the processor 51; when executing the computer program, the processor 51 performs the logic contained in the distributed database system according to the foregoing embodiments. Specifically, the memory 52 comprises a plurality of storage units, namely storage units 521 to 52i, where i is a positive integer greater than or equal to two. The processor 51 and the memory 52 each have access to a system bus 53. The form of the system bus 53 is not specifically limited; it may be an I2C bus, an SPI bus, an SCI bus, a PCI-e bus, an ISA bus, etc., and may be changed reasonably according to the specific type of the electronic device 500 and the requirements of the application scenario. The system bus 53 is not the point of the invention of the present application and is not set forth herein.
It should be noted that the memory 52 in this embodiment may be physical memory, in which case the electronic device 500 is understood as a physical computer, a computer cluster, or a cluster server; the memory 52 may also be virtual, for example a virtual storage space formed over physical storage devices by an underlying virtualization technology, in which case the electronic device 500 is configured as a virtual device such as a virtual server or a virtual cluster, or understood as a PC, a tablet computer, a smartphone, a smart wearable electronic device, a physical cluster, or a data center.
The electronic device 500 shown in this embodiment shares the technical solutions of the previous embodiments; please refer to those embodiments, and the description is omitted here.
The various illustrative logical blocks, or elements, described in connection with the embodiments disclosed herein may be implemented or performed with a general purpose processor, a digital signal processor, an Application Specific Integrated Circuit (ASIC), a field programmable gate array or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a digital signal processor and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a digital signal processor core, or any other similar configuration.
The detailed description above is only a specific description of possible embodiments of the present invention and is not intended to limit the scope of the invention; equivalent embodiments or modifications made without departing from the technical spirit of the present invention shall fall within the scope of the present invention.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned.
Furthermore, it should be understood that although the present specification describes embodiments, not every embodiment includes only a single embodiment, and such description is for clarity purposes only, and it is to be understood that all embodiments may be combined as appropriate by one of ordinary skill in the art to form other embodiments as will be apparent to those of skill in the art from the description herein.

Claims (9)

1. A distributed database system, responding to a database access request initiated by a user;
wherein the distributed database system comprises: a master controller, and a StatefulSet object, a PDB object, and a proxy server that are controlled by the master controller;
wherein the StatefulSet object generates a plurality of Pods to form a Pod cluster, the PDB object limits a threshold range of the Pod quantity in the Pod cluster, a reference relationship is formed between the StatefulSet object and the PDB object, the threshold range is obtained based on the reference relationship, and the Pod quantity is adjusted to be within the threshold range; the proxy server forwards management requests and service requests to the Pod cluster;
wherein the proxy server deploys a service agent, and the master controller receives a service request and forwards it to the service agent, so as to forward the service request to the Pod cluster through the service agent;
wherein the service agent comprises: a master agent and a slave agent, the master agent responding to data write requests and data read requests forwarded by the master controller, and the slave agent responding only to data read requests forwarded by the master controller, wherein the master agent and the slave agent are exposed only to the master controller;
and wherein the database containers are independently deployed within the Pods.
2. The distributed database system of claim 1, wherein the distributed database system is configured with a deletion policy based on Kubernetes deployment, so that deletion operations are performed synchronously on the StatefulSet object and the PDB object based on the deletion policy.
3. The distributed database system of claim 2, wherein, when the distributed database system receives a voluntary disruption event, the PDB object generates a maintenance policy and performs maintenance based on the maintenance policy; the voluntary disruption event comprises one or any combination of: a Pod maintenance or upgrade event, a node deletion event, or a Pod deletion event.
4. The distributed database system of claim 1, wherein the proxy server deploys a management agent and exposes it only to the master controller, receives management requests through the management agent, creates a unique access name for each Pod via the domain name system, and monitors the Pods to obtain and record the state information of each Pod.
5. The distributed database system of claim 4, wherein an Init container and a label controller are deployed in each Pod, the Init container initializes the database container in the same Pod into a state in which it can independently provide database services, the database containers respectively deployed in the Pods determine one master database container and a plurality of slave database containers based on a master-slave determination policy, and the label controller reads metadata information of the database container in the Pod to which it belongs, so as to identify, based on the metadata information, the role information of the Pod in which the database container is located.
6. The distributed database system of claim 5, wherein the master agent determines a master database container from the role information and forwards the data write request and data read request to the master database container determined by the master-slave determination policy, and wherein the slave agent determines a slave database container from the role information and forwards the data read request to the slave database container determined based on the master-slave determination policy.
7. The distributed database system of claim 5, wherein if a master database container currently providing database services is down, one slave database container is selected from the plurality of slave database containers as the master database container based on the master-slave determination policy.
8. The distributed database system of any of claims 1-7, wherein the distributed database system runs independently in one computing device, the computing device being a physical machine or a cluster of virtual machines.
9. An electronic device, comprising:
a processor, a memory, and a computer program stored in the memory and configured to be executed by the processor;
the processor, when executing the computer program, executes the logic contained in the distributed database system of any of claims 1-8.
CN202211472755.2A 2022-11-23 2022-11-23 Distributed database system and electronic equipment Active CN115510167B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211472755.2A CN115510167B (en) 2022-11-23 2022-11-23 Distributed database system and electronic equipment


Publications (2)

Publication Number Publication Date
CN115510167A true CN115510167A (en) 2022-12-23
CN115510167B CN115510167B (en) 2023-05-23

Family

ID=84513945

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211472755.2A Active CN115510167B (en) 2022-11-23 2022-11-23 Distributed database system and electronic equipment

Country Status (1)

Country Link
CN (1) CN115510167B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2250608A1 (en) * 1997-10-31 1999-04-30 Sun Microsystems, Inc. Distributed system and method for controlling access control to network resources and event notifications
CN111651275A (en) * 2020-06-04 2020-09-11 山东汇贸电子口岸有限公司 MySQL cluster automatic deployment system and method
CN113297031A (en) * 2021-05-08 2021-08-24 阿里巴巴新加坡控股有限公司 Container group protection method and device in container cluster
CN114239055A (en) * 2021-11-29 2022-03-25 浪潮云信息技术股份公司 Distributed database multi-tenant isolation method and system


Also Published As

Publication number Publication date
CN115510167B (en) 2023-05-23


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant