CN116737810B - Consensus service interface for a distributed time-series database - Google Patents

Consensus service interface for a distributed time-series database

Info

Publication number
CN116737810B
CN116737810B (application CN202310503870.XA, filed as CN202310503870A)
Authority
CN
China
Prior art keywords
consensus
interface
user data
distributed
group
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310503870.XA
Other languages
Chinese (zh)
Other versions
CN116737810A (en)
Inventor
王建民
黄向东
乔嘉林
张金瑞
田原
谭新宇
宋韶旭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianmou Technology Beijing Co ltd
Tsinghua University
Original Assignee
Tianmou Technology Beijing Co ltd
Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianmou Technology Beijing Co ltd, Tsinghua University filed Critical Tianmou Technology Beijing Co ltd
Priority to CN202310503870.XA priority Critical patent/CN116737810B/en
Publication of CN116737810A publication Critical patent/CN116737810A/en
Application granted granted Critical
Publication of CN116737810B publication Critical patent/CN116737810B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/25Integrating or interfacing systems involving database management systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24Querying
    • G06F16/245Query processing
    • G06F16/2455Query execution
    • G06F16/24553Query execution of query operations
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention provides a consensus service interface for a distributed time-series database, which mainly comprises a creation interface, an add/delete consensus group interface and a read-write interface. The creation interface is used to plug in a consensus algorithm specified by the upper layer; the add/delete consensus group interface is used to create/delete the consensus groups that manage user data; and the read-write interface is used to write user data into, and read it out of, the corresponding consensus group through the consensus algorithm. The consensus service interface is externally unified while supporting different consensus algorithm implementations, and can therefore provide a better-suited consensus scheme for application scenarios with different consistency requirements.

Description

Consensus service interface for a distributed time-series database
Technical Field
The invention relates to the technical field of computer data management, and in particular to a consensus service interface for a distributed time-series database.
Background
A distributed database manages data by redundantly copying and storing it on different computer nodes. It exploits the storage resources of all nodes in the cluster and can therefore handle massive data volumes; at the same time, it can continue to serve requests when a minority of nodes crash or the network is interrupted, improving the fault tolerance of data management. A distributed database maintains consistency among the multiple copies of the same piece of data through a consensus algorithm; different consensus algorithms add different constraints to the system's read-write path and thus expose different consistency levels across copies. In other words, the consensus algorithm is critical to the data consistency of a distributed database.
A distributed time-series database faces different demands on its consensus algorithm in different business scenarios. For example, in application monitoring scenarios, Google's time-series database Monarch explicitly states that it favors availability over consistency for monitoring time-series data, yet key meta-information such as data partitioning is still managed inside Monarch using the Spanner database, which guarantees external consistency. However, current consensus algorithms each implement their own interfaces independently, which inevitably forces other modules in the database to perform a large amount of special-case handling to adapt to each algorithm. This hurts the iteration efficiency and maintenance cost of the database and also makes it harder to integrate new consensus algorithms in the future.
Existing consensus algorithm implementations are therefore ill-suited to coping with diverse business scenarios.
Disclosure of Invention
To solve the above problems, the invention provides a consensus service interface for a distributed time-series database that is externally unified while supporting different consensus algorithm implementations, and can provide a better-suited consensus scheme for application scenarios with different consistency requirements.
In a first aspect, the invention provides a consensus service interface for a distributed time-series database that manages each share of user data with a consensus group; the consensus service interface is deployed on every distributed node of the database. The consensus service interface includes: a creation interface, an add/delete consensus group interface and a read-write interface.
The creation interface is used for accessing a consensus algorithm specified by the upper layer, where the upper layer is the application layer of the distributed time-series database.
The add/delete consensus group interface responds to an upper-layer consensus group add/delete request by locally creating/deleting the state machine interface corresponding to that consensus group.
The read-write interface responds to an upper-layer user data write request for a consensus group by writing the corresponding user data into every state machine interface of that group via the consensus algorithm; it also responds to a user data read request from the client side by reading the corresponding user data.
A consensus group has one corresponding state machine interface on each distributed node it contains.
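The three interfaces of this first aspect can be summarized as an abstract service contract. The sketch below is illustrative only; the class and method names are assumptions, not the patent's actual identifiers.

```python
from abc import ABC, abstractmethod
from typing import Any, List


class ConsensusService(ABC):
    """Unified consensus service interface deployed on every distributed node.

    Hypothetical sketch of the patent's creation, add/delete consensus group,
    and read-write interfaces; all names are illustrative.
    """

    @abstractmethod
    def create(self, algorithm: str) -> None:
        """Creation interface: plug in the consensus algorithm specified by the upper layer."""

    @abstractmethod
    def add_group(self, group_id: int, members: List[int]) -> None:
        """Add-consensus-group interface: create the local state machine for the group."""

    @abstractmethod
    def delete_group(self, group_id: int) -> None:
        """Delete-consensus-group interface: delete the local state machine for the group."""

    @abstractmethod
    def write(self, group_id: int, data: Any) -> None:
        """Write interface: apply data to every replica of the group via the algorithm."""

    @abstractmethod
    def read(self, key: Any) -> Any:
        """Read interface: route the request to a node that manages the data."""
```

A concrete subclass would wire these methods to a specific consensus algorithm implementation, keeping the rest of the database independent of the algorithm type.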
According to the consensus service interface for a distributed time-series database provided by the invention, the add/delete consensus group interface is further configured to update a local consensus group list based on the consensus group add/delete request.
The invention provides a consensus service interface for a distributed time-series database, wherein the read-write interface comprises a write interface.
The write interface responds to an upper-layer user data write request for a consensus group by writing the corresponding user data into the locally held state machine interface of that group, and by writing it into the state machine interfaces on the group's other distributed nodes based on the local consensus group list and the consensus algorithm.
The invention provides a consensus service interface for a distributed time-series database, wherein the read-write interface further comprises a read interface.
The read interface responds to a user data read request from the client side by looking up, in a lookup table, any distributed node in the consensus group that manages the corresponding user data, and by routing the read request to that node so that the corresponding user data is read.
The lookup table records the correspondence between user data, consensus groups and distributed nodes.
The invention provides a consensus service interface for a distributed time-series database, wherein the read-write interface further comprises a read interface.
The read interface responds to a user data read request from the client side by computing, via consistent hashing, the consensus group that manages the corresponding user data; the read request is then routed to any distributed node of the computed group, based on the consensus-group-to-node correspondence table that all distributed nodes of the database synchronize in the background, so that the corresponding user data is read.
According to the consensus service interface for a distributed time-series database provided by the invention, the consensus service interface further comprises an add/delete replica interface.
The add/delete replica interface responds to an upper-layer consensus group member add/delete request by using the consensus algorithm to change the local consensus group lists of all distributed nodes involved before and after the membership change, and to write/delete the group's user data on the distributed node being added/removed.
According to the consensus service interface for a distributed time-series database provided by the invention, the consensus service interface further comprises a start-stop interface.
The start-stop interface is used to start and stop the local consensus-layer RPC service under the instruction of the upper layer.
According to the consensus service interface for a distributed time-series database, an upper-layer user data write request for a consensus group is sent to only one distributed node of that group.
According to the consensus service interface for a distributed time-series database, before the group's user data is written on the distributed node to be added, the state machine interface corresponding to the group is created locally by the add/delete consensus group interface of that node at the request of the upper layer.
According to the consensus service interface for a distributed time-series database, after the group's user data is deleted on the distributed node to be removed, the state machine interface corresponding to the group is deleted locally by the add/delete consensus group interface of that node at the request of the upper layer.
The invention provides a consensus service interface for a distributed time-series database that is deployed on every distributed node of the database. It mainly comprises a creation interface, an add/delete consensus group interface and a read-write interface. The creation interface is used to plug in a consensus algorithm specified by the upper layer; the add/delete consensus group interface is used to create/delete the consensus groups that manage user data; and the read-write interface is used to write user data into, and read it out of, the corresponding consensus group through the consensus algorithm. The consensus service interface is externally unified while supporting different consensus algorithm implementations, and can provide a better-suited consensus scheme for application scenarios with different consistency requirements.
Drawings
In order to illustrate the invention or the technical solutions of the prior art more clearly, the drawings used in the embodiments are briefly described below. The drawings described below show some embodiments of the invention; other drawings can be derived from them by a person skilled in the art without inventive effort.
FIG. 1 is a partial schematic view of a distributed storage framework provided by the present invention;
FIG. 2 is a schematic diagram of a process for creating a consensus group provided by the present invention;
FIG. 3 is a schematic diagram of a process for deleting a consensus group provided by the present invention;
FIG. 4 is a schematic diagram of a process for adding members of a consensus group provided by the present invention;
FIG. 5 is a schematic diagram of a process for deleting consensus group members provided by the present invention.
Detailed Description
To make the objects, technical solutions and advantages of the invention clearer, the technical solutions of the invention are described below clearly and completely with reference to the accompanying drawings. The described embodiments are some, not all, of the embodiments of the invention; all other embodiments obtained by those skilled in the art without inventive effort fall within the scope of the invention.
The consensus service interface for a distributed time-series database of the invention is described below in connection with FIGS. 1-5.
The invention provides a consensus service interface for a distributed time-series database that is deployed on every distributed node of the database. The distributed nodes, together with the consensus service interfaces deployed on them, form the distributed storage framework of the database, which involves the following basic concepts:
Distributed node: a process; it can be a physical node or a container runtime environment.
Consensus group: manages one share of user data, corresponding to one shard of the database's sharding scheme; a consensus group consists of several distributed nodes.
Replica (copy): the logical unit of a consensus group on one distributed node. Each distributed node of a consensus group stores one copy of the user data, so there are k copies on k distributed nodes, where k is the total number of distributed nodes in the group.
State machine interface: the structure through which a consensus group manages the data of the underlying storage engine; it maintains the local copy of the user data. Each state machine interface is owned by one user data copy and exists on one distributed node.
Consensus service interface: the external service entry through which the consensus layer manages all the different user data copies on its node.
FIG. 1 shows a partial schematic diagram of the distributed storage framework for a 4-node, 3-replica cluster that maintains two shares of user data. Specifically:
1) The whole cluster has 4 nodes, and each node has 1 consensus service interface.
2) The whole cluster has 2 consensus groups. Consensus group 1 exists on nodes 1, 2 and 3, and consensus group 2 exists on nodes 2, 3 and 4.
3) The consensus service interface of node 1 manages replica 1 of consensus group 1, corresponding to 1 state machine interface, and holds half of the cluster's user data.
The consensus service interface of node 2 manages replica 2 of consensus group 1 and replica 1 of consensus group 2, corresponding to 2 state machine interfaces, and holds all of the cluster's user data.
The consensus service interface of node 3 manages replica 3 of consensus group 1 and replica 2 of consensus group 2, corresponding to 2 state machine interfaces, and holds all of the cluster's user data.
The consensus service interface of node 4 manages replica 3 of consensus group 2, corresponding to 1 state machine interface, and holds half of the cluster's user data.
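The FIG. 1 layout can be checked with a few lines of code. The group membership below copies the figure; everything else is an illustrative sketch.

```python
# Consensus-group membership from FIG. 1: 4 nodes, 2 groups, 3 replicas each.
GROUPS = {1: [1, 2, 3], 2: [2, 3, 4]}


def state_machines_on(node: int) -> list:
    """IDs of the groups whose state machine interfaces this node's consensus
    service manages (one state machine interface per local replica)."""
    return sorted(g for g, members in GROUPS.items() if node in members)
```

Nodes 2 and 3 manage two state machine interfaces each and therefore hold all of the cluster's user data; nodes 1 and 4 manage one each and hold half.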
Further, the consensus service interface includes: a creation interface, an add/delete consensus group interface and a read-write interface.
The creation interface is used for accessing a consensus algorithm specified by the upper layer.
That is, the upper layer specifies the consensus algorithm according to the consistency level and performance required by the application scenario, and the creation interface plugs in the specified algorithm (in other words, the creation interface supports consensus algorithm implementations with different consistency levels and performance characteristics), so that the distributed nodes reach consensus according to that algorithm.
The consistency level and performance requirements are exemplified as follows:
Single-replica strong consistency: SimpleConsensus;
Multi-replica strong consistency: RatisConsensus;
Multi-replica weak consistency: IoTConsensus.
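The pairing of requirement and implementation listed above can be written as a selection table. The algorithm names are the ones given in the text; the table keys and the selection function are assumptions about how the upper layer might choose, not part of the patent.

```python
# Maps (replication, consistency) requirements to the implementations named
# in the text; the keys and selection logic are illustrative assumptions.
ALGORITHMS = {
    ("single-replica", "strong"): "SimpleConsensus",
    ("multi-replica", "strong"): "RatisConsensus",
    ("multi-replica", "weak"): "IoTConsensus",
}


def choose_algorithm(replication: str, consistency: str) -> str:
    """Pick a consensus algorithm implementation to pass to the creation interface."""
    return ALGORITHMS[(replication, consistency)]
```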
The add/delete consensus group interface responds to an upper-layer consensus group add/delete request by locally creating/deleting the state machine interface corresponding to that consensus group.
Accordingly, the add/delete consensus group interface is mainly invoked when the upper layer creates or deletes a consensus group, and is friendly to deployments with many consensus groups.
FIG. 2 is a schematic diagram of the process of creating a consensus group. As shown in FIG. 2, when a consensus group is created, the upper layer synchronously sends a creation request for the group to the add/delete consensus group interface of each distributed node in the group; each of those interfaces responds to the creation request by locally creating the state machine interface corresponding to the group, and then returns a creation-success response. Similarly, FIG. 3 is a schematic diagram of the process of deleting a consensus group. As shown in FIG. 3, when a consensus group is deleted, the upper layer synchronously sends a deletion request for the group to the add/delete consensus group interface of each distributed node in the group; each of those interfaces responds to the deletion request by deleting the local state machine interface corresponding to the group, and then returns a deletion-success response.
The read-write interface responds to an upper-layer user data write request for a consensus group by writing the corresponding user data into every state machine interface of that group via the consensus algorithm; it is further configured to read corresponding user data in response to a user data read request from the client side.
That is, the read-write interface can route read and write requests into the corresponding consensus group.
Here, the upper layer is the application layer of the distributed time-series database (more specifically, the developers of the application layer).
Specifically, the add/delete consensus group interface is further configured to update a local consensus group list based on the consensus group add/delete request.
The node's add/delete consensus group interface also updates the locally stored consensus group list, so that replica synchronization within the same consensus group can be achieved with the consensus algorithm.
Specifically, the read-write interface comprises a write interface.
The write interface responds to an upper-layer user data write request for a consensus group by writing the corresponding user data into the locally held state machine interface of that group, and by writing it into the state machine interfaces on the group's other distributed nodes based on the local consensus group list and the consensus algorithm.
It can be seen that the upper layer sends the user data write request for a consensus group to only one of the group's distributed nodes; the data written on that node is synchronized to the other nodes of the group by the consensus algorithm.
The upper layer of the distributed storage framework can create the consensus group for each share of user data according to the available hardware resources, after which the user data can be written into the group through the write interface of any group node.
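The write path just described (one node receives the request; the consensus algorithm fans it out to the rest of the group list) can be sketched as follows. Replication is simplified to a direct append, standing in for the real consensus algorithm, and all names are illustrative.

```python
from typing import Any, Dict, List


class Node:
    """Toy distributed node; the direct peer append stands in for the
    replication performed by the real consensus algorithm."""

    def __init__(self, node_id: int) -> None:
        self.node_id = node_id
        self.group_list: Dict[int, List[int]] = {}   # local consensus group list
        self.state_machines: Dict[int, List[Any]] = {}

    def add_group(self, group_id: int, members: List[int]) -> None:
        self.group_list[group_id] = list(members)
        self.state_machines[group_id] = []

    def write(self, group_id: int, record: Any, cluster: Dict[int, "Node"]) -> None:
        """Write interface: apply locally, then replicate to the other members
        found in the local consensus group list."""
        self.state_machines[group_id].append(record)
        for peer_id in self.group_list[group_id]:
            if peer_id != self.node_id:
                cluster[peer_id].state_machines[group_id].append(record)
```

A single write sent to any one member thus ends up in every replica's state machine, which is the behavior the write interface guarantees.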
Specifically, the read-write interface further comprises a read interface.
The read interface responds to a user data read request from the client side by looking up, in a lookup table, any distributed node in the consensus group that manages the corresponding user data, and by routing the read request to that node so that the corresponding user data is read.
The lookup table records the correspondence between user data, consensus groups and distributed nodes.
Alternatively, the read interface responds to a user data read request from the client side by computing, via consistent hashing, the consensus group that manages the corresponding user data; the read request is then routed to any distributed node of the computed group, based on the consensus-group-to-node correspondence table that all distributed nodes of the database synchronize in the background, so that the corresponding user data is read.
That is, the invention provides two read interface implementations: lookup table and consistent hashing. In a cluster using the lookup table approach, the upper layer has a storage node that stores the lookup table. When a distributed node receives a user data read request, it searches the lookup table with the field carried by the request to determine which consensus group the request should be routed to and the node list of that group, and finally routes the request to a corresponding node to read the user data. In a cluster using consistent hashing, when a distributed node receives a user data read request, it hashes the field carried by the request to obtain the corresponding consensus group, obtains the group's node list from the group-to-node correspondence that is continuously synchronized in the background across the cluster, and finally forwards the request to a corresponding node to read the user data.
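The two read-routing strategies can be sketched side by side. The table contents and the bucket-to-group mapping are illustrative, and md5 is only one possible hash choice, not one specified by the patent.

```python
import hashlib

# Lookup-table approach: an upper-layer storage node keeps this table.
# key -> (consensus group, node list); contents are illustrative.
LOOKUP = {"series_a": (1, [1, 2, 3]), "series_b": (2, [2, 3, 4])}

# Consistent-hash approach: hash bucket -> node list, synchronized in the
# background across the cluster.
GROUP_NODES = {0: [1, 2, 3], 1: [2, 3, 4]}


def route_by_table(key: str) -> int:
    """Find the group in the lookup table; any member node can serve the read."""
    _group, nodes = LOOKUP[key]
    return nodes[0]


def route_by_hash(key: str, num_groups: int = 2) -> int:
    """Hash the request's key field to a group, then pick any member node."""
    bucket = int(hashlib.md5(key.encode()).hexdigest(), 16) % num_groups
    return GROUP_NODES[bucket][0]
```

The lookup table gives exact placement at the cost of an extra metadata store; consistent hashing avoids that store but fixes placement by computation.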
Specifically, the consensus service interface further comprises an add/delete replica interface.
The add/delete replica interface responds to an upper-layer consensus group member add/delete request by using the consensus algorithm to change the local consensus group lists of all distributed nodes involved before and after the membership change, and to write/delete the group's user data on the distributed node being added/removed.
That is, the add/delete replica interface is a management interface for a single consensus group, supporting membership changes of that group.
Correspondingly, FIG. 4 is a schematic diagram of the process of adding a consensus group member. As shown in FIG. 4, the upper layer first sends a creation request for the group to the add/delete consensus group interface of the node to be added, so that the state machine interface of the group is created locally on that node; the upper layer then sends a member-addition request to any normal node of the group (i.e., a node other than the one being added), so that this node, based on the consensus algorithm, changes the local consensus group lists of the nodes involved and writes the group's user data on the node being added.
Similarly, FIG. 5 is a schematic diagram of the process of deleting a consensus group member. As shown in FIG. 5, the upper layer first sends a member-deletion request to any normal node of the group (i.e., a node other than the one being removed), so that this node, based on the consensus algorithm, changes the local consensus group lists of the nodes involved and deletes the group's user data on the node being removed; the upper layer then sends a deletion request for the group to the add/delete consensus group interface of the node being removed, so that this node deletes its local state machine interface of the group.
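The ordering in FIGS. 4 and 5 is the important detail: when adding a member, the state machine is created first and the data arrives second; when removing one, the data is deleted first and the state machine last. A sketch under illustrative names:

```python
from typing import Any, Dict, List

# nodes: node id -> {group id -> local user data copy}; illustrative structure.
Nodes = Dict[int, Dict[int, List[Any]]]


def add_member(group_id: int, new_node: int, nodes: Nodes) -> None:
    """FIG. 4 order: create the state machine on the joining node first,
    then a normal member performs the membership change and data sync."""
    nodes[new_node][group_id] = []                      # step 1: state machine
    donor = next(n for n, sms in nodes.items()
                 if n != new_node and group_id in sms)
    nodes[new_node][group_id] = list(nodes[donor][group_id])  # step 2: data sync


def remove_member(group_id: int, old_node: int, nodes: Nodes) -> None:
    """FIG. 5 order: the membership change deletes the node's data first;
    deleting the state machine (modeled as dropping the entry) comes last."""
    del nodes[old_node][group_id]
```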
Specifically, the consensus service interface further comprises a start-stop interface.
The start-stop interface is used to start and stop the local consensus-layer RPC service under the instruction of the upper layer.
That is, when the cluster needs to start or stop providing service, the start-stop interface of the consensus service interface (a start function/stop function) can be invoked to start or release all resources.
In summary, the consensus service interface of the invention has the following characteristics:
(1) It decouples the interface from the implementation, hiding the complex details of the consensus algorithms and leaving room for adding more consensus algorithm implementations in the future;
(2) It supports multiple consensus groups, is friendly to multiple shards, makes it easy for the upper layer to distribute data evenly, and reduces uneven utilization of storage and computing resources;
(3) It supports consensus group membership changes, so the upper layer can balance load by migrating data according to resource utilization on different consensus group nodes.
In the invention, developers specify the consensus algorithm and drive the creation/deletion of consensus groups and their membership changes, while the distributed storage framework only implements the external interfaces. On the one hand, interfaces such as the add/delete consensus group interface and the add/delete replica interface are independent of the consensus algorithm type, which shields developers from the complex details of consensus algorithms to the greatest extent and leaves room for implementing more consensus algorithms in the future; on the other hand, it provides sufficient flexibility for user data management, achieving high scalability.
Assume the upper layer plans to build a 3-replica cluster with 4 nodes and 2 consensus groups in total. The build flow is as follows:
1. First, a consensus algorithm type is specified for the 4 nodes and the start function in the start-stop interface of each node is called.
2. Then two 3-replica consensus groups are created, with consensus group IDs 1 and 2. Consensus group 1 exists on nodes 1, 2 and 3, and consensus group 2 exists on nodes 2, 3 and 4.
3. Through calls to the add/delete consensus group interface, consensus group 1 is added to node 1, consensus groups 1 and 2 are added to nodes 2 and 3, and consensus group 2 is added to node 4.
4. User data of consensus group 1 is written; consistency among the nodes (node 1, node 2 and node 3) is guaranteed by the configurable consensus algorithm, achieving highly available redundant storage. The same holds for consensus group 2.
5. When querying the user data of consensus group 1, the read request is routed to one distributed node of consensus group 1, and the data is read according to the corresponding consistency policy. For example, under a strong consistency policy the node may read local data only after confirming with the other nodes that it is the leader, while under a weak consistency policy local data can be read directly.
6. If one replica is to be added to consensus group 1, i.e., node 4 joins consensus group 1, a membership change of the single consensus group is performed on any one of nodes 1, 2 or 3; the replica count of consensus group 1 becomes 4, i.e., all nodes then hold the data.
7. When the user no longer needs the cluster to provide service, the cluster can be shut down, i.e., the stop function of the consensus service interface is called to release all resources.
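The build flow above can be exercised end to end with a toy cluster. Every class and method name here is an illustrative assumption, and replication is simplified to direct appends standing in for the consensus algorithm.

```python
class ConsensusNode:
    """Toy node walking the build flow; replication is simplified."""

    def __init__(self, nid: int) -> None:
        self.nid = nid
        self.running = False
        self.groups = {}  # group id -> member node ids
        self.data = {}    # group id -> applied records

    def start(self, algorithm: str) -> None:   # step 1: specify algorithm, start RPC
        self.algorithm = algorithm
        self.running = True

    def add_group(self, gid: int, members) -> None:  # step 3: add consensus group
        self.groups[gid] = list(members)
        self.data[gid] = []

    def write(self, gid: int, record, cluster) -> None:  # step 4: replicated write
        for m in self.groups[gid]:
            cluster[m].data[gid].append(record)

    def stop(self) -> None:                    # step 7: release all resources
        self.running = False


# Steps 1-4 for the 4-node, 2-group, 3-replica plan described in the text.
cluster = {i: ConsensusNode(i) for i in range(1, 5)}
for node in cluster.values():
    node.start("RatisConsensus")                          # step 1
for gid, members in {1: [1, 2, 3], 2: [2, 3, 4]}.items():
    for m in members:
        cluster[m].add_group(gid, members)                # steps 2-3
cluster[1].write(1, "t0=25.3", cluster)                   # step 4
```

After step 4, each member of consensus group 1 holds the written record, so a read (step 5) can be served from any of them; step 6's membership change and step 7's shutdown follow the same interfaces.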
When developers want to select different consensus algorithms according to business scenario requirements, they only need to specify a consensus algorithm with the corresponding consistency level when starting the consensus service framework in the first step. The subsequent creation/deletion of consensus groups, membership changes, and consensus group data reads and writes are all independent of the consensus algorithm type, which shields developers from the complex details of consensus algorithms to the greatest extent and leaves room for adding more consensus algorithm implementations in the future.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus necessary general hardware platforms, or of course may be implemented by means of hardware. Based on this understanding, the foregoing technical solution may be embodied essentially or in a part contributing to the prior art in the form of a software product, which may be stored in a computer readable storage medium, such as ROM/RAM, a magnetic disk, an optical disk, etc., including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method described in the respective embodiments or some parts of the embodiments.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (9)

1. A consensus service interface for a distributed time series database, wherein the distributed time series database uses one consensus group to manage each share of user data, and the consensus service interface is deployed on each distributed node of the distributed time series database; the consensus service interface comprises: a creation interface, an add-delete consensus group interface, and a read-write interface;
The creation interface is used for accessing a consensus algorithm specified by an upper layer, wherein the upper layer is the application layer of the distributed time series database;
The add-delete consensus group interface is used for responding to an upper-layer consensus group add/delete request to locally create/delete a state machine interface corresponding to the corresponding consensus group;
The read-write interface is used for responding to an upper-layer consensus group user data write request so as to write the corresponding user data, by means of the consensus algorithm, into each state machine interface corresponding to the corresponding consensus group; it is also used for responding to a user data read request from a user side and reading the corresponding user data;
Wherein a consensus group has a corresponding state machine interface at each distributed node it contains;
the read-write interface further comprises a read-out interface;
The read-out interface is used for responding to a user data read request from a user side and computing, by means of consistent hashing, the consensus group that manages the corresponding user data; it then routes the user data read request to any distributed node in the computed consensus group, based on a consensus-group-to-distributed-node correspondence table synchronized in the background by all distributed nodes in the distributed time series database, so as to read the corresponding user data.
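The routing in claim 1 can be sketched as follows. This is a simplified, illustrative model: the hash-ring construction, the table contents, and the key format are assumptions of the sketch, not the patented implementation.

```python
import bisect
import hashlib

def _hash(key):
    # Stable hash for ring positions and data keys (md5 chosen arbitrarily).
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class ConsistentHashRouter:
    """Maps a user-data key to its consensus group via a hash ring, then
    picks a member node from the background-synchronized table."""

    def __init__(self, group_to_nodes):
        self.group_to_nodes = group_to_nodes   # synchronized correspondence table
        self.ring = sorted((_hash(f"group-{g}"), g) for g in group_to_nodes)

    def route_read(self, series_key):
        # First ring position at or after the key's hash, wrapping around.
        hashes = [h for h, _ in self.ring]
        i = bisect.bisect_left(hashes, _hash(series_key)) % len(self.ring)
        group = self.ring[i][1]
        # Any node of the group may serve the read; pick the first here.
        return group, self.group_to_nodes[group][0]

router = ConsistentHashRouter({1: ["node1", "node2", "node3"],
                               2: ["node2", "node3", "node4"]})
group, node = router.route_read("root.sg1.device1.temperature")
```

A consistent-hash ring (rather than plain modulo hashing) keeps most key-to-group assignments stable when consensus groups are added or removed.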
2. The consensus service interface for a distributed time series database according to claim 1, wherein the add-delete consensus group interface is further configured to update a local consensus group list based on the consensus group add/delete request.
3. The consensus service interface for a distributed time series database according to claim 2, wherein the read-write interface comprises a write interface;
The write interface is used for responding to an upper-layer consensus group user data write request by writing the corresponding user data into the local state machine interface corresponding to the corresponding consensus group, and by writing the corresponding user data into the state machine interfaces at the other distributed nodes of the corresponding consensus group based on the local consensus group list and the consensus algorithm.
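The write path of claim 3 can be sketched as below, under illustrative names: the receiving node applies the entry to its local state machine, then uses its local group list to push the entry to the other members. The direct appends stand in for what would really be the consensus algorithm's replication RPCs.

```python
def replicate_write(group_id, entry, local_node, group_list, state_machines):
    """Apply the entry locally, then replicate it to the other members'
    state machines; names and structures are assumptions of this sketch."""
    # group_list: local consensus group list, group id -> member node ids
    # state_machines: (node id, group id) -> list of applied entries
    state_machines[(local_node, group_id)].append(entry)   # local apply
    for peer in group_list[group_id]:
        if peer != local_node:
            # A real implementation would invoke the consensus algorithm's
            # replication RPC here; we model it as a direct append.
            state_machines[(peer, group_id)].append(entry)

group_list = {1: ["node1", "node2", "node3"]}
state_machines = {(n, 1): [] for n in group_list[1]}
replicate_write(1, "entry-0", "node1", group_list, state_machines)
```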
4. The consensus service interface for a distributed time series database according to claim 3, wherein the read-write interface further comprises a read-out interface;
The read-out interface is used for responding to a user data read request from a user side by looking up, in a lookup table, any distributed node in the consensus group that manages the corresponding user data; it is further used for routing the user data read request to that distributed node so as to read the corresponding user data;
Wherein the lookup table records the correspondence between user data, consensus groups, and distributed nodes.
5. The consensus service interface for a distributed time series database according to claim 2, further comprising: an add-delete replica interface;
The add-delete replica interface is used for responding to an upper-layer consensus group member add/delete request, so as to change the local consensus group lists of all distributed nodes involved before and after the addition/deletion of the corresponding consensus group member, and to write/delete, by means of the consensus algorithm, the user data of the corresponding consensus group at the distributed node to be added/deleted.
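The member-addition flow of claim 5 (step 6 of the embodiment) can be sketched as follows: the joining node first creates the group's state machine locally (as claim 8 requires), the group's existing user data is then replicated to it, and every involved node updates its local consensus group list. All structures and names below are illustrative assumptions.

```python
def add_replica(group_id, new_node, cluster):
    """Add new_node as a replica of group_id; a hypothetical sketch.

    cluster: node id -> {"groups": {gid: [member ids]},
                         "state":  {gid: [applied entries]}}
    """
    # Locate any current member to serve as the replication source.
    source_id = next(n for n, info in cluster.items()
                     if group_id in info["state"])
    members = cluster[source_id]["groups"][group_id]
    # 1. Create the state machine at the joining node before any data lands.
    cluster[new_node]["state"][group_id] = []
    # 2. Replicate the group's existing user data to the new member.
    cluster[new_node]["state"][group_id].extend(
        cluster[source_id]["state"][group_id])
    # 3. Change the local consensus group list on every involved node.
    new_members = members + [new_node]
    for node_id in new_members:
        cluster[node_id]["groups"][group_id] = list(new_members)

cluster = {
    "node1": {"groups": {1: ["node1", "node2", "node3"]}, "state": {1: ["e1", "e2"]}},
    "node2": {"groups": {1: ["node1", "node2", "node3"]}, "state": {1: ["e1", "e2"]}},
    "node3": {"groups": {1: ["node1", "node2", "node3"]}, "state": {1: ["e1", "e2"]}},
    "node4": {"groups": {}, "state": {}},
}
add_replica(1, "node4", cluster)
```

Deletion would mirror these steps in reverse order, per claim 9: first remove the member from every group list, then delete its user data, and finally delete its state machine.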
6. The consensus service interface for a distributed time series database according to claim 1, further comprising: a start-stop interface;
The start-stop interface is used for starting and stopping the local consensus layer RPC service under the instruction of the upper layer.
7. The consensus service interface for a distributed time series database according to claim 1, wherein an upper-layer consensus group user data write request is sent to only one distributed node of the corresponding consensus group.
8. The consensus service interface for a distributed time series database according to claim 5, wherein, before the user data of the corresponding consensus group is written at the distributed node to be added, the add-delete consensus group interface of the distributed node to be added locally creates a state machine interface corresponding to the corresponding consensus group at the request of the upper layer.
9. The consensus service interface for a distributed time series database according to claim 5, wherein, after the user data of the corresponding consensus group is deleted at the distributed node to be deleted, the add-delete consensus group interface of the distributed node to be deleted locally deletes the state machine interface corresponding to the corresponding consensus group at the request of the upper layer.
CN202310503870.XA 2023-05-06 2023-05-06 Consensus service interface for distributed time sequence database Active CN116737810B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310503870.XA CN116737810B (en) 2023-05-06 2023-05-06 Consensus service interface for distributed time sequence database

Publications (2)

Publication Number Publication Date
CN116737810A CN116737810A (en) 2023-09-12
CN116737810B true CN116737810B (en) 2024-06-25

Family

ID=87908738

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310503870.XA Active CN116737810B (en) 2023-05-06 2023-05-06 Consensus service interface for distributed time sequence database

Country Status (1)

Country Link
CN (1) CN116737810B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10282457B1 (en) * 2016-02-04 2019-05-07 Amazon Technologies, Inc. Distributed transactions across multiple consensus groups

Family Cites Families (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10402115B2 (en) * 2016-11-29 2019-09-03 Sap, Se State machine abstraction for log-based consensus protocols
CN109964446B (en) * 2018-06-08 2022-03-25 北京大学深圳研究生院 Consensus method based on voting
CN109582734A (en) * 2018-10-26 2019-04-05 西安居正知识产权运营管理有限公司 The consistency solution of distributed data base
CN110998580A (en) * 2019-04-29 2020-04-10 阿里巴巴集团控股有限公司 Method and apparatus for confirming transaction validity in blockchain system
US11614769B2 (en) * 2019-07-15 2023-03-28 Ecole Polytechnique Federale De Lausanne (Epfl) Asynchronous distributed coordination and consensus with threshold logical clocks
CN110517141B (en) * 2019-08-27 2023-06-13 深圳前海微众银行股份有限公司 Consensus method and device based on block chain system
CN111049902B (en) * 2019-09-16 2021-08-13 腾讯科技(深圳)有限公司 Data storage method, device, storage medium and equipment based on block chain network
CN111538785A (en) * 2020-04-23 2020-08-14 北京海益同展信息科技有限公司 Data writing method, device and system of block chain and electronic equipment
CN111858759B (en) * 2020-07-08 2021-06-11 平凯星辰(北京)科技有限公司 HTAP database system based on consensus algorithm
CN111858097A (en) * 2020-07-22 2020-10-30 安徽华典大数据科技有限公司 Distributed database system and database access method
CN112286889B (en) * 2020-09-22 2022-07-26 北京航空航天大学 Wide area network-oriented metadata copy synchronization method for distributed file system
EP4002786B1 (en) * 2020-11-11 2023-06-21 Deutsche Post AG Distributed ledger system
CN112597241A (en) * 2020-12-10 2021-04-02 浙江大学 Block chain-based distributed database storage method and system
CN112527647B (en) * 2020-12-15 2022-06-14 浙江大学 NS-3-based Raft consensus algorithm test system
CN112905615B (en) * 2021-03-02 2023-03-24 浪潮云信息技术股份公司 Distributed consistency protocol submission method and system based on sequence verification
CN114296831A (en) * 2021-12-30 2022-04-08 迅鳐成都科技有限公司 Dynamic loading method, device and system for block chain consensus algorithm and storage medium
CN114584577B (en) * 2022-03-08 2023-12-19 昆明理工大学 Block chain slicing asynchronous consensus method and system for processing data
CN115051985B (en) * 2022-04-01 2024-01-12 深圳瑞泰信资讯有限公司 Data consensus method of Bayesian-preemption fault-tolerant consensus protocol based on dynamic nodes
CN114862397B (en) * 2022-07-06 2022-09-30 国网天津市电力公司培训中心 Double-decoupling block chain distributed method based on double-chain structure


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant