CN106844399B - Distributed database system and self-adaptive method thereof - Google Patents


Info

Publication number
CN106844399B
Authority
CN
China
Prior art keywords
data
node
copy
nodes
fragment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201510890348.7A
Other languages
Chinese (zh)
Other versions
CN106844399A (en)
Inventor
郑国斌
肖旸
章恩华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
ZTE Corp
Original Assignee
ZTE Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ZTE Corp filed Critical ZTE Corp
Priority to CN201510890348.7A priority Critical patent/CN106844399B/en
Priority to PCT/CN2016/103964 priority patent/WO2017097059A1/en
Publication of CN106844399A publication Critical patent/CN106844399A/en
Application granted granted Critical
Publication of CN106844399B publication Critical patent/CN106844399B/en

Classifications

    • G PHYSICS — G06 COMPUTING; CALCULATING OR COUNTING — G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval of structured data, e.g. relational data
    • G06F16/25 Integrating or interfacing systems involving database management systems
    • G06F16/21 Design, administration or maintenance of databases

Abstract

The invention discloses a distributed database system and a self-adaptive method thereof. The system comprises a control node, a client API and data nodes. The control node manages the data nodes of the system, calculates the system's data routes and broadcasts them to the client API and the data nodes. The client API provides a read/write data interface for data visitors and forwards each received data operation request to the corresponding data node according to the locally cached data route. The data nodes store the data fragments and process received data operation requests according to the locally cached data route. The invention shortens the data access path and improves efficiency; because the data nodes are not divided into main and standby nodes, the system load is better balanced; and the data migration process is smoother and more uniform.

Description

Distributed database system and self-adaptive method thereof
Technical Field
The invention relates to the field of databases, in particular to a distributed database system and a self-adaptive method thereof.
Background
A distributed database is a database cluster system, generally formed by a plurality of data nodes with computing, storage and network communication functions. It offers high performance and high reliability, and is widely used in industries such as telecommunications, banking and the Internet. An existing distributed database consists of data access agent nodes and data storage nodes. The data storage nodes are divided into a plurality of data storage clusters according to data keywords; each cluster has one data storage main node and several data storage standby nodes. The main node provides read-write data service, the standby nodes provide only read data service, and data written to the main node is copied to the standby nodes. The data access agent node acts on behalf of the data visitor and forwards each data operation request to the corresponding data storage node of the corresponding data storage cluster for processing. Such a distributed database has many interdependent data nodes, which brings the following problems:
1. Access inefficiency
Existing distributed databases use dedicated data access proxy nodes, which lengthen the data visitor's access path and reduce processing efficiency;
2. Data capacity and load imbalance among nodes
Because the data storage nodes are divided into main and standby nodes, when the data write frequency is high, data can only be written to the main node, whose heavy load easily becomes a performance bottleneck; when a data node fails, its data can only be taken over by a single node or a few nodes (the standby nodes), which aggravates the load imbalance among nodes;
3. Data distribution is difficult to adjust and data migration is not smooth
Once data nodes are added or removed (and in a virtualized environment, elastic scaling of data nodes is the norm), the distribution of data across the nodes must be adjusted frequently. Adjusting the data distribution by manually executing commands or restarting takes a long time and brings considerable risk to the stable operation and service quality of the distributed database;
4. State maintenance is complex
One-way main-to-standby replication is used between the main and standby data storage nodes; when the main node fails, a new main node must be elected, so system state maintenance is complex;
for the above problems of distributed databases, the industry typically proceeds as follows: data is divided into a plurality of fragments according to the range or HASH value of a data keyword, and the fragments are evenly distributed to the data nodes by a consistent HASH algorithm, without considering inter-node uniformity for the distribution of each fragment's copies (backups). This consistent-HASH-based distribution brings a new problem: when nodes are added or removed, the number of fragments that must be adjusted is sometimes small and sometimes large, so the adjustment of data fragments among nodes is unpredictable and the number of migrated data fragments is uncontrollable.
Disclosure of Invention
The embodiment of the invention provides a distributed database system and a self-adaptive method thereof, which are used for solving the problems of unbalanced load among nodes, difficulty in adjustment of data distribution, unsmooth data migration and complex maintenance in the conventional distributed database system.
The invention discloses a distributed database system, which comprises a control node, a client API and data nodes, wherein:
The control node is used for managing the data nodes of the system, calculating the data route of the system and broadcasting the data route to the client API and the data nodes;
the client API is used for providing a read/write data interface for a data visitor and forwarding the received data operation request to a corresponding data node according to a locally cached data route;
and the data node is used for storing the data fragments and processing the received data operation request according to the data route of the local cache.
Preferably, the data nodes are deployed in the system in a manner of virtual machines or computing storage hosts.
Preferably, the client API is operated by a data visitor in a dynamic library or plug-in manner.
Preferably, the control node is configured to monitor the number and state changes of the data nodes in the system in real time, and execute node capacity expansion/capacity reduction operation when the number of the data nodes changes; and when the state of the data node changes, updating the state of the corresponding data node in the data route and broadcasting the updated data route.
Preferably, the client API is configured to calculate, according to a data keyword in the received data operation request, a data fragment corresponding to the requested data, and search a data node where each data fragment is located in a data route cached locally; and forwarding the data operation request to a corresponding data node according to a data node selection rule of the local cache.
Preferably, the data node is configured to, after receiving the data operation request, search, in a data route of a local cache, whether a data fragment in the data operation request is stored in the data node; when the data fragment is not stored in the data node, searching the data node where the data fragment is located in a data route of a local cache, and forwarding the data operation request to the found data node; and when the data fragments are stored in the data node, executing the data operation request and returning a data operation response to the data visitor.
Preferably, the data node is configured to report a self-state to the control node periodically; when the link changes, reporting the self state to the control node in real time;
the control node is used for periodically updating the data route.
Preferably, the data node is configured to perform a data recovery operation and a data copy operation;
the control node is configured to perform domain division on the data node according to a preset domain division rule.
The invention further discloses a self-adaptive method of the distributed database system, which executes the following steps after the system is powered on:
controlling the data routing of the node computing system and broadcasting the data routing to the client API and all the data nodes;
a client API receives a data operation request of an accessor, and forwards the request to a corresponding data node according to a data route of a local cache;
and the data node processes the received data operation request and returns a data operation response to the visitor.
Preferably, before the data routing of the computing system, the control node further performs the following steps:
and carrying out domain division on the data nodes according to a preset domain division rule.
Preferably, the domain division rule is: if the number of hosts/servers to which the data node belongs is 1, dividing the data node into a left domain or a right domain; if the number of the hosts/servers to which the data nodes belong is more than or equal to 2, dividing the data nodes into a left domain and a right domain according to the principle of uniform distribution of the hosts/servers to which the data nodes belong, and enabling the data nodes belonging to the same host/server to be located in the same domain.
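The domain division rule above can be sketched as follows. This is a hypothetical Python helper, not part of the patent; the host-to-node mapping structure and the alternating assignment are illustrative assumptions that merely satisfy the two stated constraints (hosts split as evenly as possible, and all nodes of one host in the same domain):

```python
def divide_domains(host_to_nodes):
    """Sketch of the domain-division rule.
    host_to_nodes: dict mapping host/server id -> list of data-node ids."""
    hosts = sorted(host_to_nodes)
    if len(hosts) == 1:
        # A single host: all of its data nodes go into one domain (left, by convention).
        return {"left": list(host_to_nodes[hosts[0]]), "right": []}
    left, right = [], []
    # Alternate hosts between the domains so the host counts differ by at most one,
    # keeping every node of a given host inside the same domain.
    for i, h in enumerate(hosts):
        (left if i % 2 == 0 else right).extend(host_to_nodes[h])
    return {"left": left, "right": right}
```

With the example from the description (nodes 1 and 2 on a first host, nodes 3 and 4 on a second), this yields left domain {1, 2} and right domain {3, 4}.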
Preferably, the control node calculates the number of data fragments to be distributed on each data node according to the number of data nodes and the number of data fragments of the system, and generates a data route.
Preferably, the step of forwarding the request to the corresponding data node by the client API according to the locally cached data route specifically includes:
calculating corresponding data fragments according to the data keywords in the data operation request;
searching a data node corresponding to each data fragment in a data route of a local cache;
and respectively forwarding the data operation requests to the found data nodes according to a preset data node selection rule.
Preferably, the data node selection rule is as follows:
when the number of the data nodes corresponding to the searched data fragments is 1, directly forwarding the data operation request to the data nodes;
when the number of data nodes corresponding to the searched data fragment is greater than 1, judging the type of the data operation request: if it is a write operation, checking the number of copies of the data fragment on each data node and the state of each data node, and sending the data operation request to a data node that is in a normal state and holds fewer copies; if it is a read operation, sending the data operation request to the data node with the smallest load.
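The selection rule can be sketched as below. The candidate record fields (`state`, `copy_count`, `load`) are assumptions introduced for illustration; the patent does not specify how node state and load are represented:

```python
def select_node(candidates, op_type):
    """Sketch of the data-node selection rule.
    candidates: list of dicts like
      {"id": ..., "state": "normal"/"failed", "copy_count": int, "load": float}."""
    if len(candidates) == 1:
        # Only one node holds the fragment: forward directly.
        return candidates[0]["id"]
    if op_type == "write":
        # Writes go to a normal-state node holding the fewest copies of the fragment.
        normal = [n for n in candidates if n["state"] == "normal"]
        return min(normal, key=lambda n: n["copy_count"])["id"]
    # Reads go to the least-loaded node.
    return min(candidates, key=lambda n: n["load"])["id"]
```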
Preferably, the data node processes the received data operation request by the following method:
searching whether the data fragments in the data operation request are stored in the data node or not in the data route of the local cache; if yes, executing the data operation request, and returning a data operation response to the data visitor; otherwise, searching the data node where the data fragment is located in the data route of the local cache, and forwarding the data operation request to the found data node.
Preferably, the execution data operation request specifically includes:
when the data operation request is write operation, adding, modifying or deleting the copy of the data fragment stored in the local according to the operation mode of the visitor;
and when the data operation request is a read operation, reading data from the local copy stored in the data fragment.
Preferably, when the data operation request is a write operation, after the data operation request is processed, a data copy process is executed, specifically:
recording data changed by the data fragments or full data;
and searching data nodes where the rest copies of the data fragments are located in the data route of the local cache, and copying the data or the full data changed by the data fragments to the data nodes where the rest copies of the data fragments are located.
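The copy step after a write can be sketched as follows: look up, in the locally cached data route, every other node holding a copy of the changed fragment, and ship the change (or full data) to each. The flat route structure and the returned "send plan" are simplifying assumptions; a real implementation would transmit over the network:

```python
def replicate(shard_id, change_log, local_node, route):
    """Sketch of the post-write copy step.
    route: dict shard_id -> list of node ids holding a copy of that fragment.
    Returns the (target_node, payload) pairs that would be sent."""
    # Every node holding a copy of this fragment, except ourselves, must receive
    # the changed data (or the full data, when a change log is unavailable).
    targets = [n for n in route[shard_id] if n != local_node]
    return [(n, change_log) for n in targets]
```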
Preferably, the control node further performs the following steps during the operation of the system:
monitoring in real time whether data nodes are added to or deleted from the system; if a data node is added, executing the node capacity expansion operation; if a data node is deleted, executing the node capacity reduction operation.
Preferably, the node capacity expansion operation specifically includes the following steps:
calculating a first copy data fragment list and a second copy data fragment list to be migrated to the newly added data node;
distributing a third copy for the data fragment to be migrated on the newly added data node, recalculating the data route of the system and broadcasting;
waiting for the newly added data node to recover the data;
receiving the self state reported by the newly added data node, recalculating the data route of the system according to a preset capacity expansion rule and broadcasting;
informing all data nodes to delete the third copies of all local data fragments;
and after the deletion of all the data nodes is confirmed to be completed, deleting the third copy in the local data route, recalculating the data route of the system and broadcasting.
Preferably, the step of calculating the first replica data fragment list and the second replica data fragment list to be migrated to the newly added data node specifically includes:
dividing the total number of the data fragments by the total number of the data nodes including the newly added data nodes, and calculating the average number of the data fragments to be stored by each data node;
subtracting the calculated average data fragment number from the current data fragment number of each data node, and calculating the data fragment number to be transferred from each original data node to the newly added data node;
and the first copies of all the data fragments to be migrated from the original data nodes form a first copy data fragment list of the newly added data nodes, and the second copies of all the data fragments to be migrated from the original data nodes form a second copy data fragment list of the newly added data nodes.
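The scale-out arithmetic in the three steps above can be sketched as follows (function and variable names are illustrative; integer division is assumed for the average, so a remainder lands on the new node):

```python
def shards_to_migrate(current_counts, total_shards):
    """Sketch of the capacity-expansion calculation.
    current_counts: dict original-node id -> fragment count before scale-out.
    Returns, per original node, how many fragments it should hand to the new node."""
    new_total_nodes = len(current_counts) + 1  # original nodes plus the added one
    average = total_shards // new_total_nodes  # fragments each node should keep
    # Each original node migrates its surplus over the new average.
    return {node: max(0, count - average) for node, count in current_counts.items()}
```

For example, with 16 fragments on 4 nodes (4 each) and one node added, the average becomes 3, so each original node migrates 1 fragment and the new node receives 4.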
Preferably, the preset capacity expansion rule is as follows:
informing the original data node to switch the first copy of the data fragment to be locally migrated to the newly added data node into the third copy; meanwhile, the newly added data node is informed to switch the third copy of the corresponding data fragment into the first copy;
informing the original data node to switch the second copy of the data fragment to be locally migrated to the newly added data node into a third copy; and simultaneously informing the newly added data node to switch the third copy of the corresponding data fragment into the second copy.
Preferably, the node capacity reduction operation specifically includes the following steps:
calculating a first copy data fragment list and a second copy data fragment list on each remaining node;
distributing a third copy for the data fragment to be migrated on the rest data nodes, recalculating the data route of the system and broadcasting;
waiting for the other data nodes to recover the data;
waiting for other data nodes to copy data;
receiving self states reported by other data nodes, recalculating the data route of the system and broadcasting according to a preset capacity reduction rule;
informing all data nodes to delete the third copies of all local data fragments;
and after the deletion of all the data nodes is confirmed to be completed, deleting the third copy in the local data route, recalculating the data route of the system and broadcasting.
Preferably, the step of calculating the first replica data fragment list and the second replica data fragment list on each remaining node specifically includes:
dividing the total number of the data fragments by the number of the remaining data nodes, and calculating the average number of the data fragments to be stored by each data node in the remaining data nodes;
subtracting the current data fragment number on each residual data node from the average data fragment number, and calculating the data fragment number to be migrated from the node to be closed on each residual data node;
and according to a preset data fragment distribution principle, distributing the first copy and the second copy of the data fragment on the data node to be deleted to the remaining data nodes to obtain a first copy data fragment list and a second copy data fragment list on each remaining node.
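The mirror-image scale-in arithmetic can be sketched the same way (again a hypothetical helper with integer division; any remainder is left to the distribution principle below):

```python
def shards_to_receive(remaining_counts, total_shards):
    """Sketch of the capacity-reduction calculation.
    remaining_counts: dict remaining-node id -> current fragment count.
    Returns, per remaining node, how many fragments it should take over
    from the node being closed."""
    average = total_shards // len(remaining_counts)  # new per-node target
    # Each remaining node absorbs its deficit below the new average.
    return {node: max(0, average - count) for node, count in remaining_counts.items()}
```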
Preferably, the preset capacity reduction rule is as follows:
informing the data node to be deleted to switch the first copy of the data fragment to be migrated into the third copy; simultaneously informing the residual data nodes storing the third copy of the data fragment to switch the third copy of the data fragment into the first copy;
informing the data node to be deleted to switch the second copy of the data fragment to be migrated into the third copy; and simultaneously informing the residual data nodes storing the third copy of the data fragment to switch the third copy of the data fragment into the second copy.
Preferably, the data fragment distribution principle is as follows:
the number of data fragments on each data node is as equal as possible;
the first copy and the second copy of each data fragment are distributed on data nodes in different domains; and
the second copies of all the first-copy data fragments on each data node are evenly distributed over all the data nodes of the other domain.
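The third constraint, spreading one node's second copies across the opposite domain, can be sketched with a simple round-robin placement. This is a toy assignment satisfying the stated principle, not the patent's actual algorithm:

```python
def place_second_copies(first_copy_shards, opposite_domain_nodes):
    """Sketch: spread the second copies of one node's first-copy fragments
    round-robin over the data nodes of the other domain."""
    placement = {}
    for i, shard in enumerate(first_copy_shards):
        # Round-robin keeps per-node counts within one of each other.
        placement[shard] = opposite_domain_nodes[i % len(opposite_domain_nodes)]
    return placement
```

This reproduces the example in the description: 10 first-copy fragments on a left-domain node and 2 right-domain nodes give each right-domain node 5 second copies.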
Preferably, the data node recovers the data by:
inquiring a local data route, and acquiring a data node where a third copy of the first copy data fragment on the node is located;
copying corresponding data fragments to the data node where the third copy is located;
and after the recovery is finished, reporting the self state to the control node.
Preferably, the added data node is a data node newly added to the system;
the deleted data node includes: the data nodes needing to be deleted because the burden is less than the preset value and the data nodes needing to be deleted because of receiving a user deleting instruction.
Preferably, the client API determines the number of the fragment holding the requested data by taking a HASH value of the data keyword and then taking that HASH value modulo the total number of data fragments.
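The fragment-lookup computation is a hash followed by a modulo. The patent does not name a hash function, so `zlib.crc32` stands in here as an assumed example:

```python
import zlib

def shard_of(key, total_shards):
    """Sketch of the fragment lookup: HASH(key) mod total fragment count.
    zlib.crc32 is an assumed stand-in for the unspecified HASH function."""
    return zlib.crc32(key.encode("utf-8")) % total_shards
```

Any deterministic hash works, as long as every client API and data node uses the same one, since routing correctness depends on all parties computing the same fragment number for a given key.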
Compared with the prior art, the invention needs no dedicated proxy access node, so the data access path is shorter and the efficiency higher; data fragments are stored and managed without dividing the data nodes into main and standby nodes, and the multiple copies of the same fragment replicate to one another, so the load among the nodes of the distributed database is better balanced; data routes are calculated and distributed automatically, the data migration process is controllable, and migration is smoother and more uniform, with no manual intervention and no interruption of access.
Drawings
FIG. 1 is a block diagram of a distributed database system according to the present invention;
FIG. 2 is a flow chart of a preferred embodiment of a distributed database system adaptation method of the present invention;
FIG. 3 is a flow chart of a preferred embodiment of a data node discovery process in the adaptive method for a distributed database system according to the present invention;
FIG. 4 is a flow chart of a preferred embodiment of a data node state management process in the adaptive method of the distributed database system of the present invention;
FIG. 5 is a flow diagram of a preferred embodiment of data replication in a distributed database system adaptation method of the present invention;
FIG. 6 is a flowchart illustrating a preferred embodiment of a node capacity expansion operation in the adaptive method for a distributed database system according to the present invention;
FIG. 7 is a flowchart of a preferred embodiment of a node capacity reduction operation in the adaptive method for a distributed database system according to the present invention;
FIG. 8 is a flow chart of a preferred embodiment of a data node recovery process in the distributed database system adaptation method of the present invention;
in order to make the technical solution of the present invention clearer, the following detailed description is made with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
FIG. 1 is a block diagram of a distributed database system according to the present invention; the system includes a control node 10, a client API 20, and data nodes 30, and this embodiment includes 4 data nodes 30; wherein:
the control node 10 is used for managing a data node 30 of the system, calculating the data route of the system and broadcasting the data route to the client API 20 and the data node 30; the method specifically comprises the following steps:
periodically updating data routing and broadcasting;
monitoring the number and state change of the data nodes 30 in the system in real time, and executing node capacity expansion/capacity reduction operation when the number of the data nodes 30 in the system changes;
when the state of the data node 30 changes, updating the state of the corresponding data node 30 in the data route and broadcasting the updated data route; and
according to a preset domain division rule, carrying out domain division on the data nodes 30;
the domain division rule is as follows:
if the number of hosts/servers to which the data nodes belong is 1, the data nodes are divided into a left domain or a right domain; if the number of hosts/servers is greater than or equal to 2, the data nodes are divided into a left domain and a right domain so that the hosts/servers are distributed as evenly as possible (that is, the numbers of hosts/servers in the left and right domains are as equal as possible), and data nodes belonging to the same host/server are placed in the same domain.
For example, as shown in fig. 1, 4 data nodes are numbered from left to right in sequence as 1-4; if the 4 data nodes belong to the same 1 host/server, dividing the 4 data nodes into a left domain or a right domain; if 4 data nodes belong to the same 2 hosts/servers, assuming that the data nodes numbered 1 and 2 belong to a first host/server and the data nodes numbered 3 and 4 belong to a second host/server; dividing data nodes 1 and 2 belonging to a first host/server into a left domain, and dividing data nodes 3 and 4 belonging to a second host/server into a right domain, so that each domain has 2 data nodes; or assuming that the data nodes numbered 1, 2 and 3 belong to a first host/server and the data node numbered 4 belongs to a second host/server, dividing the data nodes 1, 2 and 3 belonging to the first host/server into a left domain, and dividing the data node 4 belonging to the second host/server into a right domain, so that the left domain has 3 data nodes; the right domain has 1 data node;
in order to achieve balance of data fragmentation and data reliability, the control node 10 should compute data routes according to the following data fragmentation distribution principle:
the number of data fragments on each data node is the same as much as possible; and is
The first copy and the second copy of each data fragment are distributed on data nodes of different domains; and
the second copies of all the first copy data fragments on each data node are uniformly distributed on all the data nodes in different domains; for example, the current data node is located in the left domain, and there are 10 first copies of the data fragments on the current data node, and according to the distribution principle, the 10 second copies of the data fragments should be uniformly distributed on all the data nodes in the right domain, and assuming that there are 2 data nodes in the right domain, 5 of the 10 second copies of the data fragments are distributed on each data node in the right domain.
As shown in fig. 1, in this embodiment the distributed database system has 4 data nodes 30 and stores 16 data fragments in total. The first copies of the data fragments are marked 1 to 16 and the second copies are marked 1' to 16'; each data node 30 stores the first copies of 4 data fragments and the second copies of 4 data fragments, and the fragments it holds as first copies are completely different from those it holds as second copies.
The client API 20 is configured to provide an interface for reading/writing data for a data visitor, and send a received data operation request to a corresponding data node 30 according to a locally cached data route; the method comprises the following specific steps:
calculating the corresponding data fragment according to the data keyword in the received data operation request, and searching the locally cached data route for the data node 30 where each data fragment is located; the fragment number of the requested data may be computed by taking a HASH value of the data keyword and then taking that HASH value modulo the total number of data fragments, or the fragments may be divided according to prefix/suffix ranges of the data keyword;
forwarding the data operation request to the corresponding data node 30 according to the data node selection rule of the local cache;
the client API 20 is operated by a data visitor in a dynamic library/plug-in mode;
the data node 30 is deployed in the system as a virtual machine or a computing storage host and can be configured to belong to the left domain or the right domain; it is used for:
storing the data fragments;
the data fragmentation refers to that data are segmented into a plurality of fragments according to data keywords, the data of different fragments are different, each data fragment is provided with a first copy, a second copy and a third copy, the third copy is only temporarily used in the process of increasing and decreasing data nodes, the data among the copies are the same, and the copies of the same data fragment are stored on the data nodes in different domains according to the data fragmentation distribution principle;
caching the received data route and processing the received data operation request, wherein the data operation request comprises read and write operations; the method specifically comprises the following steps: after receiving the data operation request, searching whether the data fragment in the data operation request is stored in the data node 30 in the data route of the local cache; when the data fragment is not stored in the data node 30, searching the data node 30 where the data fragment is located in the data route of the local cache, and forwarding the data operation request to the found data node 30; when the data fragment is stored in the data node 30, the data operation request is executed, and a data operation response is returned to a data visitor;
when restarting or data routing changes, executing data recovery operation;
when the data fragment is changed, for example, the content of the data fragment is changed after the write operation is executed, the changed data or the full data is recorded, and the data copy operation is executed; copying the changed data or the full data to other data nodes 30 containing the same data fragment;
periodically reporting the self state to the control node 10; and reporting the self state to the control node 10 in real time when the link changes.
The topology of the distributed database system is hidden for the data accessor, and the decoupling of the distributed database and the data accessor is realized.
FIG. 2 is a flow chart of a preferred embodiment of the adaptive method for a distributed database system according to the present invention; the embodiment comprises the following steps:
step S101: the system is powered on, the control node 10 divides the domain of the data node 30 according to a preset domain division rule, then calculates the data route of the system and broadcasts the data route to the client API 20 and all the data nodes 30;
in this step, according to the number of data nodes 30, the number of data fragments, and a preset route calculation principle of the system, a first copy list and a second copy list of the data fragments that need to be distributed on each data node 30 are calculated, and a data route is generated.
The control node 10 is also responsible for data node discovery and state management in the system operation process, which are respectively shown in fig. 3 and 4;
step S102: after the system initialization is completed, the client API 20 receives a data operation request of an accessor;
step S103: calculating corresponding data fragments according to the data keywords in the data operation request;
the method comprises the following steps of determining the number of fragments requesting data by adopting a mode of taking a HASH value of a data keyword and then taking a module value of the total number of the data fragments from the HASH value; the data fragment can also be divided according to the prefix and suffix range of the data keyword;
step S104: searching a data node 30 corresponding to each data fragment in a data route of a local cache, and respectively forwarding the data operation request to the corresponding data node 30 according to a preset data node selection rule;
the data routing is a corresponding relationship between each data fragment and the data node 30.
The data node selection rule is as follows: when the number of data nodes 30 corresponding to the found data fragment is 1, the data operation request is forwarded directly to that data node 30;
when the number of data nodes 30 corresponding to the found data fragment is greater than 1, the type of the data operation request is judged: if it is a write operation, the copy number of the data fragment on each data node 30 and the state of each data node 30 are checked, and the request is sent to a data node 30 whose state is normal and whose copy number is smallest; if it is a read operation, the request is sent to the data node 30 with the lowest load.
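The selection rule above can be sketched as a small routing function. The `Node` structure and field names are illustrative assumptions; the sketch also assumes at least one candidate node is in the normal state for a write.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Node:
    name: str
    state: str        # "normal" or "abnormal"
    copy_number: int  # 1 = first copy, 2 = second copy, ...
    load: float


def select_node(nodes: List[Node], op: str) -> Node:
    """Pick the target node for a request per the patent's selection rule."""
    if len(nodes) == 1:
        return nodes[0]
    if op == "write":
        # write: prefer a healthy node holding the lowest-numbered copy
        candidates = [n for n in nodes if n.state == "normal"]
        return min(candidates, key=lambda n: n.copy_number)
    # read: pick the least-loaded node
    return min(nodes, key=lambda n: n.load)
```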
Step S105: the data node 30 that receives the data operation request searches the locally cached data route to determine whether the data fragment in the request is stored on this node; if yes, go to step S106; otherwise, go to step S107;
in this step, the node checks whether the data fragment of the requested data belongs to it by parsing the data keyword in the data operation request; if so, the data fragment corresponding to the requested data is stored on this data node 30; otherwise, it is not.
Step S106: executing the data operation request, returning a data operation response to the data accessor, and ending processing of the current data fragment;
in this step, the request for executing the data operation specifically includes:
when the data operation request is write operation, adding, modifying or deleting the copy of the data fragment stored locally according to the operation mode of the visitor;
and when the data operation request is a read operation, reading data from the copy of the data fragment stored locally.
In the present invention, when the data operation request is a write operation, the data replication process shown in fig. 5 is executed after the request is processed; that is, after the data node 30 modifies local data, the modified data must be copied to the data nodes 30 holding the other copies of the same fragment.
Step S107: searching the locally cached data route for the data node 30 where the data fragment is located, and forwarding the data operation request, according to the preset data node selection rule, to a corresponding data node with which communication is normal.
That is, if the data fragment corresponding to the data operation request is on this data node 30, the request is processed locally by reading or writing local data; otherwise, the request is forwarded to the corresponding node for processing.
FIG. 3 is a flowchart illustrating a preferred embodiment of a data node discovery process in the adaptive method for a distributed database system according to the present invention; the embodiment comprises the following steps:
step S201: the control node 10 monitors in real time whether a data node 30 is added to or deleted from the system; if a newly added data node 30 is found, step S202 is executed; if a deleted data node 30 is found, step S203 is executed;
a newly added data node is a data node newly joined to the system;
a deleted data node is either a data node to be deleted because its load is below a preset value, or a data node to be deleted because a user delete instruction was received.
Step S202: executing node capacity expansion operation, and finishing the current discovery processing;
the node capacity expansion operation is specifically shown in fig. 6;
step S203: and executing the node capacity reduction operation, and finishing the current discovery processing.
The node capacity reduction operation is shown in detail in fig. 7.
Fig. 4 is a flowchart illustrating a preferred embodiment of a data node state management process in the adaptive method for a distributed database system according to the present invention; the embodiment comprises the following steps:
step S301: the control node 10 receives the self state reported by the data node 30;
step S302: checking the state, if the state is normal, finishing the current state processing; if the result is abnormal, step S303 is executed;
step S303: the status of the data node 30 in the data route is updated and the updated data route is broadcast.
FIG. 5 is a flow chart of a preferred embodiment of data replication in the adaptive method of a distributed database system according to the present invention; the embodiment comprises the following steps:
step S301: the data node 30 executing the write operation records the changed data of the data fragment, or its full data, for the current write operation;
step S302: searching data nodes 30 where the rest copies of the data fragments are located in a data route of a local cache;
step S303: and copying the data changed by the data fragment or the full data to the data node 30 where the rest copies of the data fragment are located.
Copying the changed data or the full data to the data nodes 30 holding the other copies of the same fragment works in both directions: the data node 30 storing the first copy may accept the write and then copy the changed data or the full data to the data nodes 30 where the second and third copies of the fragment are located; equally, the data node 30 storing the second or third copy may accept the write and then copy the data to the data nodes 30 where the first and third copies, or the first and second copies, of the fragment are located. That is, mutual replication among data copies is allowed. The conflict that may arise when the same data of the same fragment is replicated between copies is resolved by comparing the update timestamps of the data, i.e., deciding whether to apply the change by merging and overwriting, or to abandon it.
In the data replication process, the data nodes receiving the replicated data may complete the corresponding data update either synchronously or asynchronously.
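The timestamp comparison used to resolve replication conflicts can be sketched as below. The function name, the dictionary-based record layout, and the "newest timestamp wins" tie-breaking are assumptions for illustration; the patent only specifies that update timestamps are compared to decide whether to apply or abandon a change.

```python
def apply_replicated_write(local: dict, key: str, value: str, ts: float) -> bool:
    """Apply an incoming replicated update only if it is newer than the local record.

    Returns True if the change was merged and overwritten, False if it was abandoned.
    """
    current = local.get(key)
    if current is not None and current["ts"] >= ts:
        return False  # the local copy is at least as new: abandon the change
    local[key] = {"value": value, "ts": ts}
    return True
```

With mutual replication between copies, two nodes may replicate writes to the same key to each other; applying this rule on both sides leaves them with the same final value.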
Fig. 6 is a flowchart of a preferred embodiment of a node capacity expansion operation in the adaptive method for a distributed database system according to the present invention; the embodiment comprises the following steps:
step S401: the control node 10 calculates a first copy data fragment list and a second copy data fragment list to be migrated to the newly added data node 30; the method specifically comprises the following steps:
dividing the total number of data fragments by the total number of data nodes, including the newly added data node 30, to obtain the average number of data fragments each data node should store; this average is smaller than the current number of data fragments on each original data node 30;
subtracting the calculated average from the current number of data fragments on each original data node 30 to obtain the number of data fragments to be migrated from each original data node 30 to the newly added data node 30;
a first copy of all data fragments to be migrated from the original data node 30 forms a first copy data fragment list of the newly added data node 30, and a second copy of all data fragments to be migrated from the original data node 30 forms a second copy data fragment list of the newly added data node 30; the data in the list at this time is empty;
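The per-node migration count in step S401 can be sketched as follows, assuming (as an illustration) integer division for the average and that each original node currently holds more fragments than the new average; the function name is not from the patent.

```python
def fragments_to_migrate(current_counts: list, total_fragments: int) -> list:
    """Per original node, how many fragments should move to the newly added node.

    current_counts[i] is the number of data fragments currently on original node i.
    """
    new_node_total = len(current_counts) + 1      # original nodes plus the new one
    average = total_fragments // new_node_total   # fragments each node should hold
    return [count - average for count in current_counts]
```

For example, with 16 fragments on 2 nodes (8 each) and one node added, the new average is 16 // 3 = 5, so each original node migrates 3 fragments to the new node.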
step S402: allocating a third copy on the newly added data node 30 for each data fragment to be migrated; recalculating the data route of the system and broadcasting it;
step S403: waiting for the newly added data node 30 to recover the data;
the data node data recovery process is shown in fig. 8;
step S404: receiving the self state reported by the newly added data node 30, recalculating the data route of the system according to a preset capacity expansion rule and broadcasting;
the preset capacity expansion rule is as follows:
informing the original data node 30 to switch the first copy of the data fragment to be locally migrated to the newly added data node 30 into the third copy; meanwhile, the newly added data node is informed to switch the third copy of the corresponding data fragment into the first copy;
informing the original data node 30 to switch the second copy of the data fragment to be locally migrated to the newly added data node 30 into the third copy; and simultaneously informs the newly added data node 30 to switch the third copy of the corresponding data slice to the second copy.
Step S405: notifying all data nodes 30 to delete the third copy of all local data fragments;
step S406: and after the deletion of all the data nodes 30 is confirmed, deleting the third copy in the local data route, recalculating the data route of the system and broadcasting.
Fig. 7 is a flowchart of a preferred embodiment of a node capacity reduction operation in the adaptive method for a distributed database system according to the present invention; the embodiment comprises the following steps:
step S501: the control node 10 calculates a first replica data fragment list and a second replica data fragment list of each remaining data node 30; the method specifically comprises the following steps:
dividing the total number of data fragments by the number of remaining data nodes 30 to obtain the average number of data fragments each remaining data node 30 should store; this average is larger than before the reduction;
subtracting the current number of data fragments on each remaining data node 30 from the average to obtain the number of data fragments each remaining data node 30 is to take over from the node to be closed;
according to a preset data fragment distribution principle, distributing a first copy and a second copy of a data fragment on a data node 30 to be deleted to the remaining data nodes 30 to obtain a first copy data fragment list and a second copy data list on each remaining node;
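The mirror-image calculation for capacity reduction in step S501 can be sketched in the same style; again, the integer-division average and the function name are assumptions for illustration.

```python
def fragments_to_receive(remaining_counts: list, total_fragments: int) -> list:
    """Per remaining node, how many fragments it takes over from the node being closed.

    remaining_counts[i] is the number of fragments currently on remaining node i.
    """
    average = total_fragments // len(remaining_counts)
    return [average - count for count in remaining_counts]
```

For example, with 16 fragments and 2 remaining nodes currently holding 5 each, the new average is 8, so each remaining node receives 3 fragments from the node being deleted.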
step S502: distributing a third copy for the data fragment to be migrated on the remaining data nodes 30, recalculating the data route of the system and broadcasting;
step S503: waiting for the remaining data nodes 30 to recover data;
the data node 30 data recovery process is shown in fig. 8;
step S504: waiting for the remaining data nodes 30 to copy the data;
the data node 30 replicates the data process as shown in fig. 5;
step S505: receiving the self state reported by the residual data nodes 30, recalculating the data route of the system and broadcasting according to the preset capacity reduction rule;
the preset capacity reduction rule is as follows:
informing the data node 30 to be deleted to switch the first copy of the data fragment to be migrated into the third copy; simultaneously informing the remaining data node 30 storing the third copy of the data fragment to switch that third copy into the first copy;
notifying the to-be-deleted data node 30 to switch the second copy of the to-be-migrated data segment to the third copy; and simultaneously informing the remaining data nodes 30 storing the third copy of the data fragment to switch the third copy of the data fragment into the second copy.
Step S506: notifying all data nodes 30 to delete the third copy of all local data fragments;
step S507: and after the deletion of all the data nodes 30 is confirmed, deleting the third copy in the local data route, recalculating the data route of the system and broadcasting.
Fig. 8 is a flowchart illustrating a preferred embodiment of a data node data recovery process in the adaptive method for a distributed database system according to the present invention; the embodiment comprises the following steps:
step S601: querying the local data route to obtain, for each first-copy data fragment on this node, the data node 30 where its third copy is located;
step S602: copying the corresponding data fragment to the data node 30 where the third copy is located;
the data node 30 receiving the data fragment stores it into the corresponding third copy;
step S603: and after all the first copy data fragments are recovered, reporting the self state to the control node 10.
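The recovery flow of steps S601 to S603 can be sketched as a small driver function. The route structure and the `copy_fragment` / `report_state` callbacks are illustrative assumptions standing in for the node's actual transfer and reporting mechanisms.

```python
def recover_node_data(first_copy_fragments, route, copy_fragment, report_state):
    """Push each first-copy fragment on this node to the node holding its third copy."""
    for fragment_id in first_copy_fragments:
        target = route[fragment_id]["third_copy_node"]  # step S601: query local route
        copy_fragment(fragment_id, target)              # step S602: copy the fragment
    report_state("recovery complete")                   # step S603: report to control node
```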
The above description is only a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention, and all equivalent structures or process changes made by using the contents of the specification and drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (24)

1. A distributed database system, characterized in that the system comprises a control node, a client API, and data nodes, wherein:
The control node is used for managing the data nodes of the system, calculating the data route of the system and broadcasting the data route to the client API and the data nodes;
the client API is used for providing a read/write data interface for a data visitor and forwarding the received data operation request to a corresponding data node according to a locally cached data route;
the data node is used for storing the data fragments and processing the received data operation request according to the data route of the local cache;
the client API is used for calculating data fragments corresponding to the request data according to the data keywords in the received data operation request, and searching the data node where each data fragment is located in the data route of the local cache; forwarding the data operation request to a corresponding data node according to a data node selection rule cached locally;
the data node is used for searching whether the data fragments in the data operation request are stored in the data node or not in a data route of a local cache after the data operation request is received; when the data fragment is not stored in the data node, searching the data node where the data fragment is located in a data route of a local cache, and forwarding the data operation request to the found data node; and when the data fragment is stored in the data node, executing the data operation request and returning a data operation response to a data visitor.
2. The distributed database system of claim 1, wherein the data nodes are deployed in the system as virtual machines or as compute storage hosts.
3. The distributed database system of claim 1, wherein the client API is run by data visitors as a dynamic library or plug-in.
4. The distributed database system of any of claims 1-3,
the control node is used for monitoring the number and state change of the data nodes in the system in real time and executing node capacity expansion/reduction operation when the number of the data nodes changes; and when the state of the data node changes, updating the state of the corresponding data node in the data route and broadcasting the updated data route.
5. The distributed database system of claim 1,
the data node is used for periodically reporting the self state to the control node; when the link changes, reporting the self state to the control node in real time;
and the control node is used for periodically updating the data route.
6. The distributed database system of claim 1, wherein the data nodes are configured to perform data recovery operations and data replication operations;
and the control node is used for carrying out domain division on the data node according to a preset domain division rule.
7. An adaptive method of a distributed database system, the method comprising, after the system is powered on, performing the steps of:
controlling the data routing of the node computing system and broadcasting the data routing to the client API and all the data nodes;
a client API receives a data operation request of an accessor, and forwards the request to a corresponding data node according to a data route of a local cache;
the data node processes the received data operation request and returns a data operation response to the visitor;
the step of forwarding the request to the corresponding data node by the client API according to the data route of the local cache specifically includes:
calculating corresponding data fragments according to the data keywords in the data operation request;
searching a data node corresponding to each data fragment in a data route of a local cache;
respectively forwarding the data operation requests to the found data nodes according to a preset data node selection rule;
the data node processes the received data operation request by the following method:
searching whether the data fragments in the data operation request are stored in the data node or not in the data route of the local cache; if yes, executing the data operation request, and returning a data operation response to the data accessor; otherwise, searching the data node where the data fragment is located in the data route of the local cache, and forwarding the data operation request to the found data node.
8. The adaptive method for a distributed database system according to claim 7, wherein the control node further performs the following steps prior to data routing for the computing system:
and carrying out domain division on the data nodes according to a preset domain division rule.
9. The adaptive method for a distributed database system according to claim 8, wherein the domain-dividing rule is: if the number of hosts/servers to which the data node belongs is 1, dividing the data node into a left domain or a right domain; if the number of the hosts/servers to which the data nodes belong is more than or equal to 2, dividing the data nodes into a left domain and a right domain according to the principle of uniform distribution of the hosts/servers to which the data nodes belong, and enabling the data nodes belonging to the same host/server to be located in the same domain.
10. The adaptive method for a distributed database system according to claim 7 or 8, wherein the control node calculates the number of data fragments to be distributed on each data node according to the number of data nodes and the number of data fragments of the system, and generates a data route.
11. The adaptive method for a distributed database system according to claim 7, wherein the data node selection rule is:
when the number of the data nodes corresponding to the searched data fragments is 1, directly forwarding the data operation request to the data nodes;
when the number of data nodes corresponding to the searched data fragment is larger than 1, judging the type of the data operation request, if the data operation request is write operation, checking the copy number of the data fragment in each data node and the state of the data node, and sending the data operation request to the data node with a normal state and a small copy number; and if the data operation request is read operation, sending the data operation request to the data node with the minimum load.
12. The adaptive method for a distributed database system according to claim 7, wherein the request to perform a data operation is specifically:
when the data operation request is write operation, adding, modifying or deleting the copy of the data fragment stored locally according to the operation mode of the visitor;
and when the data operation request is a read operation, reading data from the copy of the data fragment stored locally.
13. The adaptive method for a distributed database system according to claim 12, wherein when the data operation request is a write operation, after the data operation request is processed, a data replication process is performed, specifically:
recording data changed by the data fragments or full data;
and searching data nodes where the rest copies of the data fragments are located in the data route of the local cache, and copying the data or the full data changed by the data fragments to the data nodes where the rest copies of the data fragments are located.
14. The adaptive method of a distributed database system according to claim 7 or 8, wherein the control node further performs the following steps during the operation of the system:
monitoring in real time whether a data node is added to or deleted from the system; if a data node is newly added, executing a node capacity expansion operation; and if a data node is deleted, executing a node capacity reduction operation.
15. The adaptive method for a distributed database system according to claim 14, wherein the node capacity expansion operation specifically includes the steps of:
calculating a first copy data fragment list and a second copy data fragment list to be migrated to the newly added data node;
distributing a third copy for the data fragment to be migrated on the newly added data node, recalculating the data route of the system and broadcasting;
waiting for the newly added data node to recover the data;
receiving the self state reported by the newly added data node, recalculating the data route of the system according to a preset capacity expansion rule and broadcasting;
informing all data nodes to delete the third copies of all local data fragments;
and after the deletion of all the data nodes is confirmed to be completed, deleting the third copy in the local data route, recalculating the data route of the system and broadcasting.
16. The adaptive method for a distributed database system according to claim 15, wherein the step of calculating the first replica data fragment list and the second replica data fragment list to be migrated to the newly added data node comprises:
dividing the total number of the data fragments by the total number of the data nodes including the newly added data nodes, and calculating the average number of the data fragments to be stored by each data node;
subtracting the calculated average data fragment number from the current data fragment number of each data node, and calculating the data fragment number to be transferred from each original data node to the newly added data node;
and the first copies of all the data fragments to be migrated from the original data nodes form a first copy data fragment list of the newly added data nodes, and the second copies of all the data fragments to be migrated from the original data nodes form a second copy data fragment list of the newly added data nodes.
17. The adaptive method for a distributed database system according to claim 15, wherein the preset capacity expansion rule is:
informing the original data node to switch the first copy of the data fragment to be locally migrated to the newly added data node into the third copy; meanwhile, the newly added data node is informed to switch the third copy of the corresponding data fragment into the first copy;
informing the original data node to switch the second copy of the data fragment to be locally migrated to the newly added data node into a third copy; and simultaneously informing the newly added data node to switch the third copy of the corresponding data fragment into the second copy.
18. The adaptive method for a distributed database system according to claim 14, wherein the node capacity reduction operation comprises the steps of:
calculating a first copy data fragment list and a second copy data fragment list on each remaining node;
distributing a third copy for the data fragment to be migrated on the rest data nodes, recalculating the data route of the system and broadcasting;
waiting for the remaining data nodes to recover data;
waiting for the remaining data nodes to copy data;
receiving the self states reported by the remaining data nodes, recalculating the data route of the system according to a preset capacity reduction rule and broadcasting;
informing all data nodes to delete the third copies of all local data fragments;
and after the deletion of all the data nodes is confirmed to be completed, deleting the third copy in the local data route, recalculating the data route of the system and broadcasting.
19. The adaptive method for a distributed database system according to claim 18, wherein the step of calculating the first replica data fragment list and the second replica data fragment list on each of the remaining nodes is specifically:
dividing the total number of the data fragments by the number of the remaining data nodes, and calculating the average number of the data fragments to be stored by each data node in the remaining data nodes;
subtracting the current data fragment number on each residual data node from the average data fragment number, and calculating the data fragment number to be migrated from the node to be closed on each residual data node;
according to a preset data fragment distribution principle, distributing a first copy and a second copy of a data fragment on a data node to be deleted to the remaining data nodes to obtain a first copy data fragment list and a second copy data fragment list on each remaining node.
20. The adaptive method for a distributed database system according to claim 18, wherein the preset capacity reduction rule is:
informing the data node to be deleted to switch the first copy of the data fragment to be migrated into the third copy; simultaneously informing the residual data nodes storing the third copy of the data fragment to switch the third copy of the data fragment into the first copy;
informing the data node to be deleted to switch the second copy of the data fragment to be migrated into the third copy; and simultaneously informing the residual data nodes storing the third copy of the data fragment to switch the third copy of the data fragment into the second copy.
21. An adaptive method for a distributed database system according to claim 19, wherein the data shard distribution principle is:
the number of data fragments on each data node is as equal as possible; and
The first copy and the second copy of each data fragment are distributed on data nodes of different domains; and
the second copies of all the first replica data fragments on each data node are evenly distributed on all the data nodes in different domains.
22. An adaptive method for a distributed database system according to claim 17 or 18, wherein the data nodes recover data by:
inquiring a local data route, and acquiring a data node where a third copy of the first copy data fragment on the node is located;
copying corresponding data fragments to the data node where the third copy is located;
and after the recovery is finished, reporting the self state to the control node.
23. The adaptive method for a distributed database system according to claim 14,
the newly added data node is a data node newly added into the system;
the deleted data node includes: the data nodes needing to be deleted because the burden is less than the preset value and the data nodes needing to be deleted because of receiving a user deleting instruction.
24. The adaptive method for a distributed database system of claim 7, wherein the client API determines the number of fragments requesting data by taking a HASH value for the data key and then taking a modular value of the total number of data fragments for the HASH value.
CN201510890348.7A 2015-12-07 2015-12-07 Distributed database system and self-adaptive method thereof Active CN106844399B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201510890348.7A CN106844399B (en) 2015-12-07 2015-12-07 Distributed database system and self-adaptive method thereof
PCT/CN2016/103964 WO2017097059A1 (en) 2015-12-07 2016-10-31 Distributed database system and self-adaptation method therefor

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510890348.7A CN106844399B (en) 2015-12-07 2015-12-07 Distributed database system and self-adaptive method thereof

Publications (2)

Publication Number Publication Date
CN106844399A CN106844399A (en) 2017-06-13
CN106844399B true CN106844399B (en) 2022-08-09

Family

ID=59012671

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510890348.7A Active CN106844399B (en) 2015-12-07 2015-12-07 Distributed database system and self-adaptive method thereof

Country Status (2)

Country Link
CN (1) CN106844399B (en)
WO (1) WO2017097059A1 (en)

Families Citing this family (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106844399B (en) * 2015-12-07 2022-08-09 中兴通讯股份有限公司 Distributed database system and self-adaptive method thereof
CN107273187A (en) * 2017-06-29 2017-10-20 深信服科技股份有限公司 Reading position acquisition methods and device, computer installation, readable storage medium storing program for executing
CN107579865A (en) * 2017-10-18 2018-01-12 北京奇虎科技有限公司 Right management method, the apparatus and system of distributed code server
CN108073696B (en) * 2017-12-11 2020-10-27 厦门亿力吉奥信息科技有限公司 GIS application method based on distributed memory database
CN108319656A (en) * 2017-12-29 2018-07-24 中兴通讯股份有限公司 Realize the method, apparatus and calculate node and system that gray scale is issued
CN108845892A (en) * 2018-04-19 2018-11-20 北京百度网讯科技有限公司 Data processing method, device, equipment and the computer storage medium of distributed data base
CN108737534B (en) * 2018-05-11 2021-08-24 北京奇虎科技有限公司 Block chain-based data transmission method and device and block chain system
CN108664222B (en) * 2018-05-11 2020-05-15 北京奇虎科技有限公司 Block chain system and application method thereof
CN108712488B (en) * 2018-05-11 2021-09-10 北京奇虎科技有限公司 Data processing method and device based on block chain and block chain system
CN108881415B (en) * 2018-05-31 2020-11-17 广州亿程交通信息集团有限公司 Distributed real-time big data analysis system
CN109189561A (en) * 2018-08-08 2019-01-11 广东亿迅科技有限公司 A kind of transacter and its method based on MPP framework
CN109933568A (en) * 2019-03-13 2019-06-25 安徽海螺集团有限责任公司 A kind of industry big data platform system and its querying method
CN110175069B (en) * 2019-05-20 2023-11-14 广州南洋理工职业学院 Distributed transaction processing system and method based on broadcast channel
CN112214466A (en) * 2019-07-12 2021-01-12 海能达通信股份有限公司 Distributed cluster system, data writing method, electronic equipment and storage device
CN111090687B (en) * 2019-12-24 2023-03-10 腾讯科技(深圳)有限公司 Data processing method, device and system and computer readable storage medium
WO2021147926A1 (en) * 2020-01-20 2021-07-29 Huawei Technologies Co., Ltd. Methods and systems for hybrid edge replication
CN111291124A (en) * 2020-02-12 2020-06-16 杭州涂鸦信息技术有限公司 Data storage method, system and equipment thereof
CN111400112B (en) * 2020-03-18 2021-04-13 深圳市腾讯计算机系统有限公司 Writing method and device of storage system of distributed cluster and readable storage medium
CN111538772B (en) * 2020-04-14 2023-07-04 北京宝兰德软件股份有限公司 Data exchange processing method and device, electronic equipment and storage medium
CN111338806B (en) * 2020-05-20 2020-09-04 腾讯科技(深圳)有限公司 Service control method and device
CN111835848B (en) * 2020-07-10 2022-08-23 北京字节跳动网络技术有限公司 Data fragmentation method and device, electronic equipment and computer readable medium
CN113312005B (en) * 2021-06-22 2022-11-01 青岛理工大学 Block chain-based Internet of things data capacity expansion storage method and system and computing equipment
CN113535656B (en) * 2021-06-25 2022-08-09 中国人民大学 Data access method, device, equipment and storage medium
CN114237520B (en) * 2022-02-28 2022-07-08 广东睿江云计算股份有限公司 Ceph cluster data balancing method and system
CN117667944A (en) * 2023-12-12 2024-03-08 支付宝(杭州)信息技术有限公司 Copy capacity expansion method, device and system for distributed graph database

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103095806A (en) * 2012-12-20 2013-05-08 中国电力科学研究院 Load balancing management system for a large power grid real-time database system
CN103324539A (en) * 2013-06-24 2013-09-25 浪潮电子信息产业股份有限公司 Job scheduling management system and method
CN103475566A (en) * 2013-07-10 2013-12-25 北京发发时代信息技术有限公司 Real-time message exchange platform and distributed cluster establishment method
CN103838770A (en) * 2012-11-26 2014-06-04 中国移动通信集团北京有限公司 Logic data partition method and system
CN104317899A (en) * 2014-10-24 2015-01-28 西安未来国际信息股份有限公司 Big-data analyzing and processing system and access method
CN104333512A (en) * 2014-10-30 2015-02-04 北京思特奇信息技术股份有限公司 Distributed memory database access system and method
CN105007238A (en) * 2015-07-22 2015-10-28 中国船舶重工集团公司第七0九研究所 Implementation method and system for lightweight cross-platform message-oriented middleware
WO2017097059A1 (en) * 2015-12-07 2017-06-15 中兴通讯股份有限公司 Distributed database system and self-adaptation method therefor

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7761407B1 (en) * 2006-10-10 2010-07-20 Medallia, Inc. Use of primary and secondary indexes to facilitate aggregation of records of an OLAP data cube
CN104380690B (en) * 2012-06-15 2018-02-02 阿尔卡特朗讯 Framework for a privacy protection system for recommendation services
CN103780482B (en) * 2012-10-22 2017-06-27 华为技术有限公司 Content acquisition method, user equipment, and cache node
CN103078927B (en) * 2012-12-28 2015-07-22 合一网络技术(北京)有限公司 Key-value data distributed caching system and method thereof
CN104283906B (en) * 2013-07-02 2018-06-19 华为技术有限公司 Distributed memory system, clustered node and its section management method
CN103516809A (en) * 2013-10-22 2014-01-15 浪潮电子信息产业股份有限公司 High-scalability and high-performance distributed storage system structure
CN103870602B (en) * 2014-04-03 2017-05-31 中国科学院地理科学与资源研究所 Database spatial fragment replication method and system
CN104239417B (en) * 2014-08-19 2017-06-09 天津南大通用数据技术股份有限公司 Method and device for dynamic adjustment after data fragmentation in a distributed database
CN104615657A (en) * 2014-12-31 2015-05-13 天津南大通用数据技术股份有限公司 Expanding and shrinking method for distributed cluster with nodes supporting multiple data fragments

Also Published As

Publication number Publication date
WO2017097059A1 (en) 2017-06-15
CN106844399A (en) 2017-06-13

Similar Documents

Publication Publication Date Title
CN106844399B (en) Distributed database system and self-adaptive method thereof
US9904599B2 (en) Method, device, and system for data reconstruction
EP2498476B1 (en) Massively scalable object storage system
JP5952960B2 (en) Computer system, computer system management method and program
US8396936B2 (en) Computer system with cooperative cache
JP5701398B2 (en) Computer system, data management method and program
US9262323B1 (en) Replication in distributed caching cluster
US9367261B2 (en) Computer system, data management method and data management program
JP2004334574A (en) Operation managing program and method of storage, and managing computer
US20120278344A1 (en) Proximity grids for an in-memory data grid
JP5724735B2 (en) Database update control device, database management system, and database update control program
CN113268472B (en) Distributed data storage system and method
CN112199427A (en) Data processing method and system
CN113010496A (en) Data migration method, device, equipment and storage medium
US20210232314A1 (en) Standby copies withstand cascading fails
US20220391411A1 (en) Dynamic adaptive partition splitting
CN111400285A (en) MySQL data fragment processing method, apparatus, computer device and readable storage medium
JP2015064850A (en) Database monitoring device, database monitoring method, and computer program
JP5098700B2 (en) File exchange apparatus and file exchange method for information communication system
JP6007340B2 (en) Computer system, computer system management method and program
CN106534285A (en) Access method and device
JP5956364B2 (en) Cluster system
JP2015149076A (en) Management device, management system, and data management method
US20230342260A1 (en) Capacity-based redirection efficiency and resiliency
JP5713412B2 (en) Management device, management system, and management method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant