WO2021082465A1 - Method for ensuring data consistency and related device - Google Patents

Method for ensuring data consistency and related device

Info

Publication number
WO2021082465A1
WO2021082465A1 (PCT/CN2020/096005)
Authority
WO
WIPO (PCT)
Prior art keywords
node
data
metadata
identifier
cluster
Prior art date
Application number
PCT/CN2020/096005
Other languages
English (en)
Chinese (zh)
Inventor
孟俊才
徐鹏
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Publication of WO2021082465A1


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20: Information retrieval of structured data, e.g. relational data
    • G06F16/23: Updating
    • G06F16/2365: Ensuring data consistency and integrity
    • G06F16/27: Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor

Definitions

  • the invention relates to the technical field of computer distributed storage systems, and in particular to a method and related device for ensuring data consistency.
  • the Raft protocol is a distributed consensus protocol that takes a leader-based approach: a single elected leader directs the whole cluster. In the entire cluster, only the master node can process requests sent by clients; even if other nodes receive requests, they must forward them to the master node for processing.
  • the Raft protocol strongly relies on the master node to ensure cluster data consistency.
  • distributed locks have a very wide range of usage scenarios. For example, in a distributed system, when different devices access a shared resource, the system often needs a distributed lock to make access to the shared resource mutually exclusive and thereby ensure consistency; that is, only one node can hold the lock at a time.
  • the lock preemption can guarantee the uniqueness of the master node.
  • the embodiment of the invention discloses a method and related equipment for ensuring data consistency, which can ensure data consistency and avoid data conflicts without loss of database performance.
  • the present application provides, in a first aspect, a method for ensuring data consistency, including: a first node receives an upgrade message sent by a node management server, where the node management server is used to manage a node cluster and the node cluster includes the first node; the first node updates tenure management data, where the tenure management data includes a root metadata identifier and a tenure identifier, the root metadata identifier is used to determine root metadata, the root metadata is used to manage the metadata corresponding to the node cluster, and the tenure identifier is used to indicate that the first node is upgraded to the master node of the node cluster; and the first node sets the data corresponding to the node cluster to read-only mode, where the data includes the root metadata.
  • the first node upgrades to become the master node after receiving the upgrade message sent by the node management server, updates the tenure management data, and sets the data corresponding to the node cluster to read-only mode, ensuring that only one node can write data at any given time, thereby ensuring data consistency and avoiding data conflicts.
  • the entire process requires no negotiation with other nodes, so the performance of the system is preserved.
  • the first node updates the tenure identifier while reading the root metadata identifier.
  • the first node guarantees that reading the root metadata identifier and updating the tenure identifier are atomic, which can prevent other nodes, such as the original master node, from concurrently modifying the root metadata identifier during this process and thereby causing data conflicts and data inconsistency.
  • in a possible implementation, the node cluster further includes a second node, and the second node is used to read and write the data corresponding to the node cluster and to update the root metadata identifier; after the first node updates the tenure management data, the root metadata identifier is locked and the second node is prohibited from updating it.
  • before the first node updates the tenure management data, the second node (for example, the original master node) can read and write the data corresponding to the node cluster and can update the root metadata identifier. After the first node updates the tenure management data, the root metadata identifier is locked: the second node can continue to read and write data, but it is no longer allowed to modify the root metadata identifier. This ensures that in the subsequent process only one node can write data at any one time, which ensures data consistency.
  • the data corresponding to the node cluster includes root metadata, metadata, and user data
  • the metadata is used to manage the user data
  • the user data is data written to the node cluster; when the first node sets the data corresponding to the node cluster to read-only mode, it first sets the root metadata to read-only mode, then sets the metadata to read-only mode, and finally sets the user data to read-only mode.
  • the first node applies the setting layer by layer, putting the root metadata, metadata, and user data into read-only mode in turn, which ensures the consistency of the data corresponding to the node cluster and ensures that the first node can accurately find all user data written to the node cluster.
  • the first node updates the root metadata identifier and writes data to the node cluster.
  • after the first node is upgraded to become the new master node, it can write user data to the node cluster and can manage the written user data by updating the root metadata identifier.
  • the present application provides, in a second aspect, a first node, including: a receiving module, configured to receive an upgrade message sent by a node management server, where the node management server is used to manage a node cluster and the node cluster includes the first node;
  • an update module, configured to update tenure management data, where the tenure management data includes a root metadata identifier and a tenure identifier, the root metadata identifier is used to determine root metadata, the root metadata is used to manage the metadata corresponding to the node cluster, and the tenure identifier is used to indicate that the first node is upgraded to the master node of the node cluster; and a processing module, configured to set the data corresponding to the node cluster to read-only mode, where the data includes the root metadata.
  • the update module is further configured to read the root metadata identifier and update the tenure identifier at the same time.
  • in a possible implementation, the node cluster further includes a second node, the second node being used to read and write the data corresponding to the node cluster and to update the root metadata identifier; after the update module updates the tenure management data, the second node is prohibited from updating the root metadata identifier.
  • the data corresponding to the node cluster includes root metadata, metadata, and user data
  • the metadata is used to manage the user data
  • the User data is data written to the node cluster
  • the processing module is also used to first set the root metadata to read-only mode, then set the metadata to read-only mode, and finally set the user data to read-only mode.
  • the processing module is further configured to update the root metadata identifier and write data to the node cluster.
  • the present application provides, in a third aspect, a computing device, where the computing device includes a processor and a memory, the memory is configured to store program code, and the processor is configured to call the program code in the memory to execute the method provided by the first aspect or by any implementation of the first aspect.
  • the present application provides, in a fourth aspect, a computer-readable storage medium that stores a computer program; when the computer program is executed by a processor, it can implement the flow of the method provided by the first aspect or by any implementation of the first aspect.
  • the present application provides, in a fifth aspect, a computer program product; the computer program product includes instructions, and when the instructions are executed by a computer, the computer can execute the flow of the method provided by the first aspect or by any implementation of the first aspect.
  • the accompanying drawings used in the embodiments are briefly described as follows:
  • FIG. 1 is a schematic diagram of node state switching provided by an embodiment of the present application.
  • FIG. 2 is a schematic diagram of an application scenario provided by an embodiment of the present application.
  • FIG. 3 is a schematic flowchart of a method for ensuring data consistency provided by an embodiment of the present application.
  • FIG. 4 is a schematic diagram of a data storage relationship provided by an embodiment of the present application.
  • FIG. 5 is a schematic structural diagram of a first node provided by an embodiment of the present application.
  • FIG. 6 is a schematic structural diagram of a computing device provided by an embodiment of the present application.
  • A cloud database is a stable, reliable, and elastically scalable online database service. It is deployed in a virtual computing environment and managed through a unified management system, which can effectively reduce maintenance costs.
  • the computing and storage of the cloud database are separated.
  • the computing layer database does not store user data, but is only responsible for computing.
  • the storage layer uses a new storage system as a shared storage system.
  • Atomic write means that all write operations in an indivisible transaction must either complete together or roll back together.
  • Traditional storage systems manage data in units of blocks (for example, a block is 512 bytes).
  • without atomic write, a write may partially succeed and partially fail; or, in a concurrent scenario, the data written by two threads may overwrite each other. For example, if one thread needs to write 123 and another thread needs to write 456, concurrency may cause an interleaved value such as 126 to be written.
  • the function of atomic write is to ensure that a write either succeeds entirely or fails entirely at one time, so the above situations can be avoided.
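  • as an illustration only (not part of the disclosed embodiments), the following Go sketch shows one common way to obtain this all-or-nothing behavior for a single file write, assuming the rename operation is atomic within one filesystem; the file and function names are hypothetical.

```go
package main

import (
    "fmt"
    "os"
    "path/filepath"
)

// atomicWrite makes the whole write succeed or fail as a unit:
// data goes to a temporary file first, and rename, which is atomic
// within one filesystem, publishes it in a single step. Readers see
// either the old contents or the new ones, never an interleaved mix.
func atomicWrite(path string, data []byte) error {
    tmp, err := os.CreateTemp(filepath.Dir(path), ".tmp-*")
    if err != nil {
        return err
    }
    defer os.Remove(tmp.Name()) // harmless no-op after a successful rename
    if _, err := tmp.Write(data); err != nil {
        tmp.Close()
        return err
    }
    if err := tmp.Sync(); err != nil { // flush to stable storage first
        tmp.Close()
        return err
    }
    if err := tmp.Close(); err != nil {
        return err
    }
    return os.Rename(tmp.Name(), path)
}

func main() {
    fmt.Println(atomicWrite("example.dat", []byte("123")))
}
```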
  • Append only storage is a file system customized for new storage hardware such as solid state drives (SSD).
  • this file system provides basic operations such as append, delete, and seal; modification operations are not supported.
  • the seal operation is peculiar to the append only storage system: it sets a file to a read-only state so that no more data can be appended to it.
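  • the append/seal behavior can be illustrated with the following minimal Go sketch; the type and method names are hypothetical, and the in-memory buffer merely stands in for real storage hardware.

```go
package main

import (
    "errors"
    "fmt"
)

var errSealed = errors.New("file is sealed; append rejected")

// appendOnlyFile offers only the basic operations named above:
// append and seal. In-place modification is deliberately absent.
type appendOnlyFile struct {
    data   []byte
    sealed bool
}

func (f *appendOnlyFile) Append(p []byte) error {
    if f.sealed {
        return errSealed
    }
    f.data = append(f.data, p...)
    return nil
}

// Seal puts the file into a read-only state: no more data can be added.
func (f *appendOnlyFile) Seal() { f.sealed = true }

func main() {
    var f appendOnlyFile
    fmt.Println(f.Append([]byte("abc"))) // <nil>
    f.Seal()
    fmt.Println(f.Append([]byte("def"))) // file is sealed; append rejected
}
```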
  • data is divided into user data and metadata.
  • User data refers to the data that users actually need to read and write. It is stored in fixed-size files, and each file corresponds to a unique identifier (ID).
  • Metadata is the system data used to describe and manage the characteristics of a file, such as access permissions, the file owner, and file data block distribution information.
  • the distribution information includes the location of the file on the disk and the location of the disk in the cluster. A user who needs to manipulate a file must first obtain its metadata before locating the file and getting its content or related attributes.
  • in order to ensure the uniqueness of data writing, the master node can be selected by using a distributed consensus protocol, and the master node then performs all writes, ensuring that only one node can write data at any time and thus ensuring data consistency.
  • each node has three states: the standby state, the election state, and the master node state. All nodes are in the standby state at startup. If a message from the master node (such as heartbeat information) is not received within a preset time, a state switch occurs, and the node switches from the standby state to the election state. When a node is in the election state, it first votes for itself and then solicits votes from other nodes, that is, it requests other nodes to vote for it so that it becomes the master node. If a node obtains votes from more than half of the total number of nodes in the cluster, that node becomes the new master node, and the other nodes switch to the standby state.
  • the node in the master node state is the master node of the entire cluster, and all operations such as adding, modifying, and deleting system data can only be completed through the master node.
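  • the state switching described above can be sketched as follows in Go; this is a deliberate simplification that omits the term numbers, log replication, and vote bookkeeping a real Raft implementation requires, and all names are illustrative.

```go
package main

import "fmt"

type state int

const (
    standby  state = iota // waits for heartbeats from the master
    election              // votes for itself and solicits votes
    master                // the only node allowed to modify system data
)

// step applies the switching rules described above: a standby node
// that misses the heartbeat window enters the election state, and a
// candidate holding votes from more than half of the cluster becomes
// the master.
func step(s state, heartbeatSeen bool, votes, clusterSize int) state {
    switch s {
    case standby:
        if !heartbeatSeen {
            return election
        }
    case election:
        if votes > clusterSize/2 {
            return master
        }
    }
    return s
}

func main() {
    s := standby
    s = step(s, false, 0, 5) // no heartbeat within the preset time
    s = step(s, false, 3, 5) // 3 of 5 votes is a majority
    fmt.Println(s == master) // true
}
```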
  • the uniqueness of the master node can also be guaranteed through lock preemption.
  • each node applies for a distributed lock from ZooKeeper.
  • ZooKeeper can grant the lock to only one node, based on first-come-first-served order or on weight distribution, and the node that finally obtains the lock can write data.
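  • lock preemption can be sketched as follows in Go against a hypothetical coordination-service interface; the method names are illustrative and are not ZooKeeper's actual client API.

```go
package main

import "fmt"

// lockService is a hypothetical stand-in for a coordination service
// such as ZooKeeper; the interface is illustrative only.
type lockService interface {
    TryAcquire(node string) bool
}

// firstComeLock grants the lock to the first requester and to nobody
// else until it is released, so at most one node can hold it.
type firstComeLock struct{ holder string }

func (l *firstComeLock) TryAcquire(node string) bool {
    if l.holder == "" || l.holder == node {
        l.holder = node
        return true
    }
    return false
}

func main() {
    var svc lockService = &firstComeLock{}
    fmt.Println(svc.TryAcquire("node-1")) // true: node-1 may write data
    fmt.Println(svc.TryAcquire("node-2")) // false: the lock is already held
}
```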
  • however, acquiring the distributed lock takes a long time and occupies considerable bandwidth; in actual use, multiple nodes often need to negotiate, and the performance of the entire system suffers severe losses.
  • this application provides a method and related equipment for ensuring data consistency, which can ensure data consistency and avoid data conflicts without losing database performance.
  • the node may be a container, a virtual machine, a physical machine, etc., which is not limited in this application.
  • the distributed storage system 200 includes a node management server 210, a node cluster 220, and a storage device 230.
  • the node cluster 220 includes a master node 221, a standby node 222, a standby node 223, and a standby node 224. It should be understood that the node cluster 220 may also include more or fewer nodes; four nodes are used here as an example.
  • the storage device 230 includes a tenure management data storage unit 2310 and other data storage units 2320.
  • the tenure management data storage unit 2310 is used to store the tenure identifier and the root metadata identifier of the master node; the other data storage unit 2320 is used to store the root metadata, metadata, and user data.
  • the node management server 210 is used to monitor the nodes in the node cluster 220. When it detects that the master node 221 is abnormal, it selects a standby node, for example the standby node 222, to be upgraded to the new master node.
  • the master node 221 works in read-write mode, that is, the master node 221 can read data from the storage device 230 and can also write data to the storage device 230.
  • the standby node 222, the standby node 223, and the standby node 224 work in read-only mode, that is, they can only read the data in the storage device 230 and cannot write data.
  • after being upgraded, the new master node can update the tenure identifier of the master node in the tenure management data storage unit 2310 and can write data to the other data storage unit 2320.
  • after the standby node 222 updates the tenure identifier, the original master node 221 can determine that a new master node currently exists; the original master node 221 can then no longer write data, and it can terminate itself or switch its working mode to read-only mode.
  • the storage device 230 stores the tenure identifier, which ensures that the original master node can recognize that a new master node has been generated.
  • by terminating itself or switching working modes, the original master node avoids writing data at the same time as the new master node, thereby avoiding data conflicts and ensuring data consistency.
  • the method for ensuring data consistency includes but is not limited to the following steps:
  • S301 The first node receives the upgrade message sent by the node management server.
  • the first node may specifically be a virtual machine or a container, etc., running in a physical machine.
  • Multiple nodes form a cluster.
  • there is only one master node in the cluster, and the other nodes are standby nodes.
  • the master node can write data, and the other standby nodes cannot write data, which ensures data consistency.
  • a cluster composed of multiple nodes can be deployed in a cloud environment, specifically on one or more computing devices in the cloud environment (such as a central server); it can also be deployed in an edge environment, specifically on one or more edge computing devices in the edge environment, where an edge computing device may be a server.
  • the cloud environment refers to the central cluster of computing devices owned by a cloud service provider and used to provide computing, storage, and communication resources; the edge environment refers to a cluster of edge computing devices geographically far from the central cloud environment, likewise used to provide computing, storage, and communication resources.
  • the first node may be any standby node in the node cluster, such as the above-mentioned standby node 222. During the operation of the first node, receiving an upgrade message sent by the node management server indicates that the current master node may have failed or become abnormal, and the first node needs to be upgraded to become the new master node.
  • S302 The first node updates the tenure management data.
  • the first node needs to update the tenure management data after determining to upgrade to become the new master node.
  • the tenure management data includes a root metadata identifier and a tenure identifier.
  • the root metadata identifier is used to determine the root metadata; that is, the root metadata identifier can be used to determine the specific location where the root metadata is stored in the storage device.
  • the tenure identifier is used to indicate that the first node has been upgraded to become the new master node; that is, the tenure identifier changes with each change of master node, and whenever the node management server determines a new master node, the new master node updates the tenure identifier.
  • for example, if the tenure identifier is 5, the cluster has produced 5 master nodes; the new master node updates the tenure identifier to 6, indicating that it is the sixth master node produced by the cluster.
  • the data written by users is eventually written into files of a fixed size, and each file is assigned a unique identifier. As more data is written, more files are needed. In order to manage these files, some specific data needs to be used; these specific data are called metadata, and the metadata includes the identifiers of these files. Each metadata also has a unique identifier. Similarly, in order to facilitate the management of metadata, a root file needs to be used; the root file is also called root metadata, and the root metadata includes all metadata identifiers.
  • exemplarily, refer to FIG. 4.
  • all user data is stored in different files, such as file 1.1, file 1.2, and so on.
  • one metadata manages multiple files; for example, metadata 1 manages file 1.1, file 1.2, ... file 1.N, and metadata 2 manages file 2.1, file 2.2, ... file 2.N. All metadata is managed by the root metadata, and there is only one root metadata.
  • the root metadata corresponds to one identifier, and this identifier, together with the tenure identifier, belongs to the tenure management data and is stored in the tenure management data unit.
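  • the three-level layout of FIG. 4 can be restated as the following Go sketch; the field names are illustrative.

```go
package main

import "fmt"

// The three layers: root metadata lists metadata identifiers, each
// metadata lists file identifiers, and fixed-size files hold user data.
type file struct {
    id   string
    data []byte
}

type metadata struct {
    id      string
    fileIDs []string // e.g. "file-1.1", "file-1.2", ... "file-1.N"
}

type rootMetadata struct {
    id          string   // stored together with the tenure identifier
    metadataIDs []string // identifiers of every metadata object
}

func main() {
    root := rootMetadata{id: "root-1", metadataIDs: []string{"meta-1", "meta-2"}}
    meta := metadata{id: "meta-1", fileIDs: []string{"file-1.1", "file-1.2"}}
    f := file{id: "file-1.1", data: []byte("user data")}
    fmt.Println(root.id, meta.id, f.id) // walk root -> metadata -> file
}
```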
  • in this way, the original master node can determine that a new master node exists and avoid writing data again, so data conflicts are avoided and data consistency is ensured.
  • the first node updates the tenure identifier while reading the root metadata identifier.
  • the first node must ensure that reading the root metadata identifier and updating the tenure identifier occur at the same time, that is, that the operation is atomic; otherwise the operation is abandoned and then attempted again.
  • the original master node may have become unreachable because of network fluctuations or other causes, and the node management server may have elected the first node as the new master node. The original master node may return to normal after a while, but it cannot perceive that a new master node has appeared. At this time concurrency may occur, that is, the original master node may continue to write data and update the root metadata identifier. If the first node does not read the root metadata identifier and update the tenure identifier at the same time, there is a window of time between these two operations; for example, if the first node first reads the root metadata identifier and then updates the tenure identifier, data inconsistency may result.
  • if the original master node modifies the root metadata identifier while the first node is reading it, the root metadata identifier read by the first node and the root metadata identifier actually stored in the storage device will be inconsistent, and because the first node relies on the root metadata identifier to read and write data, this will eventually lead to data loss or data inconsistency. If instead the first node reads the root metadata identifier and modifies the tenure identifier at the same time, then, because the root metadata identifier is bound to the tenure identifier, the original master node must, when modifying the root metadata identifier, supply a tenure identifier that matches the stored one, and otherwise the modification will not succeed.
  • because the tenure identifier has already been updated, the original master node will not be able to successfully modify the root metadata identifier.
  • from the failed modification, the original master node can determine that there is a new master node; the original master node then stops modifying the root metadata identifier, avoiding data conflicts and ensuring data consistency.
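  • the following single-process Go sketch illustrates why binding the root metadata identifier to the tenure identifier fences out a stale master; a mutex stands in for whatever atomic primitive the storage device actually provides, and all names are illustrative.

```go
package main

import (
    "fmt"
    "sync"
)

// tenureRecord models the tenure management data: the root metadata
// identifier bound to the tenure identifier.
type tenureRecord struct {
    mu     sync.Mutex
    rootID string
    term   uint64
}

// takeOver atomically reads the root metadata identifier and bumps
// the tenure identifier, leaving no window in which another node
// could modify the root metadata identifier unnoticed.
func (r *tenureRecord) takeOver() (rootID string, term uint64) {
    r.mu.Lock()
    defer r.mu.Unlock()
    r.term++
    return r.rootID, r.term
}

// updateRoot succeeds only if the caller still holds the current
// tenure; a deposed master supplying a stale term is fenced out.
func (r *tenureRecord) updateRoot(newRootID string, callerTerm uint64) bool {
    r.mu.Lock()
    defer r.mu.Unlock()
    if callerTerm != r.term {
        return false // a newer master exists; abandon the write
    }
    r.rootID = newRootID
    return true
}

func main() {
    rec := &tenureRecord{rootID: "root-v1", term: 5}
    staleTerm := rec.term // the original master's view of the tenure
    _, newTerm := rec.takeOver() // first node takes over: term 5 -> 6
    fmt.Println(rec.updateRoot("root-v2", staleTerm)) // false: fenced out
    fmt.Println(rec.updateRoot("root-v2", newTerm))   // true: new master writes
}
```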
  • S303 The first node sets the root metadata to read-only mode.
  • the root metadata is set to a read-only mode, that is, the root metadata will not be allowed to be modified.
  • the original master node, when it does not perceive that there is a new master node, still writes data to the storage device. As shown in FIG. 4 above, the written data is stored in fixed-size files. If a file's storage space can no longer support continued storage, a new file is created for storage; at this time the metadata needs to be modified, and relevant information such as the identifier of the newly created file is added to the metadata. Similarly, when the storage space of the metadata is also full, new metadata needs to be created, and relevant information such as the identifier of the newly created metadata needs to be added to the root metadata. Because the first node has set the root metadata to read-only, the original master node will not be able to successfully modify the relevant information in the root metadata. At this time the original master node can determine that there is a new master node; it abandons this operation, stops writing data to the storage device, and avoids data conflicts, for example by terminating itself, so as to ensure data consistency.
  • S304 The first node sets all metadata to a read-only mode.
  • the first node sets all the metadata to the read-only mode, that is, all the metadata is not allowed to be modified.
  • S305 The first node sets all user data to a read-only mode.
  • the first node sets all files storing user data to read-only mode, that is, all files are no longer allowed to write data.
  • if the original master node needs to write data, it needs to write the data to the corresponding file, and the first node has set all files to read-only mode.
  • as a result, the original master node cannot write data to the storage device, that is, the write fails.
  • from the failed write, the original master node can confirm that there is a new master node; it abandons this operation, stops writing data to the storage device, and avoids data conflicts, for example by terminating itself, so as to ensure data consistency.
  • the first node applies the settings in a hierarchical manner, setting the root metadata, metadata, and user data to read-only mode in turn, which can avoid data loss, ensure data consistency, and ensure that all data already written to the storage device can be accurately found.
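  • the layered order of steps S303 to S305 can be sketched as follows in Go; the names are illustrative.

```go
package main

import "fmt"

type sealable struct {
    name   string
    sealed bool
}

func (s *sealable) seal() {
    s.sealed = true
    fmt.Println("sealed:", s.name)
}

// sealTopDown applies the layered order of S303 to S305: root metadata
// first, then every metadata object, then every user-data file, so a
// stale master is rejected at whichever layer it next tries to modify.
func sealTopDown(root *sealable, metadata, files []*sealable) {
    root.seal()
    for _, m := range metadata {
        m.seal()
    }
    for _, f := range files {
        f.seal()
    }
}

func main() {
    root := &sealable{name: "root metadata"}
    metas := []*sealable{{name: "metadata 1"}, {name: "metadata 2"}}
    files := []*sealable{{name: "file 1.1"}, {name: "file 2.1"}}
    sealTopDown(root, metas, files)
}
```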
  • S306 The first node updates the root metadata identifier and writes data to the storage device.
  • the first node starts to perform the functions of the master node after setting all files that store user data to read-only mode. If the first node needs to write user data, then, because the first node has set all existing files to read-only mode, the first node needs to create a new file to store the written data. Because the file is newly created, new metadata is needed, and the root metadata and the root metadata identifier then need to be updated; other nodes can only read the data in the storage device and cannot write data, which ensures data consistency.
  • steps S301 to S306 involved in the foregoing method embodiments are only schematic descriptions and summaries, and should not constitute specific limitations. The involved steps can be added, reduced, or combined as needed.
  • FIG. 5 is a schematic structural diagram of a first node provided by an embodiment of the present application.
  • the first node may be the first node in the method embodiment described in FIG. 3, and may execute the method and steps in the method embodiment described in FIG. 3 where the first node is the execution subject.
  • the first node 500 includes a receiving module 510, an update module 520, and a processing module 530, where:
  • the receiving module 510 is configured to receive an upgrade message sent by a node management server, where the node management server is used to manage a node cluster, and the node cluster includes the first node;
  • the update module 520 is configured to update tenure management data, the tenure management data includes root metadata identification and tenure identification, the root metadata identification is used to determine root metadata, and the root metadata is used to manage the node cluster Corresponding metadata, where the tenure identifier is used to characterize that the first node is upgraded to the master node of the node cluster;
  • the processing module 530 is configured to set the data corresponding to the node cluster to a read-only mode, and the data includes the root metadata.
  • the update module 520 is further configured to read the root metadata identifier and update the tenure identifier at the same time.
  • the node cluster further includes a second node, the second node being used to read and write the data corresponding to the node cluster and to update the root metadata identifier; after the update module 520 updates the tenure management data, the second node is prohibited from updating the root metadata identifier.
  • the data corresponding to the node cluster includes root metadata, metadata, and user data
  • the metadata is used to manage the user data
  • the user data is data written to the node cluster
  • the processing module 530 is further configured to set the metadata to the read-only mode after setting the root metadata to the read-only mode, and finally set the user data to the read-only mode.
  • processing module 530 is further configured to update the root metadata identifier and write data to the node cluster.
  • the receiving module 510 in the embodiment of the present application may be implemented by a transceiver or transceiver-related circuit components
  • the update module 520 and the processing module 530 may be implemented by a processor or processor-related circuit components.
  • each module in the first node may be added, reduced, or combined as needed.
  • the operation and/or function of each module in the first node is to realize the corresponding process of the method described in FIG. 3 above, and is not repeated here for brevity.
  • FIG. 6 is a schematic structural diagram of a computing device provided by an embodiment of the present application.
  • the computing device 600 includes a processor 610, a communication interface 620, and a memory 630, and the processor 610, the communication interface 620, and the memory 630 are connected to each other through an internal bus 640.
  • the computing device 600 may be a computing device in cloud computing or a computing device in an edge environment.
  • the processor 610 may be composed of one or more general-purpose processors, such as a central processing unit (CPU), or a combination of a CPU and a hardware chip.
  • the aforementioned hardware chip may be an application-specific integrated circuit (ASIC), a programmable logic device (PLD), or a combination thereof.
  • the above-mentioned PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), a general array logic (generic array logic, GAL), or any combination thereof.
  • the bus 640 may be a peripheral component interconnect standard (PCI) bus or an extended industry standard architecture (EISA) bus, etc.
  • the bus 640 can be divided into an address bus, a data bus, a control bus, and the like. For ease of presentation, only one thick line is used in FIG. 6, but it does not mean that there is only one bus or one type of bus.
  • the memory 630 may include volatile memory, such as random access memory (RAM); the memory 630 may also include non-volatile memory, such as read-only memory (ROM), flash memory, a hard disk drive (HDD), or a solid-state drive (SSD); the memory 630 may also include a combination of the above types.
  • the memory 630 may be used to store programs and data, so that the processor 610 can call the program code stored in the memory 630 to implement the aforementioned method for ensuring data consistency.
  • the program code may be used to implement the functional module of the first node shown in FIG. 5, or used to implement the method steps in the method embodiment shown in FIG. 3 with the first node as the execution subject.
  • the present application also provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program; when the computer program is executed by a processor, it can implement part or all of the steps of any method described in the above method embodiments.
  • the embodiment of the present invention also provides a computer program, which includes instructions; when the computer program is executed by a computer, the computer can execute part or all of the steps of any method for ensuring data consistency.
  • the disclosed device may be implemented in other ways.
  • the device embodiments described above are only illustrative; for example, the division of the above-mentioned units is only a logical function division, and there may be other divisions in actual implementation, for example, multiple units or components can be combined or integrated into another system, or some features can be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical or other forms.
  • the units described above as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit can be implemented in the form of hardware or software functional unit.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Computer Security & Cryptography (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

A method for ensuring data consistency and a related device. The method comprises the following steps: a first node receives an upgrade message sent by a node management server, the node management server being used to manage a node cluster, and the node cluster comprising the first node; the first node updates tenure management data, the tenure management data comprising a root metadata identifier and a tenure identifier, the root metadata identifier being used to determine root metadata, the root metadata being used to manage the metadata corresponding to the node cluster, and the tenure identifier being used to indicate that the first node is upgraded to a master node of the node cluster; and the first node sets the data corresponding to the node cluster to read-only mode, the data comprising the root metadata. The described method can ensure data consistency and prevent data conflicts.
PCT/CN2020/096005 2019-10-31 2020-06-14 Method for ensuring data consistency and related device WO2021082465A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911057345.X 2019-10-31
CN201911057345.XA CN112749178A (zh) Method for ensuring data consistency and related device

Publications (1)

Publication Number Publication Date
WO2021082465A1 (fr) 2021-05-06

Family

ID=75645771

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/096005 WO2021082465A1 (fr) 2020-06-14 Method for ensuring data consistency and related device

Country Status (2)

Country Link
CN (1) CN112749178A (fr)
WO (1) WO2021082465A1 (fr)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113282334A (zh) * 2021-06-07 2021-08-20 深圳华锐金融技术股份有限公司 Software defect recovery method and apparatus, computer device, and storage medium
CN113326251B (zh) * 2021-06-25 2024-02-23 深信服科技股份有限公司 Data management method, system, device, and storage medium
CN113448649B (zh) * 2021-07-06 2023-07-14 聚好看科技股份有限公司 Redis-based server and method for loading home page data
CN114844799A (zh) * 2022-05-27 2022-08-02 深信服科技股份有限公司 Cluster management method and apparatus, host device, and readable storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170272100A1 (en) * 2016-03-15 2017-09-21 Cloud Crowding Corp. Distributed Storage System Data Management And Security
CN109729129A (zh) * 2017-10-31 2019-05-07 华为技术有限公司 存储集群的配置修改方法、存储集群及计算机系统
CN110096237A (zh) * 2019-04-30 2019-08-06 北京百度网讯科技有限公司 副本处理方法及节点、存储系统、服务器、可读介质
CN110377577A (zh) * 2018-04-11 2019-10-25 北京嘀嘀无限科技发展有限公司 数据同步方法、装置、系统和计算机可读存储介质

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170272100A1 (en) * 2016-03-15 2017-09-21 Cloud Crowding Corp. Distributed Storage System Data Management And Security
CN109729129A (zh) * 2017-10-31 2019-05-07 华为技术有限公司 存储集群的配置修改方法、存储集群及计算机系统
CN110377577A (zh) * 2018-04-11 2019-10-25 北京嘀嘀无限科技发展有限公司 数据同步方法、装置、系统和计算机可读存储介质
CN110096237A (zh) * 2019-04-30 2019-08-06 北京百度网讯科技有限公司 副本处理方法及节点、存储系统、服务器、可读介质

Also Published As

Publication number Publication date
CN112749178A (zh) 2021-05-04

Similar Documents

Publication Publication Date Title
WO2021082465A1 (fr) Method for ensuring data consistency and related device
US11153380B2 (en) Continuous backup of data in a distributed data store
US11809726B2 (en) Distributed storage method and device
US11888599B2 (en) Scalable leadership election in a multi-processing computing environment
US10831614B2 (en) Visualizing restoration operation granularity for a database
US10579610B2 (en) Replicated database startup for common database storage
US9460185B2 (en) Storage device selection for database partition replicas
US20190188406A1 (en) Dynamic quorum membership changes
US9304815B1 (en) Dynamic replica failure detection and healing
KR101833114B1 (ko) Fast crash recovery for distributed database systems
US9424140B1 (en) Providing data volume recovery access in a distributed data store to multiple recovery agents
US10382380B1 (en) Workload management service for first-in first-out queues for network-accessible queuing and messaging services
US20240053886A1 (en) File operations in a distributed storage system
US11080253B1 (en) Dynamic splitting of contentious index data pages
JP2007072975A (ja) Apparatus, method, and program for dynamically switching the manner of writing transaction data to disk
WO2021057108A1 (fr) Data reading method, data writing method, and server
WO2021004256A1 (fr) Method for node switching upon node failure, and related device
US10223184B1 (en) Individual write quorums for a log-structured distributed storage system
US10785295B2 (en) Fabric encapsulated resilient storage
US10783134B2 (en) Polling process for monitoring interdependent hardware components
CN115168367B (zh) Big data configuration method and system
WO2020207078A1 (fr) Data processing method and device, and distributed database system
CN115599411A (zh) Service node update method and apparatus, electronic device, and storage medium
CN116820430A (zh) Asynchronous read/write method and apparatus, computer device, and storage medium
CN115510167A (zh) Distributed database system and electronic device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20883028

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20883028

Country of ref document: EP

Kind code of ref document: A1