WO2013078611A1 - Data processing method and device in distributed storage system, and client - Google Patents

Data processing method and device in distributed storage system, and client

Info

Publication number
WO2013078611A1
WO2013078611A1 · PCT/CN2011/083127 · CN2011083127W
Authority
WO
WIPO (PCT)
Prior art keywords
data
hash value
storage nodes
storage
client
Prior art date
Application number
PCT/CN2011/083127
Other languages
English (en)
French (fr)
Inventor
杨德平
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 filed Critical 华为技术有限公司
Priority to CN201180003080.0A priority Critical patent/CN103229480B/zh
Priority to PCT/CN2011/083127 priority patent/WO2013078611A1/zh
Publication of WO2013078611A1 publication Critical patent/WO2013078611A1/zh

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/10 Protocols in which an application is distributed across nodes in the network
    • H04L 67/104 Peer-to-peer [P2P] networks
    • H04L 67/1061 Peer-to-peer [P2P] networks using node-based peer discovery mechanisms
    • H04L 67/1065 Discovery involving distributed pre-established resource-based relationships among peers, e.g. based on distributed hash tables [DHT]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/10 Protocols in which an application is distributed across nodes in the network
    • H04L 67/1097 Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]

Definitions

  • The embodiments of the present invention relate to information storage technologies, and in particular to a data processing method and device in a distributed storage system, and a client.
  • In a distributed storage system based on a distributed hash table (DHT) overlay network, each storage node is assigned a globally unique key (Key), described by the DHT; the key values of all storage nodes form a closed, partitioned ring of the key space; and the data space each storage node is responsible for storing is one partition in the clockwise or counterclockwise direction from that node's key value.
  • According to the key value of the data to be stored or read, the client can look up in the DHT the partition in which that key value falls and the corresponding primary storage node, and then complete the storage or reading of the data.
  • To ensure high data reliability, a distributed storage system based on a DHT overlay network adopts a multi-copy storage strategy, that is, each piece of data is stored as multiple copies, and each data copy resides on a different storage node. For example, after the primary storage node of the data is determined, a corresponding number of storage nodes are selected in sequence in the clockwise or counterclockwise direction, and each of them stores one copy of the data.
  • However, because the backup relationships among storage nodes in such a system are complex, one storage node may store multiple copies of different data. As a result, when a storage node joins or leaves the system, the affected nodes may include, through backup relationships, the two storage nodes following that node in the clockwise direction and the two storage nodes following it in the counterclockwise direction, which increases the difficulty of system maintenance and scheduling.
  • The present invention provides a data processing method and device in a distributed storage system, and a client, which are used to reduce the difficulty of system maintenance and scheduling.
  • One aspect provides a data processing method in a distributed storage system, including: the client obtains a hash value of data according to feature information of the data; the client determines, according to a pre-created partition table and the hash value of the data, at least two storage nodes corresponding to the hash value of the data, where the at least two storage nodes correspond to different DHT overlay networks; and the client writes the data to each of the at least two storage nodes.
  • Another aspect provides a client that includes: an obtaining unit, configured to obtain a hash value of data according to feature information of the data; a determining unit, configured to determine, according to a pre-created partition table and the hash value of the data, at least two storage nodes corresponding to the hash value of the data, where the at least two storage nodes correspond to different DHT overlay networks; and a processing unit, configured to write the data to each of the at least two storage nodes.
  • Another aspect provides a data processing device in a distributed storage system, including: a determining unit, configured to determine, according to the number of data copies in the distributed storage system, at least two storage nodes corresponding to a hash value, where the number of the at least two storage nodes corresponding to the hash value equals the number of data copies in the distributed storage system, the at least two storage nodes corresponding to the hash value correspond to different DHT overlay networks, and the partitions of the different DHT overlay networks are consistent; and a creating unit, configured to create a partition table according to the hash value and the configuration information of the at least two storage nodes corresponding to the hash value, so that the client obtains a hash value of data according to feature information of the data, determines, according to the partition table and the hash value of the data, at least two storage nodes corresponding to the hash value of the data, and writes the data to each of the at least two storage nodes, where the at least two storage nodes correspond to different DHT overlay networks.
  • In the embodiments of the present invention, the client obtains the hash value of the data according to the feature information of the data and determines, according to the pre-created partition table and the hash value of the data, at least two storage nodes corresponding to the hash value of the data, where the at least two storage nodes correspond to different DHT overlay networks, so that the client can write the data to each of the at least two storage nodes. Because only one copy of the data is stored on each of the at least two storage nodes in the system, the backup relationships are simple. This avoids the problem in the prior art in which, because the backup relationships among storage nodes in the system are complex, one storage node may store multiple copies of different data, so that when a storage node joins or leaves the system the affected nodes may include, through backup relationships, the two storage nodes following that node in the clockwise direction and the two storage nodes following it in the counterclockwise direction; the difficulty of system maintenance and scheduling is thereby reduced.
  • FIG. 1 is a schematic flowchart of a data processing method in a distributed storage system according to an embodiment of the present invention
  • FIG. 2 is a schematic flowchart of a data processing method in a distributed storage system according to another embodiment of the present invention.
  • FIG. 3 is a schematic diagram of a partition table involved in a data processing method in a distributed storage system provided by the embodiment corresponding to FIG. 1 and FIG. 2;
  • FIG. 4 is a schematic diagram of the distributed storage system involved in the data processing methods provided by the embodiments corresponding to FIG. 1 and FIG. 2;
  • FIG. 5 is a schematic structural diagram of a client according to another embodiment of the present invention.
  • FIG. 6 is a schematic structural diagram of a data processing device in a distributed storage system according to another embodiment of the present invention.
  • FIG. 7 is a schematic structural diagram of a data processing device in a distributed storage system according to another embodiment of the present invention.
  • To make the objectives, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings in the embodiments of the present invention.
  • Apparently, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
  • FIG. 1 is a schematic flowchart of a data processing method in a distributed storage system according to an embodiment of the present invention. As shown in FIG. 1, the data processing method in the distributed storage system of this embodiment may include:
  • 101. The client obtains a hash value of the data according to the feature information of the data.
  • The feature information may be, for example, the file name of the data, digest information of the data, or the content of the data, that is, any information that is related to the data and can identify it.
  • Before 101, this embodiment may further include the step of creating the partition table.
  • For example, the at least two storage nodes corresponding to each hash value may be determined according to the number of data copies in the distributed storage system, where the number of the at least two storage nodes corresponding to a hash value equals the number of data copies in the distributed storage system, the at least two storage nodes corresponding to the hash value correspond to different DHT overlay networks, and the partitions of the different DHT overlay networks are consistent; the partition table is then created according to each hash value and the configuration information of the at least two storage nodes corresponding to it.
  • After the partition table is created, it may be stored in the network so that the client and every storage node corresponding to the different DHT overlay networks share the partition table, or it may be distributed directly to the client and to every storage node corresponding to the different DHT overlay networks, so that they all obtain the partition table.
  • After a storage node corresponding to one of the different DHT overlay networks obtains the partition table, it may obtain, according to the partition table, the hash values corresponding to its own configuration information and determine the other storage nodes corresponding to those hash values, that is, its backup storage nodes; the storage node can then complete an initialization operation according to its own data and the data of the other storage nodes.
  • The configuration information of a storage node may include, but is not limited to, at least one of an IP address, a communication port, and a storage space.
  • For example, a storage node can communicate with its backup storage node and, based on its own data and the backup storage node's data, check whether the two are synchronized. If the data is synchronized, the storage node completes the initialization operation; if not, the storage node synchronizes the data to complete the initialization operation.
  • Optionally, to meet reliability requirements, when the partition table is created the same hash value in different overlay networks may correspond to storage nodes on different storage servers, so that the at least two storage nodes corresponding to the same hash value are physically isolated.
  • In a specific implementation, further isolation factors can be considered, such as rack isolation, network isolation, or power-supply isolation.
  • 102. The client determines, according to the pre-created partition table and the hash value of the data, at least two storage nodes corresponding to the hash value of the data, where the at least two storage nodes correspond to different DHT overlay networks. Optionally, in 102, the at least two storage nodes may be located on different storage servers, which improves the reliability of data processing.
  • 103. The client writes the data to each of the at least two storage nodes.
  • It should be noted that, in the embodiments of the present invention, a DHT overlay network has the following characteristics: each storage node is assigned a globally unique key (Key), described by the DHT; the key values of all storage nodes form a closed, partitioned ring of the key space; and the data space each storage node is responsible for storing is one partition in the clockwise or counterclockwise direction from that node's key value.
  • In this embodiment, the client obtains the hash value of the data according to the feature information of the data and determines, according to the pre-created partition table and the hash value of the data, at least two storage nodes corresponding to the hash value of the data, where the at least two storage nodes correspond to different DHT overlay networks, so that the client can write the data to each of the at least two storage nodes. Because only one copy of the data is stored on each of the at least two storage nodes in the system, the backup relationships are simple. This avoids the problem in the prior art in which, because the backup relationships among storage nodes in the system are complex, one storage node may store multiple copies of different data, so that when a storage node joins or leaves the system the affected nodes may include, through backup relationships, the two storage nodes following that node in the clockwise direction and the two storage nodes following it in the counterclockwise direction; the difficulty of system maintenance and scheduling is thereby reduced.
  • FIG. 2 is a schematic flowchart of a data processing method in a distributed storage system according to another embodiment of the present invention. Based on the embodiment corresponding to FIG. 1, the data processing method in the distributed storage system of this embodiment may further include:
  • 201. The client obtains the hash value of the data according to the feature information of the data.
  • 202. The client determines, according to the pre-created partition table and the hash value of the data, at least two storage nodes corresponding to the hash value of the data, where the at least two storage nodes correspond to different DHT overlay networks. Similarly, in 202, the at least two storage nodes may be located on different storage servers, which improves the reliability of data processing.
  • 203. The client selects one of the at least two storage nodes and reads the data written to the selected storage node.
  • In this embodiment, the client obtains the hash value of the data according to the feature information of the data and determines, according to the pre-created partition table and the hash value of the data, at least two storage nodes corresponding to the hash value of the data, where the at least two storage nodes correspond to different DHT overlay networks, so that the client can select one of the at least two storage nodes and read the data written to the selected storage node. Because only one copy of the data is stored on each of the at least two storage nodes in the system, the backup relationships are simple, which avoids the prior-art problem in which, because complex backup relationships mean one storage node may store multiple copies of different data, a storage node joining or leaving the system affects, through backup relationships, the two storage nodes following it in the clockwise direction and the two storage nodes following it in the counterclockwise direction; the difficulty of system maintenance and scheduling is thereby reduced.
  • For example, according to the number of data copies in the distributed storage system, 3, the at least two storage nodes corresponding to each hash value are determined, where the number of the storage nodes corresponding to a hash value, 3, equals the number of data copies in the distributed storage system; the storage nodes corresponding to the hash value correspond to different DHT overlay networks, namely DHT overlay network 1, DHT overlay network 2, and DHT overlay network 3, and the partitions of the different DHT overlay networks are consistent. For example, uniform partitioning may be used, in which the space is divided into N equal parts, or non-uniform partitioning may be used; this embodiment does not limit the choice. The partition table is then created according to each hash value and the configuration information of the at least two storage nodes corresponding to it, as shown in FIG. 3.
  • The created partition table needs to be persisted, for example by storing it on a hard disk.
  • The created partition table can be distributed to all storage nodes and to all clients. After a storage node obtains the partition table, it performs the initialization operation; for details, refer to the related content in the embodiment corresponding to FIG. 1, which is not repeated here.
  • The client can determine, according to the obtained pre-created partition table and the hash value of the data 1 to be written, the three storage nodes corresponding to the hash value of data 1, for example storage node 1, storage node 4, and storage node 7, which correspond to three different DHT overlay networks, namely DHT overlay network 1, DHT overlay network 2, and DHT overlay network 3. To improve the reliability of data processing, storage node 1, storage node 4, and storage node 7 can be located on different storage servers, namely storage server 1, storage server 2, and storage server 3, as shown in FIG. 4; finally, the client can write data 1 to each of storage node 1, storage node 4, and storage node 7.
  • Similarly, for reading, the client can determine, according to the pre-created partition table and the hash value of data 1, the three storage nodes corresponding to the hash value of data 1, for example storage node 1, storage node 4, and storage node 7, which correspond to the three different DHT overlay networks, namely DHT overlay network 1, DHT overlay network 2, and DHT overlay network 3; finally, the client can select any one of storage node 1, storage node 4, and storage node 7, and read the data 1 written to the selected storage node.
  • FIG. 5 is a schematic structural diagram of a client according to another embodiment of the present invention.
  • The client in this embodiment may include an obtaining unit 51, a determining unit 52, and a processing unit 53.
  • The obtaining unit 51 is configured to obtain a hash value of the data according to the feature information of the data.
  • The determining unit 52 is configured to determine, according to the pre-created partition table and the hash value of the data, at least two storage nodes corresponding to the hash value of the data, where the at least two storage nodes correspond to different DHT overlay networks; the processing unit 53 is configured to write the data to each of the at least two storage nodes.
  • The functions of the client in the embodiments corresponding to FIG. 1 and FIG. 2 can be implemented by the client provided in this embodiment.
  • The obtaining unit 51 is further configured to obtain the hash value of the data according to the feature information of the data;
  • the determining unit 52 is further configured to determine, according to the pre-created partition table and the hash value of the data, the at least two storage nodes corresponding to the hash value of the data, where the at least two storage nodes correspond to different DHT overlay networks;
  • the processing unit 53 is further configured to select one of the at least two storage nodes and read the data written to the selected storage node.
  • In this embodiment, the client obtains the hash value of the data through the obtaining unit according to the feature information of the data, and determines, through the determining unit according to the pre-created partition table and the hash value of the data, at least two storage nodes corresponding to the hash value of the data, where the at least two storage nodes correspond to different DHT overlay networks, so that the processing unit can write the data to each of the at least two storage nodes, or can select one of the at least two storage nodes and read the data written to the selected storage node. Because only one copy of the data is stored on each of the at least two storage nodes in the system, the backup relationships are simple, which avoids the prior-art problem in which, because complex backup relationships mean one storage node may store multiple copies of different data, a storage node joining or leaving the system affects, through backup relationships, the two storage nodes following it in the clockwise direction and the two storage nodes following it in the counterclockwise direction; the difficulty of system maintenance and scheduling is thereby reduced.
  • FIG. 6 is a schematic structural diagram of a data processing device in a distributed storage system according to another embodiment of the present invention.
  • The data processing device in the distributed storage system of this embodiment may include a determining unit 61 and a creating unit 62.
  • The determining unit 61 determines, according to the number of data copies in the distributed storage system, at least two storage nodes corresponding to a hash value, where the number of the at least two storage nodes corresponding to the hash value equals the number of data copies in the distributed storage system, the at least two storage nodes corresponding to the hash value correspond to different DHT overlay networks, and the partitions of the different DHT overlay networks are consistent.
  • The creating unit 62 creates a partition table according to the hash value and the configuration information of the at least two storage nodes corresponding to the hash value, so that the client obtains the hash value of data according to the feature information of the data, determines, according to the partition table and the hash value of the data, at least two storage nodes corresponding to the hash value of the data, and writes the data to each of the at least two storage nodes, where the at least two storage nodes correspond to different DHT overlay networks.
  • Optionally, the data processing device in the distributed storage system provided by this embodiment may further include a sending unit 71, configured to send the partition table to the client and to the storage nodes corresponding to the different DHT overlay networks, so that those storage nodes obtain, according to the partition table, the hash value corresponding to their own configuration information, determine the other storage nodes corresponding to that hash value, and complete the initialization operation according to their own data and the data of the other storage nodes.
  • In this embodiment, the determining unit determines, according to the number of data copies in the distributed storage system, at least two storage nodes corresponding to a hash value, and the creating unit creates the partition table according to the hash value and the configuration information of the at least two storage nodes corresponding to the hash value, so that the client obtains the hash value of the data according to the feature information of the data, determines, according to the pre-created partition table and the hash value of the data, at least two storage nodes corresponding to the hash value of the data, where the at least two storage nodes correspond to different DHT overlay networks, and can write the data to each of the at least two storage nodes. Because only one copy of the data is stored on each of the at least two storage nodes in the system, the backup relationships are simple, which avoids the prior-art problem in which, because complex backup relationships mean one storage node may store multiple copies of different data, a storage node joining or leaving the system affects, through backup relationships, the two storage nodes following it in the clockwise direction and the two storage nodes following it in the counterclockwise direction; the difficulty of system maintenance and scheduling is thereby reduced.
  • In the several embodiments provided in this application, it should be understood that the disclosed systems, devices, and methods may be implemented in other ways.
  • The device embodiments described above are merely illustrative.
  • For example, the division of units is only a logical functional division; in actual implementation there may be other division manners, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed.
  • the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be electrical, mechanical or otherwise.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solution of the embodiment.
  • each functional unit in various embodiments of the present invention may be integrated in one processing unit. It is also possible that each unit physically exists alone, or two or more units may be integrated in one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of hardware plus software functional units.
  • the above-described integrated unit implemented in the form of a software functional unit can be stored in a computer readable storage medium.
  • the above software functional units are stored in a storage medium and include a number of instructions for causing a computer device (which may be a personal computer, server, or network device, etc.) to perform some of the steps of the methods described in various embodiments of the present invention.
  • The foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Storage Device Security (AREA)

Abstract

The present invention provides a data processing method and device in a distributed storage system, and a client. The client obtains a hash value of data according to feature information of the data and, according to a pre-created partition table and the hash value of the data, determines at least two storage nodes corresponding to the hash value of the data, where the at least two storage nodes correspond to different DHT overlay networks, so that the client can write the data to each of the at least two storage nodes. Because only one copy of the data is stored on each of the at least two storage nodes in the system, the backup relationships are simple. This avoids the problem in the prior art in which, because the backup relationships among storage nodes in the system are complex, one storage node may store multiple copies of different data, so that when a storage node joins or leaves the system the affected nodes may include, through backup relationships, the two storage nodes following that node in the clockwise direction and the two storage nodes following it in the counterclockwise direction; the difficulty of system maintenance and scheduling is thereby reduced.

Description

Data Processing Method and Device in a Distributed Storage System, and Client

Technical Field

The embodiments of the present invention relate to information storage technologies, and in particular to a data processing method and device in a distributed storage system, and a client.

Background Art

In a distributed storage system based on a distributed hash table (DHT) overlay network, each storage node is assigned a globally unique key (Key), described by the DHT; the key values of all storage nodes form a closed, partitioned ring of the key space; and the data space each storage node is responsible for storing is one partition in the clockwise or counterclockwise direction from that node's key value. According to the key value of the data to be stored or read, the client can look up in the DHT the partition in which that key value falls and the corresponding primary storage node, and then complete the storage or reading of the data. To ensure high data reliability, a distributed storage system based on a DHT overlay network adopts a multi-copy storage strategy, that is, each piece of data is stored as multiple copies, and each copy resides on a different storage node. For example, after the primary storage node of the data is determined, a corresponding number of storage nodes are selected in sequence in the clockwise or counterclockwise direction, and each of them stores one copy of the data.

However, because the backup relationships among storage nodes in such a system are complex, one storage node may store multiple copies of different data. As a result, when a storage node joins or leaves the system, the affected nodes may include, through backup relationships, the two storage nodes following that node in the clockwise direction and the two storage nodes following it in the counterclockwise direction, which increases the difficulty of system maintenance and scheduling.

Summary of the Invention
The present invention provides a data processing method and device in a distributed storage system, and a client, which are used to reduce the difficulty of system maintenance and scheduling.

One aspect provides a data processing method in a distributed storage system, including:

obtaining, by a client, a hash value of data according to feature information of the data;

determining, by the client according to a pre-created partition table and the hash value of the data, at least two storage nodes corresponding to the hash value of the data, where the at least two storage nodes correspond to different DHT overlay networks; and

writing, by the client, the data to each of the at least two storage nodes.

Another aspect provides a client, including:

an obtaining unit, configured to obtain a hash value of data according to feature information of the data;

a determining unit, configured to determine, according to a pre-created partition table and the hash value of the data, at least two storage nodes corresponding to the hash value of the data, where the at least two storage nodes correspond to different DHT overlay networks; and

a processing unit, configured to write the data to each of the at least two storage nodes.

Another aspect provides a data processing device in a distributed storage system, including:

a determining unit, configured to determine, according to the number of data copies in the distributed storage system, at least two storage nodes corresponding to a hash value, where the number of the at least two storage nodes corresponding to the hash value equals the number of data copies in the distributed storage system, the at least two storage nodes corresponding to the hash value correspond to different DHT overlay networks, and the partitions of the different DHT overlay networks are consistent; and

a creating unit, configured to create a partition table according to the hash value and the configuration information of the at least two storage nodes corresponding to the hash value, so that a client obtains a hash value of data according to feature information of the data, determines, according to the partition table and the hash value of the data, at least two storage nodes corresponding to the hash value of the data, and writes the data to each of the at least two storage nodes, where the at least two storage nodes correspond to different DHT overlay networks.
As can be seen from the above technical solutions, in the embodiments of the present invention the client obtains the hash value of the data according to the feature information of the data and, according to the pre-created partition table and the hash value of the data, determines at least two storage nodes corresponding to the hash value of the data, where the at least two storage nodes correspond to different DHT overlay networks, so that the client can write the data to each of the at least two storage nodes. Because only one copy of the data is stored on each of the at least two storage nodes in the system, the backup relationships are simple. This avoids the problem in the prior art in which, because the backup relationships among storage nodes in the system are complex, one storage node may store multiple copies of different data, so that when a storage node joins or leaves the system the affected nodes may include, through backup relationships, the two storage nodes following that node in the clockwise direction and the two storage nodes following it in the counterclockwise direction; the difficulty of system maintenance and scheduling is thereby reduced.

Brief Description of the Drawings

To describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the accompanying drawings required for describing the embodiments or the prior art are briefly introduced below. Apparently, the accompanying drawings in the following description show some embodiments of the present invention, and a person of ordinary skill in the art may still derive other drawings from these drawings without creative effort.
FIG. 1 is a schematic flowchart of a data processing method in a distributed storage system according to an embodiment of the present invention;

FIG. 2 is a schematic flowchart of a data processing method in a distributed storage system according to another embodiment of the present invention;

FIG. 3 is a schematic diagram of the partition table involved in the data processing methods provided by the embodiments corresponding to FIG. 1 and FIG. 2;

FIG. 4 is a schematic diagram of the distributed storage system involved in the data processing methods provided by the embodiments corresponding to FIG. 1 and FIG. 2;

FIG. 5 is a schematic structural diagram of a client according to another embodiment of the present invention;

FIG. 6 is a schematic structural diagram of a data processing device in a distributed storage system according to another embodiment of the present invention;

FIG. 7 is a schematic structural diagram of a data processing device in a distributed storage system according to another embodiment of the present invention.

Detailed Description of the Embodiments

To make the objectives, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings in the embodiments of the present invention. Apparently, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
FIG. 1 is a schematic flowchart of a data processing method in a distributed storage system according to an embodiment of the present invention. As shown in FIG. 1, the data processing method in the distributed storage system of this embodiment may include:

101. The client obtains a hash value of data according to feature information of the data. The feature information may be, for example, the file name of the data, digest information of the data, or the content of the data, that is, any information that is related to the data and can identify it.
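As a purely illustrative sketch of step 101, the following code maps the feature information of a piece of data (here its file name) to a hash value in the hash space used by the partition table; the choice of SHA-1 and of a 2^32 hash space are assumptions of the example, not requirements of the embodiment.

    import hashlib

    HASH_SPACE = 2 ** 32  # assumed size of the hash space shared by all overlays

    def hash_of_data(feature_info: str) -> int:
        """Map the data's feature information (e.g. its file name) to a hash value."""
        digest = hashlib.sha1(feature_info.encode("utf-8")).digest()
        return int.from_bytes(digest, "big") % HASH_SPACE

    print(hash_of_data("report-2011.doc"))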
Before 101, this embodiment may further include the step of creating the partition table.

For example, the at least two storage nodes corresponding to each hash value may be determined according to the number of data copies in the distributed storage system, where the number of the at least two storage nodes corresponding to a hash value equals the number of data copies in the distributed storage system, the at least two storage nodes corresponding to the hash value correspond to different DHT overlay networks, and the partitions of the different DHT overlay networks are consistent; the partition table is then created according to each hash value and the configuration information of the at least two storage nodes corresponding to it.
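A minimal sketch of how such a partition table might be built is given below: the hash space is split into partitions that are identical across all DHT overlay networks, and for each partition the table records one storage node per overlay, so that the number of nodes per hash value equals the number of data copies. The node records, the uniform partitioning, and the twelve-partition example are assumptions made only for illustration.

    def create_partition_table(num_partitions, overlays):
        """overlays: one list of node configuration records per DHT overlay network.
        Returns {partition_index: [one node per overlay]}, i.e. as many nodes
        per partition as there are data copies."""
        table = {}
        for p in range(num_partitions):
            # The partitioning of all overlays is identical, so partition p simply
            # records the node responsible for p in each overlay.
            table[p] = [overlay[p % len(overlay)] for overlay in overlays]
        return table

    # Three data copies -> three overlays whose partitioning is consistent.
    overlay1 = [{"node": 1, "ip": "10.0.0.1"}, {"node": 2, "ip": "10.0.0.2"}, {"node": 3, "ip": "10.0.0.3"}]
    overlay2 = [{"node": 4, "ip": "10.0.1.1"}, {"node": 5, "ip": "10.0.1.2"}, {"node": 6, "ip": "10.0.1.3"}]
    overlay3 = [{"node": 7, "ip": "10.0.2.1"}, {"node": 8, "ip": "10.0.2.2"}, {"node": 9, "ip": "10.0.2.3"}]
    partition_table = create_partition_table(num_partitions=12, overlays=[overlay1, overlay2, overlay3])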
It should be noted that, in this embodiment, after the partition table is created it may be stored in the network so that the client and every storage node corresponding to the different DHT overlay networks share the partition table, or it may be distributed directly to the client and to every storage node corresponding to the different DHT overlay networks, so that they all obtain the partition table.

In this embodiment, after a storage node corresponding to one of the different DHT overlay networks obtains the partition table, it may obtain, according to the partition table, the hash values corresponding to its own configuration information and determine the other storage nodes corresponding to those hash values, that is, its backup storage nodes; the storage node can then complete an initialization operation according to its own data and the data of the other storage nodes.

The configuration information of a storage node may include, but is not limited to, at least one of an IP address, a communication port, and a storage space.
For example, a storage node can communicate with its backup storage node and, based on its own data and the backup storage node's data, check whether the two are synchronized. If the data is synchronized, the storage node completes the initialization operation; if not, the storage node synchronizes the data to complete the initialization operation.
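The synchronization check described above can be sketched as follows; the dictionary-based data model and the version comparison are assumptions of the example rather than part of the embodiment.

    def initialize(own_data: dict, backup_data: dict) -> dict:
        """Compare this node's data with a backup node's data and synchronize.

        Both arguments map a data key to a (version, value) pair; the higher
        version wins. Returns the synchronized data set for this node."""
        synced = dict(own_data)
        for key, (version, value) in backup_data.items():
            if key not in synced or synced[key][0] < version:
                synced[key] = (version, value)
        return synced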
Optionally, in this embodiment, to meet reliability requirements, when the partition table is created the same hash value in different overlay networks may correspond to storage nodes on different storage servers, so that the at least two storage nodes corresponding to the same hash value are physically isolated. In a specific implementation, further isolation factors can be considered, such as rack isolation, network isolation, or power-supply isolation.

102. The client determines, according to the pre-created partition table and the hash value of the data, at least two storage nodes corresponding to the hash value of the data, where the at least two storage nodes correspond to different DHT overlay networks. Optionally, in 102, the at least two storage nodes may be located on different storage servers, which improves the reliability of data processing.

103. The client writes the data to each of the at least two storage nodes.
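Steps 102 and 103 can be illustrated together, continuing the earlier sketches: the client maps the hash value of the data to a partition, looks up the storage nodes (one per overlay network) recorded for that partition, and writes one copy to each of them. The send_write function is a hypothetical stand-in for whatever transport the system actually uses.

    def nodes_for_hash(partition_table, hash_value, num_partitions, hash_space=2 ** 32):
        """Step 102: map the hash value to its partition and return the nodes
        (one per DHT overlay network) recorded for that partition."""
        partition = hash_value * num_partitions // hash_space
        return partition_table[partition]

    def send_write(node, hash_value, data):
        # Hypothetical transport call; a real system would contact the node here.
        print(f"writing {len(data)} bytes for hash {hash_value} to node {node['node']}")

    def write_data(partition_table, hash_value, data, num_partitions):
        """Step 103: write one copy of the data to every node for this hash value."""
        for node in nodes_for_hash(partition_table, hash_value, num_partitions):
            send_write(node, hash_value, data)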
It should be noted that, in the embodiments of the present invention, a DHT overlay network has the following characteristics: each storage node is assigned a globally unique key (Key), described by the DHT; the key values of all storage nodes form a closed, partitioned ring of the key space; and the data space each storage node is responsible for storing is one partition in the clockwise or counterclockwise direction from that node's key value.

In this embodiment, the client obtains the hash value of the data according to the feature information of the data and, according to the pre-created partition table and the hash value of the data, determines at least two storage nodes corresponding to the hash value of the data, where the at least two storage nodes correspond to different DHT overlay networks, so that the client can write the data to each of the at least two storage nodes. Because only one copy of the data is stored on each of the at least two storage nodes in the system, the backup relationships are simple, which avoids the prior-art problem in which, because complex backup relationships mean one storage node may store multiple copies of different data, a storage node joining or leaving the system affects, through backup relationships, the two storage nodes following it in the clockwise direction and the two storage nodes following it in the counterclockwise direction; the difficulty of system maintenance and scheduling is thereby reduced.
FIG. 2 is a schematic flowchart of a data processing method in a distributed storage system according to another embodiment of the present invention. Based on the embodiment corresponding to FIG. 1, as shown in FIG. 2, the data processing method in the distributed storage system of this embodiment may further include:

201. The client obtains the hash value of the data according to the feature information of the data.

202. The client determines, according to the pre-created partition table and the hash value of the data, at least two storage nodes corresponding to the hash value of the data, where the at least two storage nodes correspond to different DHT overlay networks. Similarly, in 202, the at least two storage nodes may be located on different storage servers, which improves the reliability of data processing.
203. The client selects one of the at least two storage nodes and reads the data written to the selected storage node.
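Continuing the same sketch, the read path of steps 201 to 203 mirrors the write path: the client recomputes the hash value, looks up the same set of storage nodes, selects any one of them (random selection here is only one possible policy), and reads the copy from it; send_read is again a hypothetical transport call.

    import random

    def send_read(node, hash_value):
        # Hypothetical transport call; a real system would fetch the copy here.
        print(f"reading hash {hash_value} from node {node['node']}")
        return b""

    def read_data(partition_table, hash_value, num_partitions):
        """Steps 202-203: pick one of the nodes that hold a copy and read from it."""
        candidates = nodes_for_hash(partition_table, hash_value, num_partitions)
        chosen = random.choice(candidates)  # any single node holds a complete copy
        return send_read(chosen, hash_value)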
In this embodiment, the client obtains the hash value of the data according to the feature information of the data and, according to the pre-created partition table and the hash value of the data, determines at least two storage nodes corresponding to the hash value of the data, where the at least two storage nodes correspond to different DHT overlay networks, so that the client can select one of the at least two storage nodes and read the data written to the selected storage node. Because only one copy of the data is stored on each of the at least two storage nodes in the system, the backup relationships are simple, which avoids the prior-art problem in which, because complex backup relationships mean one storage node may store multiple copies of different data, a storage node joining or leaving the system affects, through backup relationships, the two storage nodes following it in the clockwise direction and the two storage nodes following it in the counterclockwise direction; the difficulty of system maintenance and scheduling is thereby reduced.

To make the method provided by the embodiments of the present invention clearer, a distributed storage system with three data copies is taken as an example below.
For example, according to the number of data copies in the distributed storage system, 3, the at least two storage nodes corresponding to each hash value are determined, where the number of the storage nodes corresponding to a hash value, 3, equals the number of data copies in the distributed storage system, 3; the storage nodes corresponding to the hash value correspond to different DHT overlay networks, namely DHT overlay network 1, DHT overlay network 2, and DHT overlay network 3, and the partitions of the different DHT overlay networks are consistent. For example, uniform partitioning may be used, in which the space is divided into N equal parts, or non-uniform partitioning may be used; this embodiment does not limit the choice. The partition table is then created according to each hash value and the configuration information of the at least two storage nodes corresponding to it, as shown in FIG. 3. The created partition table needs to be persisted, for example by storing it on a hard disk.

The created partition table can be distributed to all storage nodes and to all clients. After a storage node obtains the partition table, it performs the initialization operation; for details, refer to the related content in the embodiment corresponding to FIG. 1, which is not repeated here.
Assuming that data 1 needs to be stored in this three-copy distributed storage system, the client can determine, according to the obtained pre-created partition table and the hash value of the data 1 to be written, the three storage nodes corresponding to the hash value of data 1, for example storage node 1, storage node 4, and storage node 7, which correspond to three different DHT overlay networks, namely DHT overlay network 1, DHT overlay network 2, and DHT overlay network 3. To improve the reliability of data processing, storage node 1, storage node 4, and storage node 7 can be located on different storage servers, namely storage server 1, storage server 2, and storage server 3, as shown in FIG. 4. Finally, the client can write data 1 to each of storage node 1, storage node 4, and storage node 7.

Similarly, for reading, the client can determine, according to the pre-created partition table and the hash value of data 1, the three storage nodes corresponding to the hash value of data 1, for example storage node 1, storage node 4, and storage node 7, which correspond to the three different DHT overlay networks, namely DHT overlay network 1, DHT overlay network 2, and DHT overlay network 3; finally, the client can select any one of storage node 1, storage node 4, and storage node 7, and read the data 1 written to the selected storage node.
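Tying the earlier sketches together, the three-copy example can be traced end to end; which concrete nodes come back depends on the hash value, and the node numbering only mirrors the example in the text.

    h = hash_of_data("data-1")                          # step 101
    replicas = nodes_for_hash(partition_table, h, 12)   # step 102
    print([node["node"] for node in replicas])          # e.g. [1, 4, 7] when the hash falls in partition 0, 3, 6 or 9
    write_data(partition_table, h, b"payload", 12)      # step 103: one copy per overlay
    copy = read_data(partition_table, h, 12)            # steps 201-203: read back any one copy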
It should be noted that, for brevity of description, the foregoing method embodiments are expressed as a series of action combinations, but a person skilled in the art should understand that the present invention is not limited by the described order of actions, because according to the present invention some steps may be performed in other orders or simultaneously. In addition, a person skilled in the art should also understand that the embodiments described in this specification are all preferred embodiments, and the actions and modules involved are not necessarily required by the present invention.

In the foregoing embodiments, the description of each embodiment has its own emphasis; for a part that is not described in detail in one embodiment, reference may be made to the related description of another embodiment.
FIG. 5 is a schematic structural diagram of a client according to another embodiment of the present invention. As shown in FIG. 5, the client of this embodiment may include an obtaining unit 51, a determining unit 52, and a processing unit 53. The obtaining unit 51 is configured to obtain a hash value of data according to feature information of the data; the determining unit 52 is configured to determine, according to a pre-created partition table and the hash value of the data, at least two storage nodes corresponding to the hash value of the data, where the at least two storage nodes correspond to different DHT overlay networks; and the processing unit 53 is configured to write the data to each of the at least two storage nodes.

The functions of the client in the embodiments corresponding to FIG. 1 and FIG. 2 can all be implemented by the client provided in this embodiment.

Further, in this embodiment, the obtaining unit 51 may be further configured to obtain the hash value of the data according to the feature information of the data; the determining unit 52 may be further configured to determine, according to the pre-created partition table and the hash value of the data, the at least two storage nodes corresponding to the hash value of the data, where the at least two storage nodes correspond to different DHT overlay networks; and the processing unit 53 may be further configured to select one of the at least two storage nodes and read the data written to the selected storage node.

In this embodiment, the client obtains the hash value of the data through the obtaining unit according to the feature information of the data, and determines, through the determining unit according to the pre-created partition table and the hash value of the data, at least two storage nodes corresponding to the hash value of the data, where the at least two storage nodes correspond to different DHT overlay networks, so that the processing unit can write the data to each of the at least two storage nodes, or can select one of the at least two storage nodes and read the data written to the selected storage node. Because only one copy of the data is stored on each of the at least two storage nodes in the system, the backup relationships are simple, which avoids the prior-art problem in which, because complex backup relationships mean one storage node may store multiple copies of different data, a storage node joining or leaving the system affects, through backup relationships, the two storage nodes following it in the clockwise direction and the two storage nodes following it in the counterclockwise direction; the difficulty of system maintenance and scheduling is thereby reduced.
FIG. 6 is a schematic structural diagram of a data processing device in a distributed storage system according to another embodiment of the present invention. As shown in FIG. 6, the data processing device in the distributed storage system of this embodiment may include a determining unit 61 and a creating unit 62. The determining unit 61 determines, according to the number of data copies in the distributed storage system, at least two storage nodes corresponding to a hash value, where the number of the at least two storage nodes corresponding to the hash value equals the number of data copies in the distributed storage system, the at least two storage nodes corresponding to the hash value correspond to different DHT overlay networks, and the partitions of the different DHT overlay networks are consistent. The creating unit 62 creates a partition table according to the hash value and the configuration information of the at least two storage nodes corresponding to the hash value, so that a client obtains a hash value of data according to feature information of the data, determines, according to the partition table and the hash value of the data, at least two storage nodes corresponding to the hash value of the data, and writes the data to each of the at least two storage nodes, where the at least two storage nodes correspond to different DHT overlay networks.

Optionally, as shown in FIG. 7, the data processing device in the distributed storage system provided by this embodiment may further include a sending unit 71, configured to send the partition table to the client and to the storage nodes corresponding to the different DHT overlay networks, so that a storage node corresponding to one of the different DHT overlay networks obtains, according to the partition table, the hash value corresponding to its own configuration information, determines the other storage nodes corresponding to that hash value, and completes the initialization operation according to its own data and the data of the other storage nodes. The at least two storage nodes corresponding to the hash value determined by the determining unit may be located on different storage servers, which improves the reliability of data processing.

In this embodiment, the determining unit determines, according to the number of data copies in the distributed storage system, at least two storage nodes corresponding to a hash value, and the creating unit creates the partition table according to the hash value and the configuration information of the at least two storage nodes corresponding to the hash value, so that the client obtains the hash value of data according to the feature information of the data and, according to the pre-created partition table and the hash value of the data, determines at least two storage nodes corresponding to the hash value of the data, where the at least two storage nodes correspond to different DHT overlay networks, and can thus write the data to each of the at least two storage nodes. Because only one copy of the data is stored on each of the at least two storage nodes in the system, the backup relationships are simple, which avoids the prior-art problem in which, because complex backup relationships mean one storage node may store multiple copies of different data, a storage node joining or leaving the system affects, through backup relationships, the two storage nodes following it in the clockwise direction and the two storage nodes following it in the counterclockwise direction; the difficulty of system maintenance and scheduling is thereby reduced.
A person skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the systems, devices, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.

In the several embodiments provided in this application, it should be understood that the disclosed systems, devices, and methods may be implemented in other ways. For example, the device embodiments described above are merely illustrative; for instance, the division of the units is only a logical functional division, and in actual implementation there may be other division manners, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual coupling or direct coupling or communication connections shown or discussed may be indirect coupling or communication connections through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.

The units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.

In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or in the form of hardware plus a software functional unit.

The integrated unit implemented in the form of a software functional unit may be stored in a computer-readable storage medium. The software functional unit is stored in a storage medium and includes a number of instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform some of the steps of the methods described in the embodiments of the present invention. The storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Finally, it should be noted that the foregoing embodiments are merely intended to describe the technical solutions of the present invention rather than to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that modifications may still be made to the technical solutions recorded in the foregoing embodiments, or equivalent replacements may be made to some of the technical features therein; such modifications or replacements do not make the essence of the corresponding technical solutions depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims

1. A data processing method in a distributed storage system, comprising: obtaining, by a client, a hash value of data according to feature information of the data; determining, by the client according to a pre-created partition table and the hash value of the data, at least two storage nodes corresponding to the hash value of the data, wherein the at least two storage nodes correspond to different DHT overlay networks; and writing, by the client, the data to each of the at least two storage nodes.

2. The method according to claim 1, wherein before the obtaining, by the client, the hash value of the data according to the feature information of the data, the method further comprises: determining, according to the number of data copies in the distributed storage system, the at least two storage nodes corresponding to the hash value, wherein the number of the at least two storage nodes corresponding to the hash value equals the number of data copies in the distributed storage system, the at least two storage nodes corresponding to the hash value correspond to different DHT overlay networks, and the partitions of the different DHT overlay networks are consistent; and creating the partition table according to the hash value and configuration information of the at least two storage nodes corresponding to the hash value.

3. The method according to claim 2, further comprising: obtaining, by the storage nodes corresponding to the different DHT overlay networks, the partition table; obtaining, by the storage nodes corresponding to the different DHT overlay networks according to the partition table, the hash value corresponding to their own configuration information, and determining the other storage nodes corresponding to that hash value; and completing, by the storage nodes corresponding to the different DHT overlay networks, an initialization operation according to their own data and the data of the other storage nodes.

4. The method according to any one of claims 1 to 3, wherein after the writing, by the client, the data to each of the at least two storage nodes, the method further comprises: obtaining, by the client, the hash value of the data according to the feature information of the data; determining, by the client according to the pre-created partition table and the hash value of the data, the at least two storage nodes corresponding to the hash value of the data, wherein the at least two storage nodes correspond to different DHT overlay networks; and selecting, by the client, one of the at least two storage nodes and reading the data written to the selected storage node.

5. The method according to any one of claims 1 to 4, wherein the at least two storage nodes are located on different storage servers.

6. A client, comprising: an obtaining unit, configured to obtain a hash value of data according to feature information of the data; a determining unit, configured to determine, according to a pre-created partition table and the hash value of the data, at least two storage nodes corresponding to the hash value of the data, wherein the at least two storage nodes correspond to different DHT overlay networks; and a processing unit, configured to write the data to each of the at least two storage nodes.

7. The client according to claim 6, wherein: the obtaining unit is further configured to obtain the hash value of the data according to the feature information of the data; the determining unit is further configured to determine, according to the pre-created partition table and the hash value of the data, the at least two storage nodes corresponding to the hash value of the data, wherein the at least two storage nodes correspond to different DHT overlay networks; and the processing unit is further configured to select one of the at least two storage nodes and read the data written to the selected storage node.

8. A data processing device in a distributed storage system, comprising: a determining unit, configured to determine, according to the number of data copies in the distributed storage system, at least two storage nodes corresponding to a hash value, wherein the number of the at least two storage nodes corresponding to the hash value equals the number of data copies in the distributed storage system, the at least two storage nodes corresponding to the hash value correspond to different DHT overlay networks, and the partitions of the different DHT overlay networks are consistent; and a creating unit, configured to create a partition table according to the hash value and configuration information of the at least two storage nodes corresponding to the hash value, so that a client obtains a hash value of data according to feature information of the data, determines, according to the partition table and the hash value of the data, at least two storage nodes corresponding to the hash value of the data, and writes the data to each of the at least two storage nodes, wherein the at least two storage nodes correspond to different DHT overlay networks.

9. The device according to claim 8, further comprising a sending unit, configured to send the partition table to the client and to the storage nodes corresponding to the different DHT overlay networks, so that the storage nodes corresponding to the different DHT overlay networks obtain, according to the partition table, the hash value corresponding to their own configuration information, determine the other storage nodes corresponding to that hash value, and complete an initialization operation according to their own data and the data of the other storage nodes.

10. The device according to claim 8 or 9, wherein the at least two storage nodes corresponding to the hash value determined by the determining unit are located on different storage servers.
PCT/CN2011/083127 2011-11-29 2011-11-29 分布式存储系统中的数据处理方法及设备、客户端 WO2013078611A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201180003080.0A CN103229480B (zh) 2011-11-29 2011-11-29 分布式存储系统中的数据处理方法及设备、客户端
PCT/CN2011/083127 WO2013078611A1 (zh) 2011-11-29 2011-11-29 分布式存储系统中的数据处理方法及设备、客户端

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2011/083127 WO2013078611A1 (zh) 2011-11-29 2011-11-29 分布式存储系统中的数据处理方法及设备、客户端

Publications (1)

Publication Number Publication Date
WO2013078611A1 true WO2013078611A1 (zh) 2013-06-06

Family

ID=48534598

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2011/083127 WO2013078611A1 (zh) 2011-11-29 2011-11-29 分布式存储系统中的数据处理方法及设备、客户端

Country Status (2)

Country Link
CN (1) CN103229480B (zh)
WO (1) WO2013078611A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104468580A (zh) * 2014-12-10 2015-03-25 北京众享比特科技有限公司 适用于分布式存储的认证方法
CN106897344A (zh) * 2016-07-21 2017-06-27 阿里巴巴集团控股有限公司 分布式数据库的数据操作请求处理方法及装置
CN109271391A (zh) * 2018-09-29 2019-01-25 武汉极意网络科技有限公司 数据存储方法、服务器、存储介质及装置

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104461777B (zh) * 2014-11-26 2018-07-13 华为技术有限公司 一种存储阵列中数据镜像方法及存储阵列
CN106980645B (zh) * 2017-02-24 2020-09-15 北京同有飞骥科技股份有限公司 一种分布式文件系统架构实现方法和装置
CN110377611B (zh) * 2019-07-12 2022-07-15 北京三快在线科技有限公司 积分排名的方法及装置
CN111030930B (zh) * 2019-12-02 2022-02-01 北京众享比特科技有限公司 基于去中心化网络数据分片传输方法、装置、设备及介质

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101150489A (zh) * 2007-10-19 2008-03-26 四川长虹电器股份有限公司 基于分布式哈希表的资源共享方法
WO2008103568A1 (en) * 2007-02-20 2008-08-28 Nec Laboratories America, Inc. Method and apparatus for storing data in a peer to peer network
CN101378325A (zh) * 2007-08-31 2009-03-04 华为技术有限公司 一种重叠网络及其构建方法
CN102004797A (zh) * 2010-12-24 2011-04-06 深圳市同洲电子股份有限公司 一种数据处理方法、装置和系统

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008103568A1 (en) * 2007-02-20 2008-08-28 Nec Laboratories America, Inc. Method and apparatus for storing data in a peer to peer network
CN101378325A (zh) * 2007-08-31 2009-03-04 华为技术有限公司 一种重叠网络及其构建方法
CN101150489A (zh) * 2007-10-19 2008-03-26 四川长虹电器股份有限公司 基于分布式哈希表的资源共享方法
CN102004797A (zh) * 2010-12-24 2011-04-06 深圳市同洲电子股份有限公司 一种数据处理方法、装置和系统

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104468580A (zh) * 2014-12-10 2015-03-25 北京众享比特科技有限公司 适用于分布式存储的认证方法
CN104468580B (zh) * 2014-12-10 2017-08-11 北京众享比特科技有限公司 适用于分布式存储的认证方法
CN106897344A (zh) * 2016-07-21 2017-06-27 阿里巴巴集团控股有限公司 分布式数据库的数据操作请求处理方法及装置
CN109271391A (zh) * 2018-09-29 2019-01-25 武汉极意网络科技有限公司 数据存储方法、服务器、存储介质及装置
CN109271391B (zh) * 2018-09-29 2021-05-28 武汉极意网络科技有限公司 数据存储方法、服务器、存储介质及装置

Also Published As

Publication number Publication date
CN103229480A (zh) 2013-07-31
CN103229480B (zh) 2017-10-17

Similar Documents

Publication Publication Date Title
US10185497B2 (en) Cluster federation and trust in a cloud environment
WO2013078611A1 (zh) 分布式存储系统中的数据处理方法及设备、客户端
US9405781B2 (en) Virtual multi-cluster clouds
US10291696B2 (en) Peer-to-peer architecture for processing big data
US9934242B2 (en) Replication of data between mirrored data sites
WO2020001011A1 (zh) 一种区块链的节点同步方法及装置
CN105144105B (zh) 用于可扩展的崩溃一致的快照操作的系统和方法
AU2016238870B2 (en) Fault-tolerant key management system
KR20120018178A (ko) 객체 저장부들의 네트워크상의 스웜-기반의 동기화
WO2015085530A1 (zh) 数据复制方法及存储系统
US20120166403A1 (en) Distributed storage system having content-based deduplication function and object storing method
US9031906B2 (en) Method of managing data in asymmetric cluster file system
WO2016101718A1 (zh) 数据补全方法和装置
WO2014023000A1 (zh) 分布式数据处理方法及装置
WO2016146011A1 (zh) 一种创建虚拟非易失性存储介质的方法、系统及管理系统
WO2015157904A1 (zh) 一种文件同步方法、服务器及终端
JPWO2010116608A1 (ja) データ挿入システム
WO2014063474A1 (zh) 数据库扩展方法、数据库扩展装置和数据库系统
US10248659B2 (en) Consistent hashing configurations supporting multi-site replication
CN116389233B (zh) 容器云管理平台主备切换系统、方法、装置和计算机设备
CN106155573B (zh) 用于存储设备扩展的方法、装置以及扩展的存储设备
US9798633B2 (en) Access point controller failover system
CN110389984B (zh) 用于基于池伙伴的复制的装置和方法
JP2019536167A (ja) 分散ストレージ・エリア・ネットワーク環境における論理ユニット番号へのアクセスを動的に管理する方法とその装置
TW202103009A (zh) 用於擴充硬碟擴充單元的叢集式儲存系統

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 11876639

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 11876639

Country of ref document: EP

Kind code of ref document: A1