WO2010015143A1 - A distributed file system and data block consistency management method thereof - Google Patents

A distributed file system and data block consistency management method thereof

Info

Publication number
WO2010015143A1
Authority
WO
WIPO (PCT)
Prior art keywords
data block
file
counter
value
file access
Prior art date
Application number
PCT/CN2009/000855
Other languages
English (en)
French (fr)
Inventor
杜守富
王瑞丰
程剑
Original Assignee
中兴通讯股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中兴通讯股份有限公司 filed Critical 中兴通讯股份有限公司
Priority to EP09804443A priority Critical patent/EP2330519A4/en
Priority to US13/057,187 priority patent/US8285689B2/en
Publication of WO2010015143A1 publication Critical patent/WO2010015143A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/16Error detection or correction of the data by redundancy in hardware
    • G06F11/20Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
    • G06F11/202Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where processing functionality is redundant
    • G06F11/2048Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where processing functionality is redundant where the redundant components share neither address space nor persistent storage
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10File systems; File servers
    • G06F16/18File system types
    • G06F16/182Distributed file systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2201/00Indexing scheme relating to error detection, to error correction, and to monitoring
    • G06F2201/82Solving problems relating to consistency
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2201/00Indexing scheme relating to error detection, to error correction, and to monitoring
    • G06F2201/88Monitoring involving counting

Definitions

  • The invention relates to a large-capacity storage distributed file system in the field of computer applications and a management method thereof, and in particular to a file system for large-scale distributed data processing and a method for consistency verification and backup management of its redundantly backed-up data blocks. Background Art
  • In the prior art, a file system for large-scale distributed data processing is generally designed, in order to ensure efficient data processing and centralized metadata management, as an architecture with one centralized metadata management server (e.g., a file location registration server, FLR, File Location Register) and several other data file storage servers (e.g., file access servers, FAS, File Access Service).
  • When accessing data, a user must first ask the FLR, through the file access client (FAC, File Access Client), for the specific storage location of the data; the FAC then initiates read/write data requests to the specific FAS.
  • The FAS manages data files by dividing file data into individual data blocks (CHUNKs); each file consists of several data blocks.
  • the correspondence between the data block and the file is identified by the unified identifier FILEID.
  • Each file has a FILEID different from the other files.
  • the identifier of each data block CHUNK is the FILEID + CHUNK number. All CHUNK distribution information of the file is managed by the FLR unified database.
  • the data blocks are redundantly backed up, that is, the same data block copy backup exists on multiple FASs.
  • In the prior art, it is difficult to ensure consistency among the multiple backup copies of such data blocks, which is a serious problem arising mainly in the following cases: during a write operation, how to ensure that mutually corresponding backup data is written on multiple FASs simultaneously; if one FAS becomes abnormal or damaged, how the data on that FAS can be reconstructed; and if the FLR becomes abnormal during writing, how to ensure consistency between the FLR records and the FASs.
  • a method for data block consistency management in a distributed file system comprising the following steps:
  • the value of the corresponding counter is generated by the file location registration server, and is stored on the file access server and the file location registration server;
  • When writing data to a data block, the file access client writes the data to the primary and backup file access servers simultaneously. If both writes succeed, the counter value is not modified; otherwise, the value of the data block counter on the file access server where the write succeeded is increased by a predetermined step size;
  • The file location registration server reconstructs the abnormal data block according to the corresponding data block counter values reported by the primary and backup file access servers, treating the data block with the largest counter value as normal and valid.
  • the method further includes:
  • When modifying data of a data block, the file location registration server returns to the file access client the information of the primary and backup file access servers where the data block resides, and the file access client initiates the data modification operation to the primary and backup file access servers;
  • If both the primary and backup file access servers modify the data successfully, no modification of the data block counter value is initiated; otherwise, the counter value on the file access server where the data was changed normally is increased by a predetermined step size, and the counter value of the data block on the file location registration server is increased by the same step size.
  • the predetermined step size is 1.
  • Step C further includes: the file location registration server initiates data block verification requests to the file access servers at startup and at certain time intervals.
  • The method further includes a data block verification process, which includes:
  • The file access servers report all local data block identifiers to the file location registration server; the file location registration server assembles the first-received data block identifiers into a HASH table and looks up subsequently received identifiers in that table; a successful lookup indicates a group of primary and backup data blocks;
  • the file location registration server divides the primary and secondary file access servers storing the same data block copy backup into a group, and divides all file access servers in the system into groups.
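As an illustration of the pairing step above, the hash-table matching could be sketched as follows; the function name `pair_chunks` and the (FAS name, chunk identifier) input shape are assumptions for this sketch, not part of the patent:

```python
def pair_chunks(reported):
    """Group CHUNK identifiers reported by a primary/backup FAS pair.

    `reported` is a list of (fas_name, chunk_id) tuples in arrival order.
    First-seen identifiers go into a hash table; a later hit on the same
    chunk_id from a different FAS means a primary/backup pair was found.
    """
    table = {}   # chunk_id -> first reporting FAS
    pairs = []   # complete (chunk_id, primary FAS, backup FAS) triples
    for fas, chunk_id in reported:
        if chunk_id in table and table[chunk_id] != fas:
            pairs.append((chunk_id, table[chunk_id], fas))
        else:
            table[chunk_id] = fas
    # identifiers never matched may indicate an incomplete pair
    paired = {p[0] for p in pairs}
    unmatched = [cid for cid in table if cid not in paired]
    return pairs, unmatched
```

An unmatched identifier is the case the text calls an incomplete primary/backup group, which the FLR then examines further.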
  • The step, in step D2, of verifying the data block corresponding to each group of data block identifiers further includes:
  • Step D21: check whether the data block has a record in the file location registration server; if not, delete it directly; otherwise, proceed to step D22;
  • step D2 further includes:
  • If the data block counter value on the file location registration server is the largest, the record of the data block is deleted from the file location registration server database.
  • step D2 further includes:
  • If a file access server holds the largest data block counter value, the file location registration server initiates data block reconstruction requests toward the other file access servers where the data blocks with smaller counter values reside, copying from the valid data block to the abnormal data blocks;
  • After the data copy is completed, the counter values of the corresponding data block on each file access server are modified to match the maximum value.
  • step D2 further includes:
  • If the counter value of the data block on the file location registration server is smaller than that on a file access server, the counter value of the corresponding data block in the file location registration server database is modified synchronously.
  • A distributed file system implementing the method, comprising: a file access server and at least one file location registration server connected through a network; the file access server is connected to a corresponding database; a user issues write data requests to the file access server and the file location registration server through a file access client, and the data block counter value of the file server where the write succeeded is increased by a predetermined step size; wherein the file access server is provided with at least primary and backup file access servers;
  • the file location registration server is configured to generate a value of a counter corresponding to the data block, and control reconstruction of the abnormal data block according to the value of the corresponding data block counter reported by the primary and backup file access servers.
  • The invention provides a distributed file system and a method for data block consistency management thereof. Because a data block counter is adopted to record for each data block whether it is abnormal and needs reconstruction, redundantly backed-up data blocks in a massive cluster system can be managed simply and efficiently, their mutual consistency maintained, and abnormal backup data blocks reconstructed; the implementation is simple and accurate. Brief Description of the Drawings
  • FIG. 1 is a schematic flow chart of a modification of a data block counter when a method of the present invention writes or changes data
  • FIG. 2 is a flowchart of the file location registration server FLR checking and receiving data block CHUNKs reported by the file access servers FAS in the method of the present invention;
  • FIG. 3 is a schematic flowchart of the specific verification method of the file location registration server FLR in the method of the present invention;
  • FIG. 4 is a schematic structural diagram of the distributed file system according to the present invention. Preferred Embodiments of the Invention
  • The invention discloses a distributed file system and a method for data block consistency management thereof, and proposes the concept of a data block counter, i.e., a CHUNK counter: each data block is given a count indicating the number of times the data block has been modified.
  • Each time a CHUNK is modified, the counter value is increased by a predetermined step size, so that if the counter values of the primary and backup data blocks are inconsistent, an invalid data block exists, and the abnormal data block can be reconstructed accordingly.
  • the method of the invention satisfactorily solves the management work of the primary and secondary data blocks, and the main implementation contents thereof include:
  • When a data block is generated, it is uniformly generated by the file location registration server FLR, and a newly created data block has a counter value of 1. This value is stored on both the file access server FAS and the file location registration server FLR.
  • The FAC simultaneously writes two copies of the data to the primary and backup FAS. If both writes succeed, the CHUNK counter modification procedure is not initiated. If one FAS exhibits a write abnormality during writing, the FAC initiates the counter modification procedure toward the normal FAS to modify the current CHUNK counter value of the normal data block; the CHUNK counter values on the primary and backup FAS of that data block thus become inconsistent, with the normal data block's value higher. Later, the abnormal data block can be identified by a simple check and reconstructed on the abnormal FAS.
  • When a user changes file content, the FLR returns to the FAC the information of the two FASs holding the corresponding data block, and the FAC directly initiates the data modification operation toward both FASs. If both modifications succeed, the CHUNK counter modification procedure is not initiated. If one FAS is found abnormal during writing, the FAC initiates the CHUNK counter modification procedure toward the normal FAS, so that the corresponding CHUNK counter value on that FAS is increased by a predetermined step size, and the CHUNK counter value on the FLR is increased as well. The CHUNK counter values of the corresponding data blocks on the primary and backup FAS are thus inconsistent; by comparing the counter values, the abnormal data block can later be identified by a simple check and reconstructed on the abnormal FAS.
  • Through the above processing, it is guaranteed that if an abnormality occurs, the CHUNK counter values on the primary and backup FAS will certainly be inconsistent. The FLR initiates the CHUNK verification request procedure toward the FASs at startup and at certain time intervals. Based on the CHUNK counter values reported by the primary and backup FAS, taking the largest counter value as authoritative, it can be determined which FAS's data block is valid; the data blocks on the abnormal FAS can then be reconstructed.
  • the following is a specific example to illustrate the method of data block consistency management in the distributed file system of the present invention:
  • The identifier of a data block CHUNK is defined as: FILEID (4-byte unsigned integer) + CHUNK number (2-byte unsigned integer) + counter (4-byte unsigned integer). On the FLR side, a database records each CHUNK identifier, including the data block's CHUNK counter value and the location information of its FAS; on the FAS side, each data block is managed and its CHUNK counter value is recorded.
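The 10-byte identifier layout above (4-byte FILEID, 2-byte CHUNK number, 4-byte counter) can be sketched with Python's `struct`; the big-endian byte order chosen here is an assumption, since the text does not specify one:

```python
import struct

# FILEID: 4-byte unsigned, CHUNK number: 2-byte unsigned,
# counter: 4-byte unsigned. ">IHI" assumes network (big-endian) order.
CHUNK_ID = struct.Struct(">IHI")

def pack_chunk_id(fileid, chunk_no, counter):
    """Serialize a CHUNK identifier into its 10-byte form."""
    return CHUNK_ID.pack(fileid, chunk_no, counter)

def unpack_chunk_id(blob):
    """Recover (fileid, chunk_no, counter) from the 10-byte form."""
    return CHUNK_ID.unpack(blob)
```

For example, `pack_chunk_id(42, 3, 1)` yields a 10-byte value that round-trips back to `(42, 3, 1)`.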
  • When a user initiates the write procedure, the FAC first requests the FLR to allocate all FASs having a backup relationship. After successful allocation, the FLR records the CHUNK identifier in its local database, and the initial value of the data block's CHUNK counter is set to 1.
  • a data write request is then initiated directly by the FAC to the two FASs.
  • During the FAC's writing, the write status of each FAS is continuously reported; the reported status information includes the CHUNK identifier currently being written and each FAS's write status.
  • After receiving the reported status, the FLR compares the write statuses of the two FASs: if both are normal, no action is taken; if both are abnormal, the CHUNK counter value on the FLR side is increased directly; if one FAS write is abnormal while the other is normal, the FLR initiates a CHUNK counter modification request toward the normal FAS. After receiving the request, the normal FAS increases the CHUNK counter value of the local data block and returns a modification success message to the FLR. The FLR then modifies the value in its local database to match the CHUNK counter value of the normal FAS's data block, while the CHUNK counter value of the erroneous data block on the abnormal FAS is left unmodified.
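A minimal sketch of the FLR's three-way handling of the reported write statuses might look like this; the dict-based state and the helper name `handle_write_status` are illustrative assumptions, with the step size defaulting to 1 as in the text:

```python
def handle_write_status(flr_counters, fas_counters, chunk_id,
                        primary_ok, backup_ok, step=1):
    """Update CHUNK counters after a replicated write, per the FLR rules.

    flr_counters: FLR-side database view, chunk_id -> counter value
    fas_counters: {"primary": {...}, "backup": {...}} per-FAS counters
    """
    if primary_ok and backup_ok:
        return                            # both succeeded: nothing to do
    if not primary_ok and not backup_ok:
        flr_counters[chunk_id] += step    # both failed: bump FLR side only
        return
    # exactly one write failed: bump the normal FAS, mirror it on the FLR
    healthy = "primary" if primary_ok else "backup"
    fas_counters[healthy][chunk_id] += step
    flr_counters[chunk_id] = fas_counters[healthy][chunk_id]
```

After a partial failure the abnormal FAS keeps its old (lower) counter, which is exactly what the later periodic check exploits.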
  • When a user initiates a rewrite, the handling is basically the same; the difference is that when writing new data, the FLR returns the information of the FAS where the new data block will reside, or of the FAS already storing the data.
  • the FLR will initiate the CHUNK check request process to the FAS at the start time and at certain time intervals.
  • The verification method is as follows: the FLR treats each primary/backup FAS pair as a group, dividing the data blocks of the entire cluster into several groups, e.g., N groups. For each group, a verification request is sent to each member.
  • the FAS that receives the request reports the local CHUNK identifier to the FLR.
  • The FLR assembles the first-received identification information into a HASH table; subsequently received CHUNK identifiers are first looked up in the HASH table, and a successful lookup indicates a pair of primary and backup data blocks. A failed lookup may indicate an incomplete primary/backup pair. Meanwhile, all CHUNK identification information of the paired data blocks is recorded. After a group of members is verified successfully, the FLR verifies each piece of CHUNK identification information.
  • The verification process, shown in FIG. 3, includes:
  • the first step is to check whether the data block CHUNK has a record in the FLR; if there is no record, delete it directly; if there is a record, pass the check;
  • The second step is to obtain the CHUNK counter values in the FLR database and on each FAS and compare which is largest; the data block with the largest CHUNK counter value is valid and normal.
  • the third step is to verify the value of the CHUNK counter.
  • the specific process includes:
  • If the CHUNK counter value on the FLR is the largest, the CHUNK data on all current FASs is unreliable, and the CHUNK record needs to be deleted from the FLR database.
  • If the CHUNK counter value on the FLR is smaller than that on an FAS, the CHUNK counter value in the FLR database needs to be modified synchronously.
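Taken together, the three verification cases (the FLR holds the maximum, an FAS holds the maximum, or the FLR lags behind) could be expressed as a single decision function; the names and return values below are illustrative assumptions:

```python
def verify_chunk(flr_value, fas_values):
    """Decide the repair action for one CHUNK from its counter values.

    fas_values maps FAS name -> counter value for this chunk.
    Returns an (action, detail) tuple covering the three cases.
    """
    top = max(fas_values.values())
    if flr_value > top:
        # FLR records the highest value: every replica is unreliable,
        # so the CHUNK record is deleted from the FLR database.
        return ("delete_record", None)
    source = max(fas_values, key=fas_values.get)   # FAS holding the maximum
    stale = [fas for fas, v in fas_values.items() if v < top]
    action = ("rebuild", {"source": source, "targets": stale}) if stale \
        else ("ok", None)
    if flr_value < top:
        # FLR lags behind: its database counter is synchronised to `top`.
        return ("sync_flr", {"new_value": top, "then": action})
    return action
```

After a rebuild, the stale replicas' counters would be set equal to the maximum, as the text describes for the copy-completion step.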
  • The structure of the distributed file system of the present invention is shown in FIG. 4. It includes a file access server 401 and at least one file location registration server 402 connected via a network, such as Ethernet, wherein each file access server 401 is also connected to a corresponding database 411, and the at least one file location registration server 402 is configured to generate the counter value corresponding to a data block in a write data operation on the file access server 401.
  • the user can make a data access request to the corresponding file access server 401 and the file location registration server 402 through a file access client 403;
  • The file access server 401 is provided with at least primary and backup file access servers, and the file access client 403 is configured to write data to the corresponding data blocks of the primary and backup file access servers and to increase, by a predetermined step size, the data block counter value on the file server where the write succeeded;
  • The file location registration server 402 can be used to judge, according to whether the corresponding data block counter values reported by the primary and backup file access servers are consistent, whether a data block is abnormal, and to control the reconstruction of abnormal data blocks.
  • The distributed file system and the data block consistency management method thereof can manage redundantly backed-up data blocks in a massive cluster system simply and efficiently, maintain their consistency, and reconstruct abnormal backup data blocks. This is mainly shown in the following:
  • If one of the primary and backup FAS is found abnormal while the user is storing data, the CHUNK counter value of the data block on the normal FAS can be increased first while the value of the data block CHUNK on the abnormal FAS remains unchanged; during subsequent periodic verification, the FLR, based on the check of the above CHUNK counter values, deletes the data block on the FAS with the lower CHUNK counter value and reconstructs the corresponding data block on the abnormal FAS from the data block on the normal FAS.
  • The method of the present invention treats the data block with the higher CHUNK counter value as the normal and valid data block. If the value recorded in the FLR is the highest, the data blocks on all FASs are unreliable; if the value recorded on a certain FAS is the highest, that data block needs to be reconstructed onto the FASs with the lower value, and the records in the FLR modified.
  • The distributed file system and the data block consistency management method thereof are simple and accurate to implement, with fast verification computation, and can be applied to massive data block processing.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Quality & Reliability (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Description

A distributed file system and a method for data block consistency management thereof — Technical Field
The present invention relates to a large-capacity storage distributed file system in the field of computer applications and a management method thereof, and in particular to a file system for large-scale distributed data processing and a method for consistency verification and backup management of its redundantly backed-up data blocks. Background Art
In the prior art, a file system for large-scale distributed data processing is generally designed, in order to ensure efficient data processing and centralized metadata management, as an architecture with one centralized metadata management server (e.g., a file location registration server, FLR, File Location Register) and several other data file storage servers (e.g., file access servers, FAS, File Access Service).
When accessing data, a user must first ask the FLR, through a file access client (FAC, File Access Client), for the specific storage location of the data; the FAC then initiates read/write requests to the specific FAS. The FAS manages data files by dividing file data into individual data blocks (CHUNKs); each file consists of several data blocks. The correspondence between data blocks and files is identified by the unified identifier FILEID; each file has a FILEID distinct from all other files, and the identifier CHUNKID of each data block CHUNK is FILEID + CHUNK number. All CHUNK distribution information of a file is managed uniformly by the FLR in its database.
In a large-capacity cluster system, data blocks are usually redundantly backed up; that is, copies of the same data block exist on multiple FASs. In the prior art, however, it is difficult to guarantee consistency among these multiple copies, which is a rather serious problem arising mainly in the following situations: during a write operation, how to ensure that mutually corresponding backup data is written on multiple FASs simultaneously; if one FAS becomes abnormal or damaged, how the data on that FAS can be reconstructed; and if the FLR becomes abnormal during writing, how to ensure consistency between the FLR records and the FASs.
Since massive numbers of data blocks are involved, the prior art cannot apply conventional verification methods such as MD5 to the data blocks, because doing so would severely degrade processing performance.
Therefore, the prior art still awaits improvement and development. Summary of the Invention
An object of the present invention is to provide a distributed file system and a method for data block consistency management thereof, which solve the above problems of the prior art and enable verification of the data blocks of massive data as well as reconstruction when necessary.
The technical solution of the present invention includes:
A method for data block consistency management in a distributed file system, the method comprising the following steps:
A. When each data block is generated, the file location registration server generates the value of a corresponding counter, which is stored on both the file access server and the file location registration server;
B. When writing data to a data block, the file access client writes the data to the primary and backup file access servers simultaneously; if both writes succeed, the counter value is not modified; otherwise, the value of the data block counter on the file access server where the write succeeded is increased by a predetermined step size;
C. According to the corresponding data block counter values reported by the primary and backup file access servers, the file location registration server treats the data block with the largest counter value as normal and valid, and reconstructs the abnormal data block.
Further, the method also includes:
When modifying data of a data block, the file location registration server returns to the file access client the information of the primary and backup file access servers where the data block resides, and the file access client initiates the data modification operation to the primary and backup file access servers;
If both the primary and backup file access servers modify the data successfully, no modification of the data block counter value is initiated; otherwise, the counter value on the file access server where the data was changed normally is increased by the predetermined step size, and the counter value of the data block on the file location registration server is increased by the same step size.
Further, in the method, the predetermined step size is 1.
Further, in the method, step C also includes: the file location registration server initiates data block verification requests to the file access servers at startup and at certain time intervals.
Further, the method also includes a data block verification process, which includes:
D1. The file access servers report all local data block identifiers to the file location registration server; the file location registration server assembles the first-received data block identifiers into a HASH table and looks up subsequently received identifiers in that table; a successful lookup indicates a group of primary and backup data blocks;
D2. All groups of data block identifiers are recorded, and the file location registration server verifies each data block identifier.
Further, in the method, the file location registration server treats the primary and backup file access servers storing copies of the same data block as one group, dividing all file access servers in the system into several groups.
Further, in the method, the step, in step D2, of verifying the data block corresponding to each group of data block identifiers also includes:
D21. Check whether the data block has a record in the file location registration server; if not, delete it directly; otherwise, proceed to step D22;
D22. Compare the counter values of the corresponding data block in the file location registration server database and on each file access server; the data block with the largest value is valid.
Further, in the method, step D2 also includes:
If the data block counter value on the file location registration server is the largest, the record of that data block is deleted from the file location registration server database.
Further, in the method, step D2 also includes:
If a file access server holds the largest data block counter value, the file location registration server initiates data block reconstruction requests toward the other file access servers where the data blocks with smaller counter values reside, copying from the valid data block to the abnormal data blocks;
After the data copy is completed, the counter values of the corresponding data block on each file access server are modified to match the maximum value.
Further, in the method, step D2 also includes:
If the counter value of the data block on the file location registration server is smaller than that on a file access server, the counter value of the corresponding data block in the file location registration server database is modified synchronously.
A distributed file system implementing the method, comprising a file access server and at least one file location registration server connected via a network; the file access server is connected to a corresponding database; a user issues write data requests to the file access server and the file location registration server through a file access client, and the data block counter value of the file server where the write succeeded is increased by a predetermined step size; wherein the file access server is provided with at least primary and backup file access servers; and
the file location registration server is configured to generate the counter value corresponding to a data block, and to control the reconstruction of abnormal data blocks according to the corresponding data block counter values reported by the primary and backup file access servers.
In the distributed file system and the data block consistency management method thereof provided by the present invention, because a data block counter is adopted to record for each data block whether it is abnormal and needs reconstruction, redundantly backed-up data blocks in a massive cluster system can be managed simply and efficiently, their mutual consistency maintained, and abnormal backup data blocks reconstructed; the implementation is simple and accurate. Brief Description of the Drawings
FIG. 1 is a schematic flowchart of modifying the data block counter when writing or changing data in the method of the present invention; FIG. 2 is a flowchart of the file location registration server FLR checking and receiving data block CHUNKs reported by the file access servers FAS in the method of the present invention;
FIG. 3 is a schematic flowchart of the specific verification method of the file location registration server FLR in the method of the present invention;
FIG. 4 is a schematic structural diagram of the distributed file system of the present invention. Preferred Embodiments of the Invention
The preferred embodiments of the present invention are described in more detail below with reference to the drawings.
The present invention discloses a distributed file system and a method for data block consistency management thereof, and proposes the concept of a data block counter, i.e., a CHUNK counter: each data block is given a counter indicating the number of times the data block has been modified. Each time a CHUNK is modified, the counter value is increased by a predetermined step size; thus, if the counter values of the primary and backup data blocks are inconsistent, an invalid data block exists, and the abnormal data block can be reconstructed accordingly.
The method of the invention solves the management of primary and backup data blocks well; its main implementation includes:
When data blocks are generated, they are uniformly generated by the file location registration server FLR, and a newly created data block has a counter value of 1. This value is stored on both the file access server FAS and the file location registration server FLR.
In the process of a user initiating a write of CHUNK data, for clarity the embodiments herein take the case of one primary and one backup FAS as an example. As shown in FIG. 1, the FAC simultaneously writes two copies of the data to the primary and backup FAS. If both writes succeed, the CHUNK counter modification procedure is not initiated. If a write abnormality is found on one FAS during writing, the FAC initiates the counter modification procedure toward the normal FAS, modifying the current CHUNK counter value of the normal data block; the CHUNK counter values on the primary and backup FAS of that data block thus become inconsistent, with the normal data block's value higher. Later, the abnormal data block can be identified by a simple check and reconstructed on the abnormal FAS.
When a user initiates a change of file content, the FLR returns to the FAC the information of the two FASs where the corresponding data block resides, and the FAC directly initiates the data modification operation toward both FASs. If both modifications succeed, the CHUNK counter modification procedure is not initiated. If one FAS is found abnormal during writing, the FAC initiates the CHUNK counter modification procedure toward the normal FAS, increasing the corresponding CHUNK counter value on that FAS by the predetermined step size, and the CHUNK counter value on the FLR is increased at the same time; the CHUNK counter values of the corresponding data blocks on the primary and backup FAS are thus inconsistent. By comparing the counter values, the abnormal data block can later be identified by a simple check and reconstructed on the abnormal FAS.
Through the above processing, it is guaranteed that if an abnormality occurs, the CHUNK counter values on the primary and backup FAS will certainly be inconsistent. The FLR initiates the CHUNK verification request procedure toward the FASs at startup and at certain time intervals. Based on the CHUNK counter values reported by the primary and backup FAS, taking the largest counter value as authoritative, it can be determined which FAS's data block is normal and valid, so that the data blocks on the abnormal FAS can be reconstructed. The following specific example illustrates the data block consistency management method in the distributed file system of the present invention:
The identifier of a data block CHUNK is defined as: FILEID (4-byte unsigned integer) + CHUNK number (2-byte unsigned integer) + counter (4-byte unsigned integer). On the FLR side, a database records each CHUNK identifier, including the data block's CHUNK counter value and the location information of the FAS where it resides; on the FAS side, each data block is managed and its CHUNK counter value is recorded.
As shown in FIG. 1, when a user initiates the write procedure, the FAC first requests the FLR to allocate all FASs having a backup relationship. After successful allocation, the FLR records the CHUNK identifier in its local database, and the initial value of the data block's CHUNK counter is set to 1.
The FAC then initiates data write requests directly to the two FASs. During the FAC's writing, the write status of each FAS is continuously reported. The reported status information includes the CHUNK identifier currently being written and the write status of each FAS.
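The continuously reported status described above might be modeled as a small message structure; the field names below are illustrative assumptions, not part of the patent:

```python
from dataclasses import dataclass

@dataclass
class WriteStatusReport:
    """Status the FAC reports to the FLR during a replicated write."""
    fileid: int       # 4-byte unsigned FILEID
    chunk_no: int     # 2-byte CHUNK number within the file
    fas_status: dict  # FAS name -> True (write OK) / False (write failed)

    def abnormal_fas(self):
        """Names of the FASs whose write failed in this report."""
        return [fas for fas, ok in self.fas_status.items() if not ok]
```

The FLR would inspect `abnormal_fas()` on each report to decide whether a counter modification request is needed.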
After receiving the reported status, the FLR compares the write statuses of the two FASs: if both are normal, no action is taken; if both are abnormal, the CHUNK counter value on the FLR side is increased directly; if at some moment one FAS write is abnormal while the other is normal, the FLR initiates a CHUNK counter modification request toward the normal FAS. After receiving the request, the normal FAS increases the CHUNK counter value corresponding to the local data block and returns a modification success message to the FLR.
After receiving the modification success message, the FLR modifies the value in its local database to match the CHUNK counter value of the data block on the normal FAS, while the CHUNK counter value of the erroneous data block on the abnormal FAS is left unmodified.
When a user initiates a rewrite, the handling is basically the same as above; the difference is that when writing new data, the FLR returns the information of the FAS where the new data block will reside, or of the FAS already storing the data.
The FLR initiates the CHUNK verification request procedure toward the FASs at startup and at certain time intervals, as shown in FIG. 2. The verification method is: the FLR treats each primary/backup FAS pair as a group, dividing the data blocks of the entire cluster into several groups, e.g., N groups. For each group, a verification request is sent to each member; an FAS receiving the request reports all local data block CHUNK identifiers to the FLR. The FLR assembles the first-received identification information into a HASH table; subsequently received CHUNK identifiers are first looked up in the HASH table, and a successful lookup indicates a pair of primary and backup data blocks. A failed lookup may indicate an incomplete primary/backup pair; meanwhile, all pairs of data block CHUNK identification information are recorded. After a group of members is verified successfully, the FLR verifies each piece of CHUNK identification information; the verification process, shown in FIG. 3, includes:
Step one: check whether the data block CHUNK has a record in the FLR; if there is no record, delete it directly; if there is a record, the check passes;
Step two: obtain the CHUNK counter values in the FLR database and on each FAS and compare which is largest; the data block with the largest CHUNK counter value is valid and normal.
Step three: verify the CHUNK counter value; the specific process includes:
If the CHUNK counter value on the FLR is the largest, the CHUNK data on all current FASs is unreliable, and the CHUNK record needs to be deleted from the FLR database.
If an FAS holds the largest CHUNK counter value, the FLR initiates data block reconstruction requests for all FASs where the data blocks with smaller CHUNK counter values reside, i.e., it tells the FAS with the largest counter value that one of its data blocks needs to be copied from the local FAS to the abnormal FASs. After copying is completed, the CHUNK counter values of the corresponding data block on each FAS are immediately modified to match the maximum value.
If the CHUNK counter value on the FLR is smaller than that on an FAS, the CHUNK counter value in the FLR database needs to be modified synchronously.
The structure of the distributed file system of the present invention is shown in FIG. 4. It includes a file access server 401 and at least one file location registration server 402 connected via a network, such as Ethernet, wherein each file access server 401 is also connected to a corresponding database 411, and the at least one file location registration server 402 is used to generate the counter value corresponding to a data block in a write data operation on the file access server 401. A user can issue data access requests to the corresponding file access server 401 and file location registration server 402 through a file access client 403; the file access server 401 is provided with at least primary and backup file access servers, and the file access client 403 is used to write data to the corresponding data blocks of the primary and backup file access servers and to increase, by a predetermined step size, the data block counter value on the file server where the write succeeded; the file location registration server 402 can be used to judge, according to whether the corresponding data block counter values reported by the primary and backup file access servers are consistent, whether a data block is abnormal, and to control the reconstruction of abnormal data blocks. The distributed file system and the data block consistency management method thereof of the present invention can manage redundantly backed-up data blocks in a massive cluster system simply and efficiently, maintain their consistency, and reconstruct abnormal backup data blocks. This is mainly shown in the following:
1. While the user is storing (appending or rewriting) data, if one of the primary and backup FAS is found abnormal, the CHUNK counter value of the data block on the normal FAS can be increased first while the CHUNK counter value of the data block on the abnormal FAS remains unchanged; during subsequent periodic verification, the FLR, based on the check of the above CHUNK counter values, deletes the data block on the FAS with the lower CHUNK counter value and reconstructs the corresponding data block on the abnormal FAS from the data block on the normal FAS.
2. The method of the present invention treats the data block with the higher CHUNK counter value as the normal and valid data block. If the value recorded in the FLR is the highest, the data blocks on all FASs are unreliable; if the value recorded on a certain FAS is the highest, that data block needs to be reconstructed onto the FASs with the lower value, and the records in the FLR modified.
It can thus be seen that the distributed file system and the data block consistency management method thereof of the present invention are simple and accurate to implement, with fast verification computation, and are applicable to massive data block processing.
It should be understood that the above description of the preferred embodiments of the present invention is rather specific and should not therefore be construed as limiting the scope of patent protection of the present invention; the scope of patent protection of the present invention shall be determined by the appended claims.
Industrial Applicability
In the distributed file system and the data block consistency management method thereof provided by the present invention, because a data block counter is adopted to record for each data block whether it is abnormal and needs reconstruction, redundantly backed-up data blocks in a massive cluster system can be managed simply and efficiently, their mutual consistency maintained, and abnormal backup data blocks reconstructed; the implementation is simple and accurate.

Claims

Claims
1. A method for data block consistency management in a distributed file system, the method comprising the following steps:
A. When a data block is generated, a file location registration server generates the value of a counter corresponding to the generated data block, and the value of the counter corresponding to the generated data block is stored on both a file access server and the file location registration server;
B. When writing data to a data block, a file access client writes the data to primary and backup file access servers simultaneously; if the writes on both the primary and backup file access servers succeed, the value of the counter of the written data block is not modified; otherwise, the file access client increases, by a predetermined step size, the value of the counter of the written data block on the file access server where the write succeeded;
C. According to the counter values of the current data block respectively reported by the primary and backup file access servers, the file location registration server treats the data block on the file access server holding the counter with the largest value as the normal and valid data block, treats the data blocks on the other file access servers as abnormal data blocks, and reconstructs the abnormal data blocks.
2. The method according to claim 1, wherein the method further comprises:
when modifying data of a data block, the file location registration server returns to the file access client the information of the primary and backup file access servers where the data block to be modified resides, and the file access client initiates the data modification operation to the primary and backup file access servers;
if the data modification operations on both the primary and backup file access servers succeed, the file access client does not initiate modification of the value of the counter of the modified data block; otherwise, the file access client increases, by a predetermined step size, the value of the counter of the modified data block on the file access server where the modification succeeded, and increases, by the predetermined step size, the value of the counter of the modified data block on the file location registration server.
3. The method according to claim 2, wherein the predetermined step size is 1.
4. The method according to claim 3, wherein step C further comprises:
the file location registration server initiates data block verification requests to the file access servers at startup and at certain time intervals.
5. The method according to claim 4, further comprising a data block verification process, the process comprising:
D1. The file access servers report all local data block identifiers to the file location registration server; the file location registration server assembles the first-received data block identifiers into a HASH table, and subsequently received data block identifiers are looked up in the HASH table for matching identifiers; on a successful lookup, the data blocks corresponding to the mutually matching identifiers form a group of primary and backup data blocks;
D2. The file location registration server records all pairs of data block identifiers, and verifies the data block corresponding to each group of identifiers.
6. The method according to claim 5, wherein
the file location registration server treats the primary and backup file access servers storing copies of the same data block as one group, dividing all file access servers in the system into several groups.
7. The method according to claim 5, wherein the step of verifying, in step D2, the data block corresponding to each group of data block identifiers comprises:
D21. Check whether the data block being verified has a record in the file location registration server; if not, delete the data block being verified from the file access servers directly; otherwise, proceed to step D22;
D22. Compare the counter values of the data block being verified on the file location registration server and on each file access server; the data block corresponding to the largest counter value is valid.
8. The method according to claim 7, wherein
if the counter value of the data block being verified on the file location registration server is the largest, the record of the data block being verified is deleted from the database of the file location registration server.
9. The method according to claim 7, wherein
if a file access server holds the largest counter value for the data block being verified, the file location registration server initiates data block reconstruction requests toward the file access servers where the data blocks with smaller counter values reside, copying the data block from the file access server holding the counter with the largest value to the file access servers holding the counters with smaller values;
after the data copy is completed, the counter values of the data block being verified on each file access server are modified to match the largest counter value.
10. The method according to claim 7, wherein
if the counter value of the data block being verified on the file location registration server is smaller than that on a file access server, the counter value of the data block being verified in the database of the file location registration server is modified synchronously.
11. A distributed file system, comprising a file access server and at least one file location registration server connected via a network; the file access server is connected to a database; a user issues write data requests to the file access server and the file location registration server through a file access client, and the data block counter value of the file server where the write succeeded is increased by a predetermined step size; wherein the file access server is provided with at least primary and backup file access servers; and
the file location registration server is configured to generate the counter value corresponding to a data block, and to control reconstruction of abnormal data blocks according to the data block counter values reported by the primary and backup file access servers.
PCT/CN2009/000855 2008-08-04 2009-07-30 A distributed file system and data block consistency management method thereof WO2010015143A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP09804443A EP2330519A4 (en) 2008-08-04 2009-07-30 DISTRIBUTED FILE SYSTEM AND METHOD FOR DATA BLOCK CONSISTENCY MANAGEMENT THEREFOR
US13/057,187 US8285689B2 (en) 2008-08-04 2009-07-30 Distributed file system and data block consistency managing method thereof

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN200810142291.2 2008-08-04
CN2008101422912A CN101334797B (zh) 2008-08-04 2008-08-04 一种分布式文件系统及其数据块一致性管理的方法

Publications (1)

Publication Number Publication Date
WO2010015143A1 true WO2010015143A1 (zh) 2010-02-11

Family

ID=40197395

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2009/000855 WO2010015143A1 (zh) 2008-08-04 2009-07-30 一种分布式文件系统及其数据块一致性管理的方法

Country Status (5)

Country Link
US (1) US8285689B2 (zh)
EP (1) EP2330519A4 (zh)
CN (1) CN101334797B (zh)
RU (1) RU2449358C1 (zh)
WO (1) WO2010015143A1 (zh)

Families Citing this family (43)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102008018680A1 (de) * 2007-12-18 2009-07-02 Siemens Aktiengesellschaft Verfahren zum Unterstützen eines sicherheitsgerichteten Systems
CN101334797B (zh) * 2008-08-04 2010-06-02 中兴通讯股份有限公司 一种分布式文件系统及其数据块一致性管理的方法
US8849939B2 (en) * 2011-12-02 2014-09-30 International Business Machines Corporation Coordinating write sequences in a data storage system
CN102750322B (zh) * 2012-05-22 2014-11-05 中国科学院计算技术研究所 一种机群文件系统分布式元数据一致性保证方法和系统
CN102841931A (zh) * 2012-08-03 2012-12-26 中兴通讯股份有限公司 分布式文件系统的存储方法及装置
CN102890716B (zh) * 2012-09-29 2017-08-08 南京中兴新软件有限责任公司 分布式文件系统和分布式文件系统的数据备份方法
CN103729436A (zh) * 2013-12-27 2014-04-16 中国科学院信息工程研究所 一种分布式元数据管理方法及系统
US9292389B2 (en) * 2014-01-31 2016-03-22 Google Inc. Prioritizing data reconstruction in distributed storage systems
US9772787B2 (en) 2014-03-31 2017-09-26 Amazon Technologies, Inc. File storage using variable stripe sizes
US9449008B1 (en) 2014-03-31 2016-09-20 Amazon Technologies, Inc. Consistent object renaming in distributed systems
US9294558B1 (en) 2014-03-31 2016-03-22 Amazon Technologies, Inc. Connection re-balancing in distributed storage systems
US10264071B2 (en) 2014-03-31 2019-04-16 Amazon Technologies, Inc. Session management in distributed storage systems
US9602424B1 (en) 2014-03-31 2017-03-21 Amazon Technologies, Inc. Connection balancing using attempt counts at distributed storage systems
US9274710B1 (en) 2014-03-31 2016-03-01 Amazon Technologies, Inc. Offset-based congestion control in storage systems
US9779015B1 (en) 2014-03-31 2017-10-03 Amazon Technologies, Inc. Oversubscribed storage extents with on-demand page allocation
US9519510B2 (en) 2014-03-31 2016-12-13 Amazon Technologies, Inc. Atomic writes for multiple-extent operations
US9569459B1 (en) 2014-03-31 2017-02-14 Amazon Technologies, Inc. Conditional writes at distributed storage services
US9495478B2 (en) 2014-03-31 2016-11-15 Amazon Technologies, Inc. Namespace management in distributed storage systems
US10372685B2 (en) 2014-03-31 2019-08-06 Amazon Technologies, Inc. Scalable file storage service
US10536523B2 (en) 2014-05-11 2020-01-14 Microsoft Technology Licensing, Llc File service using a shared file access-rest interface
CN105335250B (zh) * 2014-07-28 2018-09-28 Zhejiang Dahua Technology Co., Ltd. Data recovery method and apparatus based on a distributed file system
US10108624B1 (en) 2015-02-04 2018-10-23 Amazon Technologies, Inc. Concurrent directory move operations using ranking rules
CN104699771B (zh) * 2015-03-02 2019-09-20 Beijing Jingdong Shangke Information Technology Co., Ltd. Data synchronization method and cluster node
US10346367B1 (en) 2015-04-30 2019-07-09 Amazon Technologies, Inc. Load shedding techniques for distributed services with persistent client connections to ensure quality of service
US9860317B1 (en) 2015-04-30 2018-01-02 Amazon Technologies, Inc. Throughput throttling for distributed file storage services with varying connection characteristics
US10747753B2 (en) 2015-08-28 2020-08-18 Swirlds, Inc. Methods and apparatus for a distributed database within a network
US9529923B1 (en) 2015-08-28 2016-12-27 Swirlds, Inc. Methods and apparatus for a distributed database within a network
US9390154B1 (en) 2015-08-28 2016-07-12 Swirlds, Inc. Methods and apparatus for a distributed database within a network
CN105426483B (zh) * 2015-11-19 2019-01-11 Huawei Technologies Co., Ltd. File reading method and apparatus based on a distributed system
US10545927B2 (en) 2016-03-25 2020-01-28 Amazon Technologies, Inc. File system mode switching in a distributed storage service
US10474636B2 (en) 2016-03-25 2019-11-12 Amazon Technologies, Inc. Block allocation for low latency file systems
US10140312B2 (en) 2016-03-25 2018-11-27 Amazon Technologies, Inc. Low latency distributed storage service
CN105892954A (zh) * 2016-04-25 2016-08-24 Le Holdings (Beijing) Co., Ltd. Multi-replica-based data storage method and apparatus
PT3539026T (pt) 2016-11-10 2022-03-08 Swirlds Inc Methods and apparatus for a distributed database including anonymous entries
KR102433285B1 (ko) * 2016-12-19 2022-08-16 Swirlds, Inc. Method and apparatus for a distributed database that enables deletion of events
CN108241548A (zh) * 2016-12-23 2018-07-03 Aerospace Star Map Technology (Beijing) Co., Ltd. File reading method based on a distributed system
CN107071031B (zh) * 2017-04-19 2019-11-05 University of Electronic Science and Technology of China Data recovery determination method for a distributed block storage system based on chunk version numbers
KR102348418B1 (ko) 2017-07-11 2022-01-07 Swirlds, Inc. Methods and apparatus for efficiently implementing a distributed database within a network
US10296821B2 (en) * 2017-08-17 2019-05-21 Assa Abloy Ab RFID devices and methods of making the same
CA3076257A1 (en) 2017-11-01 2019-05-09 Swirlds, Inc. Methods and apparatus for efficiently implementing a fast-copyable database
RU2696212C1 (ru) * 2018-01-30 2019-07-31 Leonid Evgenievich Posadskov Method for ensuring secure data transmission in cloud storage using partial images
CN111008026B (zh) 2018-10-08 2024-03-26 Alibaba Group Holding Limited Cluster management method, apparatus and system
KR20220011161A (ko) 2019-05-22 2022-01-27 Swirlds, Inc. Methods and apparatus for implementing state proofs and ledger identifiers in a distributed database

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6643672B1 * 2000-07-31 2003-11-04 Hewlett-Packard Development Company, L.P. Method and apparatus for asynchronous file writes in a distributed file system
US7065618B1 (en) * 2003-02-14 2006-06-20 Google Inc. Leasing scheme for data-modifying operations
CN101334797A (zh) * 2008-08-04 2008-12-31 ZTE Corporation Distributed file system and data block consistency managing method thereof

Family Cites Families (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5544347A (en) * 1990-09-24 1996-08-06 Emc Corporation Data storage system controlled remote data mirroring with respectively maintained data indices
US6256642B1 (en) * 1992-01-29 2001-07-03 Microsoft Corporation Method and system for file system management using a flash-erasable, programmable, read-only memory
US6119151A (en) * 1994-03-07 2000-09-12 International Business Machines Corp. System and method for efficient cache management in a distributed file system
US5634096A (en) * 1994-10-31 1997-05-27 International Business Machines Corporation Using virtual disks for disk system checkpointing
US5933847A (en) * 1995-09-28 1999-08-03 Canon Kabushiki Kaisha Selecting erase method based on type of power supply for flash EEPROM
US6052797A (en) * 1996-05-28 2000-04-18 Emc Corporation Remotely mirrored data storage system with a count indicative of data consistency
US6460054B1 (en) * 1999-12-16 2002-10-01 Adaptec, Inc. System and method for data storage archive bit update after snapshot backup
US7194504B2 (en) * 2000-02-18 2007-03-20 Avamar Technologies, Inc. System and method for representing and maintaining redundant data sets utilizing DNA transmission and transcription techniques
JP4473513B2 (ja) * 2003-02-27 2010-06-02 Fujitsu Limited Originality verification apparatus and originality verification program
US7624021B2 (en) * 2004-07-02 2009-11-24 Apple Inc. Universal container for audio data
US7584220B2 (en) * 2004-10-01 2009-09-01 Microsoft Corporation System and method for determining target failback and target priority for a distributed file system
US7739239B1 (en) * 2005-12-29 2010-06-15 Amazon Technologies, Inc. Distributed storage system with support for distinct storage classes
US7716180B2 (en) * 2005-12-29 2010-05-11 Amazon Technologies, Inc. Distributed storage system with web services client interface
CN100437502C (zh) * 2005-12-30 2008-11-26 Lenovo (Beijing) Co., Ltd. Anti-virus method based on a security chip
CN1859204A (zh) * 2006-03-21 2006-11-08 Huawei Technologies Co., Ltd. Method and apparatus for synchronizing data in dual-host hot backup
CN101715575A (zh) * 2006-12-06 2010-05-26 Fusion Multisystems, Inc. (dba Fusion-io) Apparatus, system and method for managing data using a data pipeline
CN100445963C (zh) * 2007-02-15 2008-12-24 Huawei Technologies Co., Ltd. Method and apparatus for implementing a highly reliable free list
JP4897524B2 (ja) * 2007-03-15 2012-03-14 Hitachi, Ltd. Storage system and method for preventing write performance degradation of the storage system
US7975109B2 (en) * 2007-05-30 2011-07-05 Schooner Information Technology, Inc. System including a fine-grained memory and a less-fine-grained memory

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6643672B1 * 2000-07-31 2003-11-04 Hewlett-Packard Development Company, L.P. Method and apparatus for asynchronous file writes in a distributed file system
US7065618B1 (en) * 2003-02-14 2006-06-20 Google Inc. Leasing scheme for data-modifying operations
CN101334797A (zh) * 2008-08-04 2008-12-31 ZTE Corporation Distributed file system and data block consistency managing method thereof

Also Published As

Publication number Publication date
US20110161302A1 (en) 2011-06-30
CN101334797A (zh) 2008-12-31
CN101334797B (zh) 2010-06-02
US8285689B2 (en) 2012-10-09
EP2330519A4 (en) 2011-11-23
EP2330519A1 (en) 2011-06-08
RU2449358C1 (ru) 2012-04-27

Similar Documents

Publication Publication Date Title
WO2010015143A1 (zh) Distributed file system and data block consistency managing method thereof
US11836155B2 (en) File system operation handling during cutover and steady state
US11755415B2 (en) Variable data replication for storage implementing data backup
WO2017049764A1 (zh) Data reading and writing method and distributed storage system
JP5254611B2 (ja) Metadata management for fixed-content distributed data storage
JP6196368B2 (ja) System-wide checkpoint avoidance for distributed database systems
JP6275816B2 (ja) Fast crash recovery for distributed database systems
JP6404907B2 (ja) Efficient read replicas
CN105393243B (zh) Transaction ordering
US8572037B2 (en) Database server, replication server and method for replicating data of a database server by at least one replication server
US9424140B1 (en) Providing data volume recovery access in a distributed data store to multiple recovery agents
KR100946986B1 (ko) File storage system and method for managing duplicate files in a file storage system
WO2023046042A1 (zh) Data backup method and database cluster
JP5516575B2 (ja) Data insertion system
JP2013544386A (ja) System and method for managing integrity in a distributed database
WO2012126232A1 (zh) Data backup and recovery method, system and service node
US20150046398A1 (en) Accessing And Replicating Backup Data Objects
US9984139B1 (en) Publish session framework for datastore operation records
US11728976B1 (en) Systems and methods for efficiently serving blockchain requests using an optimized cache
US10803012B1 (en) Variable data replication for storage systems implementing quorum-based durability schemes
EP3147789B1 (en) Method for re-establishing standby database, and apparatus thereof
WO2022242372A1 (zh) Object processing method and apparatus, computer device, and storage medium
WO2021109777A1 (zh) Data file import method and apparatus
US11461192B1 (en) Automatic recovery from detected data errors in database systems
WO2024051027A1 (zh) Data configuration method and system for big data

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 09804443

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

WWE Wipo information: entry into national phase

Ref document number: 2009804443

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 2011107514

Country of ref document: RU

Ref document number: A20110281

Country of ref document: BY

WWE Wipo information: entry into national phase

Ref document number: 13057187

Country of ref document: US