WO2017096942A1 - File storage system, data scheduling method and data node - Google Patents
File storage system, data scheduling method and data node
- Publication number
- WO2017096942A1 (application PCT/CN2016/095532)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- data
- written
- node
- data node
- distributed storage
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0668—Interfaces specially adapted for storage systems adopting a particular infrastructure
- G06F3/067—Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/14—Error detection or correction of the data by redundancy in operation
- G06F11/1402—Saving, restoring, recovering or retrying
- G06F11/1446—Point-in-time backing up or restoration of persistent data
- G06F11/1448—Management of the data involved in backup or backup restore
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0602—Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
- G06F3/061—Improving I/O performance
- G06F3/0613—Improving I/O performance in relation to throughput
Definitions
- the present invention relates to the field of file systems, and in particular, to a file storage system, a data scheduling method, and a data node.
- HDFS is short for Hadoop Distributed File System.
- the system architecture of the HDFS is as shown in FIG. 1, and includes: a client 11 and a server group 12.
- the client 11 includes a distributed file system module 111 and a file system data output stream (FSData OutputStream) module 112.
- the server group adopts a master-slave structure, and consists of a name node (NN) 121 and a plurality of data nodes (DNs) 122.
- the name node 121 is a main server that manages the file namespace and regulates client access to files; the data node 122 is used to store data, generally one data node corresponds to one server, and each data node corresponds to a distributed storage subsystem used for storage.
- before writing data with the above HDFS system, the client first initiates an RPC request to the remote NN node through the DistributedFileSystem module; the NN node creates a new file in the file system namespace; the DistributedFileSystem module returns a DFSOutputStream to the HDFS client, and the client then starts writing data.
- the client starts writing data, and DFSOutputStream divides the data into blocks and writes them into the data queue.
- the data queue is read by the Data Streamer, which informs the name node to allocate data nodes for storing the data blocks (each data block corresponds to three data nodes by default).
- Data Streamer writes the data sequentially into the allocated data nodes through a pipeline, so that the data blocks are mutually backed up across the multiple data nodes.
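For illustration, this conventional write path can be exercised through the standard Hadoop client API; the following minimal sketch assumes an illustrative path and payload size (they are not part of the embodiment):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsWriteExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // fs.create() triggers the RPC to the name node that creates the file
        // in the namespace; the returned stream feeds the data queue.
        try (FileSystem fs = FileSystem.get(conf);
             FSDataOutputStream out = fs.create(new Path("/demo/blocks.dat"))) {
            byte[] block = new byte[4096];   // illustrative payload
            out.write(block);                // queued, then pipelined to the data nodes
            out.hsync();                     // block until the pipeline acknowledges
        }                                    // close() notifies the name node
    }
}
```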
- each data node corresponds to a distributed storage device, and the distributed storage device refers to a plurality of physical disks.
- the data node forwards the written block data to the distributed storage device through I/O, triggering the write process of the distributed storage device: the distributed storage device writes the data to the primary physical disk and simultaneously sends a copy request to the standby physical disks, so that multiple backups of the data (three copies by default) are written on the distributed storage device.
- Data Streamer closes the write stream and notifies the name node that the data has been written.
- the write operation for the next data block is performed only after all the data nodes have completed writing the current data, so the data writing speed is slow.
- the invention provides a file storage system, a data scheduling method and a data node, which can increase the data writing speed.
- the first aspect of the present invention provides a file storage system, where the server side of the file storage system includes:
- a name node, a primary data node, and at least one backup data node;
- the primary data node and the at least one backup data node share a first distributed storage subsystem, the first distributed storage subsystem including a primary storage device and at least one backup storage device;
- the master data node is configured to receive a write operation instruction sent by the client, where the write operation instruction includes data to be written; write the data to be written into the first distributed storage subsystem; and send an update request to the first backup data node, where the update request includes a storage location of the data to be written in the first distributed storage subsystem and attribute information of the data to be written;
- the backup data node is configured to receive the update request; search the first distributed storage subsystem for the data to be written according to the storage location of the data to be written in the first distributed storage subsystem and the attribute information of the data to be written in the update request; and, when the data to be written is found, save the attribute information of the data to be written.
- the operation permission of the primary data node on the first distributed storage subsystem allows read/write operations; the operation permission of the backup data node on the first distributed storage subsystem allows only read operations.
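This division of labor can be sketched as follows; UpdateRequest, SharedStorage and BackupNodeClient are hypothetical names introduced here purely for illustration and are not part of the claimed system:

```java
// A minimal sketch of the primary data node's role: write the payload once
// into the shared subsystem, then forward only index information downstream.
record UpdateRequest(String storageLocation, String name, long size) {}

interface SharedStorage {                     // the first distributed storage subsystem
    String write(String name, byte[] data);   // returns the storage location
    byte[] read(String location);
}

interface BackupNodeClient {
    void sendUpdate(UpdateRequest req);       // next node in the update pipeline
}

class PrimaryDataNode {
    private final SharedStorage sharedStorage;   // read/write permission
    private final BackupNodeClient firstBackup;

    PrimaryDataNode(SharedStorage s, BackupNodeClient b) {
        sharedStorage = s;
        firstBackup = b;
    }

    void onWriteInstruction(String name, byte[] payload) {
        // 1. write the data to be written once into the shared subsystem
        String location = sharedStorage.write(name, payload);
        // 2. notify the name node and answer the client (omitted), then send
        //    only the storage location and attribute information onward
        firstBackup.sendUpdate(new UpdateRequest(location, name, payload.length));
    }
}
```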
- the embodiment of the present invention further provides a data scheduling method, where the method is applied to the file storage system according to the first aspect, the method includes:
- the main data node receives a write operation instruction sent by the client, where the write operation instruction includes data to be written;
- the primary data node sends an update request to the first backup data node, where the update request includes a storage location of the data to be written in the first distributed storage subsystem and attribute information of the data to be written.
- the attribute information of the data to be written includes a name and a size of the data to be written.
- the method further includes:
- when a first data node fails, recovering the system file of the failed data node to obtain a restored data node, where the first data node is any one of the data nodes;
- mounting the first distributed storage subsystem to the restored data node.
- an embodiment of the present invention further provides a data scheduling method, including:
- the backup data node receives an update request, where the update request includes a storage location of the data to be written in the first distributed storage subsystem and attribute information of the data to be written;
- the backup data node searches the first distributed storage subsystem for the data to be written according to the storage location of the data to be written in the first distributed storage subsystem and the attribute information of the data to be written;
- the backup data node saves the attribute information of the data to be written.
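A corresponding sketch of the backup-node side of this method, reusing the hypothetical UpdateRequest and SharedStorage types from the sketch above:

```java
import java.util.HashMap;
import java.util.Map;

// The backup data node never receives the payload itself: it looks the data
// up in the shared subsystem and persists only the attribute information.
class BackupDataNode {
    private final SharedStorage sharedStorage;                     // read-only view
    private final Map<String, UpdateRequest> attributeStore = new HashMap<>();

    BackupDataNode(SharedStorage s) {
        sharedStorage = s;
    }

    void onUpdate(UpdateRequest req) {
        byte[] found = sharedStorage.read(req.storageLocation());
        // confirm the data really is in the shared subsystem before completing
        if (found != null && found.length == req.size()) {
            attributeStore.put(req.name(), req);   // attributes only, no payload
            // notify the name node and send a response to the client (omitted)
        }
    }
}
```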
- an embodiment of the present invention provides a data node, including:
- a receiving module configured to receive a write operation instruction sent by the client, where the write operation instruction includes data to be written
- a write circuit configured to write the data to be written into the first distributed storage subsystem
- a sending module configured to send an update request to the first backup data node, where the update request includes a storage location of the to-be-written data in the first distributed storage subsystem and the attribute information of the data to be written.
- the attribute information of the data to be written includes a name and a size of the data to be written.
- an embodiment of the present invention further provides a data node, including:
- a receiving module configured to receive an update request, where the update request includes a storage location of the data to be written in the first distributed storage subsystem and attribute information of the data to be written;
- a processing module configured to search the first distributed storage subsystem for the data to be written according to a storage location of the to-be-written data in the first distributed storage subsystem and attribute information of the to-be-written data;
- a storage module configured to save the attribute information of the data to be written when the data to be written is found.
- a plurality of data nodes share a distributed storage subsystem; the distributed storage subsystem includes a primary storage device and at least one backup storage device, so that data can be mutually backed up between the storage devices.
- the primary data node writes the data to be written into the first distributed storage subsystem, and then sends an update request to the first backup data node to notify it of the attribute information and the storage location of the data to be written; the first backup data node only needs to view the first distributed storage subsystem according to the update request, and, once it determines that the data to be written has been written into the first distributed storage subsystem, it saves the attribute information of the data to be written carried in the update request, which completes the writing process of the data to be written.
- in the prior art, all the data nodes need to write the data to be written into their corresponding distributed storage systems respectively, whereas here only one data node performs the process of writing data into the first distributed storage subsystem, and the remaining data nodes use the data localization feature to view the shared distributed storage subsystem; this reduces the network transmission and storage time of data between the data nodes, which in turn accelerates the data writing speed.
- FIG. 1 is a schematic structural diagram of an HDFS in the prior art
- FIG. 2 is a schematic structural diagram of an HDFS according to an embodiment of the present invention.
- FIG. 3 is a schematic flowchart of a data scheduling method according to an embodiment of the present invention.
- FIG. 4 is a schematic flowchart of another data scheduling method according to an embodiment of the present invention.
- FIG. 5 is a schematic diagram of a data node according to an embodiment of the present invention.
- FIG. 6 is a schematic diagram of another data node according to an embodiment of the present invention.
- the embodiment of the invention provides a file storage system, as shown in FIG. 2, comprising: a client 21 and a server group 22.
- the client includes a distributed file system module 211 and a file system data output stream (FSData OutputStream) module 212.
- the server farm employs a master-slave structure, including a name node 221, a primary data node 222, and at least one backup data node 223; the primary data node and the at least one backup data node share a first distributed storage subsystem 224, and the first distributed storage subsystem 224 includes a primary storage device 2241 and at least one backup storage device 2242.
- each node in the server group can be equivalent to one server in physical implementation.
- the distributed storage subsystem is provided to each data node in the form of a virtual device, embodied as a virtual disk on each data node; reading from and writing to the distributed subsystem is similar to reading from and writing to a local physical disk.
- the distributed subsystem includes multiple physical storage devices, for example, multiple hard disks; data between multiple physical storage devices can be backed up to each other.
- the first distributed storage subsystem can be shared among the data nodes.
- the operation permission of each data node on the first distributed storage subsystem may be unrestricted, or special provisions may be made.
- the operation permission of the primary data node 222 on the first distributed storage subsystem 224 allows read and write operations, while the operation permission of the backup data node 223 on the first distributed storage subsystem 224 allows only read operations.
- since a plurality of data nodes share the first distributed storage subsystem, when a data node fails, only the system file of the failed data node needs to be restored to obtain the restored data node; the first distributed storage subsystem is then mounted to the restored data node, so that the data in the first distributed storage subsystem is recovered under that data node without resorting to data replication, thereby improving data recovery efficiency.
- unlike the first distributed storage subsystem, which can be shared by all data nodes, each data node may additionally correspond to a separate distributed storage subsystem of its own; only the data node itself has read and write permission on its own corresponding distributed storage subsystem.
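The recovery path described above amounts to a re-mount rather than a data copy. A minimal sketch follows; the use of the Linux mount command and the device and mount-point arguments are assumptions for illustration only:

```java
import java.io.IOException;

class NodeRecovery {
    // after the failed node's system files have been restored (omitted),
    // re-attach the shared subsystem instead of replicating any data
    static void remountSharedSubsystem(String device, String mountPoint)
            throws IOException, InterruptedException {
        Process p = new ProcessBuilder("mount", device, mountPoint)
                .inheritIO()
                .start();
        if (p.waitFor() != 0) {
            throw new IOException("failed to mount shared subsystem " + device);
        }
        // once mounted, every block already in the shared subsystem is
        // visible again under the restored data node
    }
}
```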
- an embodiment of the present invention provides a data scheduling method, as shown in FIG. 3; the method includes:
- the primary data node receives a write operation instruction sent by the client.
- the write operation instruction includes data to be written.
- the primary data node writes the to-be-written data into the first distributed storage subsystem.
- after receiving the data to be written, the primary data node forwards it to the distributed storage subsystem, triggering the write operation process of the distributed storage subsystem: the distributed storage subsystem writes the data to be written to the primary physical disk and sends a copy request to the backup disk; the backup disk then copies the data of the primary physical disk and saves it, so that the backup disk and the primary physical disk back each other up.
- the primary data node sends a notification message to the name node, and sends a response message to the client.
- after the primary data node successfully writes the data to the distributed storage subsystem, indicating that the data has been written, the primary data node sends a notification message to the name node to inform it that the data to be written has been written, and sends a response message to the client to inform it that the data has been written according to the instruction the client sent.
- the primary data node sends an update request to the first backup data node.
- in the prior art, after the primary data node writes the data into the distributed storage system, it sends a write request to the first backup data node, where the write request carries the data to be written; after receiving the write request, the first backup data node must write the data to be written carried in the request into the independent distributed storage subsystem corresponding to itself, which completes the write process of the first backup data node.
- the update request includes only the storage location of the data to be written in the first distributed storage subsystem and the attribute information of the data to be written, such as the name and size of the data to be written, and the like.
- the information contained in the update request is equivalent to the index information of the data to be written, and does not include the data to be written itself.
- the first backup data node can find the data to be written at the corresponding location according to the update request, which omits both the data transmission process between the primary data node and the first backup data node and the entire process in which the first backup data node writes the data to be written.
- the backup data node receives the update request.
- the update request includes a storage location of the data to be written in the first distributed storage subsystem and attribute information of the data to be written.
- the backup data node referred to in this step includes the first backup data node referred to in the foregoing steps.
- in addition to the primary data node and the first backup data node, the file storage system may also include a second backup data node, a third backup data node, and so on.
- pipeline processing is used: after the previous data node finishes writing, it sends an update request to the next data node. For example, after the primary data node writes the data to be written into the distributed storage system, it sends an update request to the first backup data node; after the first backup data node completes its data write, it sends the update request to the second backup data node, and so on.
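A sketch of this pipelined update flow, reusing the hypothetical types from the earlier sketches (the list stands in for the pipeline allocated by the name node):

```java
import java.util.List;

class UpdatePipeline {
    // each node completes its metadata-only "write" before the same update
    // request moves on to its successor, mirroring the pipeline above
    static void propagate(List<BackupDataNode> pipeline, UpdateRequest req) {
        for (BackupDataNode node : pipeline) {
            node.onUpdate(req);
        }
    }
}
```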
- the backup data node searches for the data to be written in the first distributed storage subsystem according to the storage location of the data to be written in the first distributed storage subsystem and the attribute information of the data to be written.
- the backup data node saves the attribute information of the data to be written.
- the backup data node sends a notification message to the name node, and sends a response message to the client.
- the backup data node only needs to save the attribute information of the data to be written, and the writing process of the data to be written is complete; after the backup data node completes the data writing, it sends a notification message to the name node and a response message to the client.
- a plurality of data nodes share a distributed storage subsystem, and the distributed storage subsystem includes a primary storage device and at least one backup storage device, so that data can be mutually backed up between the storage devices.
- the primary data node writes the data to be written into the first distributed storage subsystem, and then sends an update request to the first backup data node to notify it of the attribute information and the storage location of the data to be written; the first backup data node only needs to view the first distributed storage subsystem according to the update request, and, once it determines that the data to be written has been written into the first distributed storage subsystem, it saves the attribute information of the data to be written carried in the update request, which completes the writing process of the data to be written.
- in the prior art, all the data nodes need to write the data to be written into their corresponding distributed storage systems, whereas in the data scheduling method provided by the present invention only one data node performs the process of writing data into the first distributed storage subsystem, and the remaining data nodes use the data localization feature to view the shared distributed storage subsystem; this reduces the network transmission and storage time of data between the data nodes, thereby speeding up data writing.
- the system implements data localization, which reduces the amount of data transmitted between the data nodes and thus the transmission overhead; in addition, since only the first data node performs the writing of the data to be written and the data to be written is saved in the first distributed storage subsystem, the remaining data nodes do not need to save the data to be written, so the occupied storage space is reduced, saving the storage overhead of the server group.
- the embodiment of the present invention further provides a specific implementation process of data scheduling, as shown in FIG. 4, including:
- the client's DistributedFileSystem module initiates an RPC request to the name node.
- the Remote Procedure Call Protocol (RPC) request is used to create a new file in the file system namespace.
- the name node creates a new file after receiving the RPC request.
- the name node first checks whether the file to be created already exists and whether the creator has permission to operate; if the file to be created does not exist and the creator has the permission, this step and the subsequent steps are executed; otherwise the client throws an exception and the file read/write process ends.
- the following steps 403 to 407 are data writing processes.
- the client's DFSOutputStream module divides the data into blocks, writes them into a data queue, and notifies the name node to allocate data nodes.
- the Data queue is read by the Data Streamer submodule in the DFSOutputStream module.
- the data nodes are used to store the data blocks, and the allocated data nodes are placed in a pipeline.
- the client's Data Streamer sub-module writes the data block to the primary data node in the pipeline.
- the primary data node is the first data node in the pipeline.
- the client's DFSOutputStream module maintains an ack queue for the sent data blocks, waiting for each data node in the pipeline to confirm that the data has been written successfully.
- the primary data node triggers a write process of the distributed storage subsystem.
- the data is first written to the primary physical disk, and a replication request is sent to the standby disks, so that multiple backups of the data (three copies by default) are written on the distributed storage device.
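The storage-layer behavior can be sketched as follows, reusing the hypothetical SharedStorage interface from the earlier sketch; Disk is likewise a hypothetical abstraction over a physical device:

```java
import java.util.List;

interface Disk {
    String store(String name, byte[] data);
    byte[] load(String location);
    void copyFrom(Disk source, String location);   // replication request
}

// writes go to the primary physical disk first; copy requests then bring the
// standby disks up to date (three copies by default per the description above)
class DistributedStorageSubsystem implements SharedStorage {
    private final Disk primaryDisk;
    private final List<Disk> standbyDisks;

    DistributedStorageSubsystem(Disk primary, List<Disk> standbys) {
        primaryDisk = primary;
        standbyDisks = standbys;
    }

    @Override
    public String write(String name, byte[] data) {
        String location = primaryDisk.store(name, data);
        standbyDisks.forEach(d -> d.copyFrom(primaryDisk, location));
        return location;
    }

    @Override
    public byte[] read(String location) {
        return primaryDisk.load(location);
    }
}
```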
- the primary data node no longer needs to send the data block to the backup data node, but only sends an update request, thereby entering the “update layer” process.
- the backup data node referred to in this step is the first backup data node; a second backup data node is also shown in FIG. 4, and after the first backup data node completes its data write operation, it also sends an update request to the second backup data node.
- the update layer may include other processing procedures in addition to the processing of the following steps.
- after receiving the update request, the backup data node refreshes its view of the shared distributed storage subsystem according to the content of the update request; when the information of the data block can be read, the attribute information of the data block is saved, which completes the writing process of the data block. When the writing of the data block is completed, a notification message is sent to the name node and a response message is returned to the client.
- the ack queue removes the corresponding data packet.
- the Data Streamer flushes the remaining data packets into the pipeline and waits for the ack information; after receiving the last ack, it notifies the name node that the writing is complete.
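The client-side ack bookkeeping can be sketched as below; the sequence-number scheme and the class and method names are assumptions for illustration:

```java
import java.util.concurrent.LinkedBlockingDeque;

class AckQueue {
    // sent packets wait here until the pipeline acknowledges them
    private final LinkedBlockingDeque<Long> pending = new LinkedBlockingDeque<>();

    void onPacketSent(long seqno) {
        pending.addLast(seqno);
    }

    void onAck(long seqno) {
        pending.removeFirstOccurrence(seqno);   // drop the acknowledged packet
        if (pending.isEmpty()) {
            // last ack received: notify the name node that writing is complete
        }
    }
}
```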
- when the client completes the write operation of all data blocks, it calls the stream's close method to close the write stream.
- an embodiment of the present invention provides a data node, as shown in FIG. 5, including:
- the receiving module 501 is configured to receive a write operation instruction sent by the client, where the write operation instruction includes data to be written.
- the write circuit 502 is configured to write the data to be written into the first distributed storage subsystem.
- the sending module 503 is configured to send a notification message to the name node, and send a response message to the client.
- the attribute information of the data to be written includes a name and a size of the data to be written.
- the embodiment of the invention further provides a data node, as shown in FIG. 6, comprising:
- the receiving module 601 is configured to receive an update request, where the update request includes a storage location of the to-be-written data in the first distributed storage subsystem and attribute information of the to-be-written data.
- the processing module 602 is configured to search, in the first distributed storage subsystem, for the data to be written according to the storage location of the to-be-written data in the first distributed storage subsystem and the attribute information of the data to be written.
- the storage module 603 is configured to save the attribute information of the data to be written when the data to be written is found.
- the sending module 604 is configured to send a notification message to the name node, and send a response message to the client.
- a plurality of data nodes share a distributed storage subsystem, and the distributed storage subsystem includes a primary storage device and at least one backup storage device, so that data can be mutually backed up between the storage devices.
- the primary data node writes the data to be written into the first distributed storage subsystem, and then sends an update request to the first backup data node to notify it of the attribute information and the storage location of the data to be written; the first backup data node only needs to view the first distributed storage subsystem according to the update request, and, once it determines that the data to be written has been written into the first distributed storage subsystem, it saves the attribute information of the data to be written carried in the update request, which completes the writing process of the data to be written.
- in the prior art, all the data nodes need to write the data to be written into their corresponding distributed storage systems, whereas in the data scheduling method provided by the present invention only one data node performs the process of writing data into the first distributed storage subsystem, and the remaining data nodes use the data localization feature to view the shared distributed storage subsystem; this reduces the network transmission and storage time of data between the data nodes, thereby speeding up data writing.
- the present invention can be implemented by means of software plus the necessary general-purpose hardware, or of course by hardware alone, but in many cases the former is the better implementation.
- the part of the technical solution of the present invention that is essential or that contributes to the prior art can be embodied in the form of a software product stored in a readable storage medium, such as a computer floppy disk, hard disk, or optical disk; the software product includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform the methods described in the various embodiments of the present invention.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Quality & Reliability (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The present invention relates to the field of file systems, and discloses a file storage system, a data scheduling method, and a data node. The data scheduling method comprises: receiving, by a primary data node, a write operation instruction sent by a client, the write operation instruction comprising data to be written; writing, by the primary data node, the data to be written into a first distributed storage subsystem, sending a notification message to a name node, and sending a response message to the client; and sending, by the primary data node, an update request to a first backup data node, the update request comprising a storage location of the data to be written in the first distributed storage subsystem and attribute information of the data to be written. The present invention is applied to a file storage process.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510922155.5A (CN106873902B) | 2015-12-11 | 2015-12-11 | A file storage system, data scheduling method and data node (zh) |
CN201510922155.5 | 2015-12-11 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2017096942A1 (fr) | 2017-06-15 |
Family
ID=59012648
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2016/095532 | File storage system, data scheduling method and data node (fr) | 2015-12-11 | 2016-08-16 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN106873902B (fr) |
WO (1) | WO2017096942A1 (fr) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019000423A1 (fr) * | 2017-06-30 | 2019-01-03 | 华为技术有限公司 | Data storage method and device |
CN109358813B (zh) * | 2018-10-10 | 2022-03-04 | 郑州云海信息技术有限公司 | Capacity expansion method and device for a distributed storage system |
CN111881107B (zh) * | 2020-08-05 | 2022-09-06 | 北京计算机技术及应用研究所 | Distributed storage method supporting multi-file-system mounting |
CN114024979A (zh) * | 2021-10-25 | 2022-02-08 | 深圳市高德信通信股份有限公司 | A distributed edge computing data storage system |
CN115826879B (zh) * | 2023-02-14 | 2023-05-23 | 北京派网软件有限公司 | Data update method for storage nodes in a distributed storage system |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070156964A1 (en) * | 2005-12-30 | 2007-07-05 | Sistla Krishnakanth V | Home node aware replacement policy for caches in a multiprocessor system |
CN101741911A (zh) * | 2009-12-18 | 2010-06-16 | 中兴通讯股份有限公司 | Write operation method, system and node based on multi-replica coordination |
CN104598568A (zh) * | 2015-01-12 | 2015-05-06 | 浪潮电子信息产业股份有限公司 | An efficient, low-power-consumption offline storage system and method |
CN104917788A (zh) * | 2014-03-11 | 2015-09-16 | 中国移动通信集团公司 | A data storage method and device |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2011157156A2 (fr) * | 2011-06-01 | 2011-12-22 | 华为技术有限公司 | Operation method and device for a data storage system |
CN103853612A (zh) * | 2012-12-04 | 2014-06-11 | 中山大学深圳研究院 | Method for reading digital home content data based on distributed storage |
CN103714014B (zh) * | 2013-11-18 | 2016-12-07 | 华为技术有限公司 | Method and device for processing cached data |
- 2015-12-11: CN application CN201510922155.5A filed (published as CN106873902B), status: active
- 2016-08-16: WO application PCT/CN2016/095532 filed (published as WO2017096942A1), status: application filing
Also Published As
Publication number | Publication date |
---|---|
CN106873902B (zh) | 2020-04-28 |
CN106873902A (zh) | 2017-06-20 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 16872138; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 16872138; Country of ref document: EP; Kind code of ref document: A1 |