CN113296714A - Data storage system based on NAS protocol - Google Patents

Data storage system based on NAS protocol

Info

Publication number
CN113296714A
CN113296714A
Authority
CN
China
Prior art keywords
nas
data
node
target
nas node
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110674904.2A
Other languages
Chinese (zh)
Other versions
CN113296714B (en)
Inventor
刘志军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Hikvision System Technology Co Ltd
Original Assignee
Hangzhou Hikvision System Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Hikvision System Technology Co Ltd filed Critical Hangzhou Hikvision System Technology Co Ltd
Priority to CN202110674904.2A priority Critical patent/CN113296714B/en
Publication of CN113296714A publication Critical patent/CN113296714A/en
Application granted granted Critical
Publication of CN113296714B publication Critical patent/CN113296714B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0602Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/0604Improving or facilitating administration, e.g. storage management
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0602Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/0614Improving the reliability of storage systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0655Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F3/0656Data buffering arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0668Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/067Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]

Abstract

An embodiment of the invention provides a data storage system based on the NAS (network attached storage) protocol, relating to the technical field of data storage. The system comprises: a first NAS node, at least one second NAS node, a NAS management node, and a cloud storage subsystem. The first NAS node sends a first storage path to the NAS management node; the NAS management node determines a first target NAS node from the at least one second NAS node and feeds back first node information of the first target NAS node to the first NAS node; the first NAS node reports first metadata information of a data file to be stored to the first target NAS node based on the first node information; the first target NAS node stores the first metadata information; the first NAS node stores the data file to be stored to the cloud storage subsystem and reports metadata update information to the first target NAS node; the first target NAS node then updates the first metadata information. Compared with the prior art, the scheme provided by the embodiment of the invention reduces the complexity of expanding the original data storage system.

Description

Data storage system based on NAS protocol
Technical Field
The invention relates to the technical field of data storage, in particular to a data storage system based on an NAS protocol.
Background
Currently, with the continuous development of data storage technology, the volume of data to be stored keeps growing, so increasingly large data files need to be stored. For example, in a practical video recording scenario, a front-end video capture device may continuously write its captured video stream into a single very large data file.
However, when storing such huge data files, the limited storage capacity of individual storage devices means that the storage devices in many data storage systems cannot meet the storage requirements of these files.
In the related art, to meet the storage requirement for oversized data files, a new data storage system that can be expanded at any time and can meet performance and security requirements is virtualized at the bottom layer of the original data storage system, so as to expand the original system. The bottom layer of the original data storage system refers to the storage devices in that system which store the data of the oversized data files.
However, in the related art, to implement read/write operations between the new data storage system and the original one, additional development work is required to customize an application programming interface (API) that bridges the two systems, which makes the process of expanding the original data storage system complicated.
Disclosure of Invention
The embodiment of the invention aims to provide a data storage system based on the NAS protocol, so as to reduce the complexity of expanding the original data storage system. The specific technical scheme is as follows:
the embodiment of the invention provides a data storage system based on NAS protocol, which comprises: the system comprises a first network attached storage NAS node, at least one second NAS node, an NAS management node and a cloud storage subsystem;
the first NAS node is used for receiving a first storage path and a data file to be stored, which are sent by a first user side, and sending the first storage path to the NAS management node;
the NAS management node is configured to determine, from the at least one second NAS node, a first target NAS node corresponding to the received first storage path based on a correspondence between a preset storage path and the second NAS node, and feed back first node information of the first target NAS node to the first NAS node;
the first NAS node is further configured to report, to the first target NAS node, first metadata information of the data file to be stored based on the received first node information, so that the first target NAS node stores the received first metadata information; storing the data file to be stored in the cloud storage subsystem, and reporting metadata updating information to the first target NAS node after the data file to be stored is stored, so that the first target NAS node updates the stored first metadata information by using the received metadata updating information;
the cloud storage subsystem is configured to store the data file to be stored, which is sent by the first NAS node.
Optionally, in a specific implementation manner,
the first NAS node is further configured to receive a second storage path and file information of a data file to be read, which are sent by a second user side, and send the second storage path to the NAS management node;
the NAS management node is further configured to determine, based on the correspondence, a second target NAS node corresponding to the received second storage path from the at least one second NAS node, and feed back second node information of the second target NAS node to the first NAS node;
the first NAS node is further configured to obtain, based on the received second node information, second metadata information matched with the file information from the second target NAS node, read the data file to be read from the cloud storage subsystem based on the second metadata information, and feed back the obtained data file to be read to the second user end;
the cloud storage subsystem is further configured to send the data file to be read to the first NAS node.
Optionally, in a specific implementation manner, the data file to be stored includes: a plurality of sub data files, wherein each sub data file is sent by the first user end in each IO interaction process; the first NAS node stores the data file to be stored to the cloud storage subsystem, including:
aggregating the data in each sub data file into first data blocks according to the offset address of each sub data file within the data file to be stored; the data volume of each first data block is a preset first data volume;
according to the sequence of the aggregation time of each first data block from early to late, each first data block is sent to a preset asynchronous write cache queue;
and according to the first-in first-out sequence, asynchronously storing each first data block in the asynchronous write cache queue to the cloud storage subsystem.
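The aggregate-then-queue flow above can be sketched as follows. This is an illustrative sketch only, not the patented implementation: the 4 MiB block size (the "preset first data volume") and all function names are assumptions.

```python
from collections import deque

# Assumed "preset first data volume": 4 MiB per aggregated first data block.
BLOCK_SIZE = 4 * 1024 * 1024

def aggregate_blocks(sub_files):
    """Aggregate sub data files into fixed-size first data blocks.

    sub_files: list of (offset, data) pairs, where offset is the sub data
    file's offset address within the overall data file to be stored.
    Returns (block_offset, block_bytes) pairs in offset order, so each
    block still carries its offset address within the file.
    """
    sub_files = sorted(sub_files, key=lambda s: s[0])
    end = max(off + len(data) for off, data in sub_files)
    buf = bytearray(end)
    for off, data in sub_files:
        buf[off:off + len(data)] = data          # lay data out at its offset
    return [(start, bytes(buf[start:start + BLOCK_SIZE]))
            for start in range(0, end, BLOCK_SIZE)]

# FIFO asynchronous write cache queue.
write_queue = deque()

def enqueue_blocks(blocks):
    """Queue blocks in aggregation order (earliest-aggregated first)."""
    for block in blocks:
        write_queue.append(block)

def drain_to_cloud(store):
    """Asynchronously flush the queue to cloud storage, first-in first-out."""
    while write_queue:
        offset, data = write_queue.popleft()
        store(offset, data)
```

The FIFO queue decouples the client-facing IO interactions from the cloud writes, which is what makes the final store step asynchronous.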
Optionally, in a specific implementation manner, each first data block carries an offset address of the first data block in the data to be stored; the asynchronous storage of each first data block in the asynchronous write cache queue to the cloud storage subsystem by the first NAS node includes:
determining, according to the offset address carried by each first data block and a preset fragment size, the storage object in the cloud storage subsystem used for storing that first data block, and storing the first data block into the determined storage object; wherein the fragment size is the size of each fragment used when the data file to be stored is stored in the cloud storage subsystem in fragments.
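The mapping from a block's offset address to its storage object reduces to integer arithmetic over the fragment size; a minimal sketch, where the 64 MiB fragment size is an assumed value for illustration:

```python
# Assumed fragment size: 64 MiB per fragment (storage object) in the cloud
# storage subsystem.
FRAGMENT_SIZE = 64 * 1024 * 1024

def object_for_block(block_offset: int) -> int:
    """Index of the storage object (fragment) that holds the first data
    block carrying this offset address."""
    return block_offset // FRAGMENT_SIZE

def offset_within_object(block_offset: int) -> int:
    """Where the block's data starts inside its storage object."""
    return block_offset % FRAGMENT_SIZE
```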
Optionally, in a specific implementation manner, the reading, by the first NAS node, the to-be-read data file from the cloud storage subsystem based on the second metadata information, and feeding back the obtained to-be-read data file to the second user side includes:
determining, based on the second metadata information, each second data block in the cloud storage subsystem that contains data of the data file to be read; the data volume of each second data block is a preset second data volume;
sending a target data block including the initial data of the data file to be read in each second data block to a preset synchronous read cache, and feeding back the data in the target data block to the second user end;
according to the offset address of the included data in the cloud storage subsystem, sending other data blocks except the target data block in the second data blocks to a preset asynchronous read cache queue;
and after the data in the target data block has been fed back, sending, in first-in first-out order, the first data block in the asynchronous read cache queue to the synchronous read cache as a new target data block, and returning to the step of feeding back the data in the target data block to the second user end, until a data reading stop condition is met.
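The synchronous-cache/asynchronous-queue read pipeline described in the steps above can be sketched as follows. This is an illustrative sketch under stated assumptions: the `feed_back` callback and the in-memory representation of blocks are hypothetical.

```python
from collections import deque

def read_pipeline(second_blocks, feed_back):
    """Feed second data blocks back to the client in offset order.

    second_blocks: list of (offset, data) blocks covering the file to read.
    feed_back: callback that returns one block's data to the second user end.
    """
    # Order blocks by the offset address of the data they contain.
    blocks = deque(sorted(second_blocks, key=lambda b: b[0]))
    sync_read_cache = blocks.popleft()   # target block: holds the initial data
    async_read_queue = blocks            # remaining blocks, FIFO
    while True:
        feed_back(sync_read_cache)       # feed the target block's data back
        if not async_read_queue:
            break                        # stop condition: everything fed back
        # First queued block becomes the new target data block.
        sync_read_cache = async_read_queue.popleft()
```

The split mirrors the text: only the block currently being served sits in the synchronous read cache, while the rest wait in the asynchronous queue in offset order.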
Optionally, in a specific implementation manner, the feeding back, by the first NAS node, the data in the target data block to the second user end includes:
acquiring a target offset address and a target data volume of data requested by an interface between the first NAS node and the second user end;
reading target data from the target data block according to the target offset address and the target data volume, and feeding the target data back to the second user end; and returning to the step of acquiring the target offset address and the target data volume of the data requested by the interface between the first NAS node and the second user end.
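Serving a requested (target offset address, target data volume) pair out of the current target data block reduces to a slice computation; a minimal sketch, with all names hypothetical:

```python
def read_from_block(block_offset, block_data, target_offset, target_size):
    """Slice the requested range out of the current target data block.

    block_offset: offset address of the target data block within the file.
    target_offset / target_size: the range currently requested through the
    interface between the first NAS node and the second user end.
    """
    start = target_offset - block_offset   # position inside this block
    return block_data[start:start + target_size]
```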
Optionally, in a specific implementation manner, the data reading stop condition includes:
the first NAS node receives a data reading stop instruction sent by the second user end; or,
all data in the data file to be read has been fed back to the second user end.
Optionally, in a specific implementation manner, the first NAS node is mounted with a first database for storing the corresponding relationship, and each second NAS node is mounted with a second database for storing metadata information of data files.
Optionally, in a specific implementation manner, the first NAS node includes: a first NAS master device and at least one first NAS standby device; each first NAS standby device is used for upgrading to a new first NAS master device when the first NAS master device fails;
each second NAS node includes: a second NAS master device and at least one second NAS standby device; each second NAS standby device is used for upgrading to a new second NAS master device when the second NAS master device fails.
The embodiment of the invention has the following beneficial effects:
as can be seen from the above, by applying the scheme provided in the embodiment of the present invention, a data Storage system based on an NAS (Network Attached Storage) protocol can be established. The system can comprise a first NAS node, at least one second NAS node, a NAS management node and a cloud storage subsystem. The NAS management node stores a correspondence between a storage path of each user side and each second NAS node, and each NAS node stores metadata information of a data file sent by the user side corresponding to the NAS management node, and further, data in each data file is stored in the cloud storage subsystem.
Therefore, when the data file to be stored sent by the first user side is stored, the first NAS node can determine a second NAS node used for storing metadata information of the data file to be stored through the NAS management node, and further, after the metadata information of the data file to be stored is reported to the second NAS node, the data file to be stored can be stored in the cloud storage subsystem, and the metadata updating information is reported to the second NAS node, so that the second NAS node updates the stored metadata information of the data file to be stored, and therefore, the storage of the data file to be stored sent by the first user side can be completed.
Based on this, by applying the scheme provided by the embodiment of the invention, the NAS node can dock the user side with the cloud storage subsystem through the NAS protocol, so that the cloud storage subsystem is provided to the user side in a network-sharing mode. The cloud storage subsystem can be expanded according to data storage requirements, so it can meet the storage requirements of oversized data files; and since the NAS node docks the user side with the cloud storage subsystem through the NAS protocol, the standard NAS protocol can be used directly.
That is to say, a data storage system comprising an NAS node and a cloud storage subsystem can be constructed directly by using a standard NAS protocol without additionally developing an API to expand an original data storage system, so that an improvement process of the original data storage system is greatly simplified, and the improved original data system can meet storage requirements of oversized data files.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other embodiments can be obtained by using the drawings without creative efforts.
Fig. 1 is a schematic structural diagram of a data storage system based on an NAS protocol according to an embodiment of the present invention;
fig. 2 is a signaling interaction diagram of a data storage system based on an NAS protocol in a data storage process according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of one embodiment of S207 of FIG. 2;
fig. 4 is a signaling interaction diagram of a data storage system based on an NAS protocol in a data reading process according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a specific implementation of the process shown in FIG. 4.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived from the embodiments given herein by one of ordinary skill in the art, are within the scope of the invention.
In the related art, to meet the storage requirement for oversized data files, a new data storage system that can be expanded at any time and can meet performance and security requirements is virtualized at the bottom layer of the original data storage system, so as to expand the original system. The bottom layer of the original data storage system refers to the storage devices in that system which store the data of the oversized data files. However, in the related art, to implement read/write operations between the new data storage system and the original one, additional development work is required to customize an application programming interface (API) that bridges the two systems, which makes the process of expanding the original data storage system complicated.
In order to solve the above technical problem, an embodiment of the present invention provides a data storage system based on a NAS protocol. The data storage system includes: the system comprises a first network attached storage NAS node, at least one second NAS node, an NAS management node and a cloud storage subsystem;
the first NAS node is used for receiving a first storage path and a data file to be stored, which are sent by a first user side, and sending the first storage path to the NAS management node;
the NAS management node is configured to determine, from the at least one second NAS node, a first target NAS node corresponding to the received first storage path based on a correspondence between a preset storage path and the second NAS node, and feed back first node information of the first target NAS node to the first NAS node;
the first NAS node is further configured to report, to the first target NAS node, first metadata information of the data file to be stored based on the received first node information, so that the first target NAS node stores the received first metadata information; storing the data file to be stored in the cloud storage subsystem, and reporting metadata updating information to the first target NAS node after the data file to be stored is stored, so that the first target NAS node updates the stored first metadata information by using the received metadata updating information;
the cloud storage subsystem is configured to store the data file to be stored, which is sent by the first NAS node.
As can be seen from the above, by applying the scheme provided in the embodiment of the present invention, a data Storage system based on a Network Attached Storage (NAS) protocol can be established. The system can comprise a first NAS node, at least one second NAS node, a NAS management node and a cloud storage subsystem. The NAS management node stores a correspondence between a storage path of each user side and each second NAS node, and each NAS node stores metadata information of a data file sent by the user side corresponding to the NAS management node, and further, data in each data file is stored in the cloud storage subsystem.
Therefore, when the data file to be stored sent by the first user side is stored, the first NAS node can determine a second NAS node used for storing metadata information of the data file to be stored through the NAS management node, and further, after the metadata information of the data file to be stored is reported to the second NAS node, the data file to be stored can be stored in the cloud storage subsystem, and the metadata updating information is reported to the second NAS node, so that the second NAS node updates the stored metadata information of the data file to be stored, and therefore, the storage of the data file to be stored sent by the first user side can be completed.
Based on this, by applying the scheme provided by the embodiment of the invention, the NAS node can dock the user side with the cloud storage subsystem through the NAS protocol, so that the cloud storage subsystem is provided to the user side in a network-sharing mode. The cloud storage subsystem can be expanded according to data storage requirements, so it can meet the storage requirements of oversized data files; and since the NAS node docks the user side with the cloud storage subsystem through the NAS protocol, the standard NAS protocol can be used directly.
That is to say, a data storage system comprising an NAS node and a cloud storage subsystem can be constructed directly by using a standard NAS protocol without additionally developing an API to expand an original data storage system, so that an improvement process of the original data storage system is greatly simplified, and the improved original data system can meet storage requirements of oversized data files.
Hereinafter, a data storage system based on NAS protocol according to an embodiment of the present invention will be specifically described with reference to the accompanying drawings.
Fig. 1 is a schematic structural diagram of a data storage system based on a NAS protocol according to an embodiment of the present invention, and as shown in fig. 1, the system includes: a first NAS node 101, at least one second NAS node 102, a NAS management node 103 and a cloud storage subsystem 104.
The NAS management node 103 stores a corresponding relationship between a preset storage path and a second NAS node.
For each user side, a folder for storing data files is preset in the user side, so that when the user side acquires the data files to be stored, the data files to be stored are firstly stored in the folder, and then the data files to be stored are sent to the first NAS node from the folder.
For the folder, a storage path related to the folder exists in the user side, and after the user side acquires the data file to be stored, the data file to be stored is stored into the folder according to the storage path.
Since the metadata information of the data file to be stored sent by different user terminals is stored in different second NAS nodes, a corresponding relationship between the storage path of each user terminal and the second NAS node can be established in the NAS management node, and the corresponding relationship can be characterized as follows: and aiming at each user side, the second NAS node is used for storing the metadata information of the file to be stored sent by the user side.
The storage path of each user side may be referred to as a directory of each user side, so that the NAS management node may store a preset corresponding relationship between the directory and the second NAS node.
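The correspondence the NAS management node consults can be modeled as a simple map from storage path (directory) to second NAS node; the paths and node names below are purely hypothetical:

```python
# Hypothetical correspondence table: storage path (directory) -> second NAS node.
PATH_TO_NODE = {
    "/share/user_a": "second-nas-node-1",
    "/share/user_b": "second-nas-node-2",
}

def resolve_target_node(storage_path: str) -> str:
    """Return the second NAS node that stores metadata for this storage path."""
    node = PATH_TO_NODE.get(storage_path)
    if node is None:
        raise KeyError(f"no second NAS node registered for {storage_path}")
    return node
```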
Optionally, in a specific implementation manner, the first NAS node may be configured to mount a first database, and further, the first database may be configured to store a corresponding relationship between the preset storage path and the second NAS node.
For example, the first database may be a MariaDB database storing the correspondence between the preset storage paths and the second NAS nodes.
Optionally, in another specific implementation manner, the first NAS node may be a node group formed by a plurality of devices, where the first NAS node includes: a first NAS master device and at least one first NAS standby device. In this way, when the first NAS master device is operating normally, the first NAS master device completes data storage and read operations; when the first NAS master device fails, in order to keep data storage and read operations running normally, any one of the at least one first NAS standby device may be upgraded to a new first NAS master device, which then completes the data storage and read operations.
When the first NAS master device fails, one first NAS standby device may be upgraded to a new first NAS master device at random; alternatively, the first NAS standby device with the highest preset priority may be upgraded to the new first NAS master device. Of course, a first NAS standby device may also be upgraded to a new first NAS master device according to other rules.
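The priority-based promotion rule can be sketched as selecting the highest-priority standby device. This is an illustrative sketch; it assumes a larger priority number means higher priority, which the text does not specify.

```python
def promote_standby(standby_devices):
    """Pick the standby device with the highest priority as the new master.

    standby_devices: list of (device_id, priority) pairs; a larger priority
    number is assumed to mean higher priority.
    """
    if not standby_devices:
        raise RuntimeError("no standby device available for failover")
    return max(standby_devices, key=lambda d: d[1])[0]
```

The same rule applies unchanged to promoting a second NAS standby device when a second NAS master device fails.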
Each second NAS node is configured to store metadata information of the data files to be stored that are sent by the user side whose storage path corresponds to that second NAS node. That is, each second NAS node corresponds to a storage path, and the metadata information of the data files to be stored sent by the user side having that storage path is stored in that second NAS node.
Optionally, in a specific implementation manner, each second NAS node may be configured to mount a second database, and further, the second database may be configured to store metadata information of the data file to be stored, where the metadata information is sent by the user side of the storage path corresponding to the second NAS node.
For example, the second database may be a MariaDB database in which the metadata information of the files to be stored sent by each user end is stored.
Optionally, in another specific implementation manner, each second NAS node may be a node group formed by a plurality of devices, where the second NAS node includes: a second NAS master device and at least one second NAS standby device. In this way, when the second NAS master device is operating normally, the second NAS master device completes data storage and read operations; when the second NAS master device fails, in order to keep data storage and read operations running normally, any one of the at least one second NAS standby device may be upgraded to a new second NAS master device, which then completes the data storage and read operations.
When the second NAS master device fails, one second NAS standby device may be upgraded to a new second NAS master device at random; alternatively, the second NAS standby device with the highest preset priority may be upgraded to the new second NAS master device. Of course, a second NAS standby device may also be upgraded to a new second NAS master device according to other rules.
The cloud storage subsystem is used for storing data in data files to be stored, which are sent by each user side, and the cloud storage subsystem may include a plurality of storage devices, such as magnetic disks. And the storage equipment in the cloud storage subsystem can be expanded according to the requirement of data storage. For example, a storage device is added to the cloud storage subsystem. The cloud storage subsystem may be any cloud storage system, and the embodiments of the present invention are not limited in this respect.
Hereinafter, a data storage process of a data storage system based on the NAS protocol according to an embodiment of the present invention is specifically described.
Wherein, in the data storage process:
the first NAS node 101 is configured to receive a first storage path and a data file to be stored, which are sent by a first user end, and send the first storage path to the NAS management node 103;
the NAS management node 103 is configured to determine, from the at least one second NAS node 102, a first target NAS node 102 corresponding to the received first storage path based on a correspondence between a preset storage path and the second NAS node 102, and feed back first node information of the first target NAS node 102 to the first NAS node 101;
the first NAS node 101 is further configured to report, to the first target NAS node 102, first metadata information of a data file to be stored based on the received first node information, so that the first target NAS node 102 stores the received first metadata information; storing the data file to be stored in the cloud storage subsystem 104, and reporting metadata update information to the first target NAS node 102 after the data file to be stored is stored, so that the first target NAS node 102 updates the stored first metadata information by using the received metadata update information;
the cloud storage subsystem 104 is configured to store the data file to be stored, which is sent by the first NAS node 101.
To facilitate understanding of the data storage process of the data storage system based on the NAS protocol provided in the foregoing embodiment of the present invention, as shown in fig. 2, a signaling interaction diagram of the data storage system based on the NAS protocol provided in the embodiment of the present invention in the data storage process is shown.
S201: the first NAS node 101 receives a first storage path and a data file to be stored, which are sent by a first user end;
after acquiring the data file to be stored, the first user side may store the data file to be stored in a preset folder, and determine a first storage path that is pre-constructed and is related to the folder. Further, the first user end may send the first storage path and the data file to be stored to the first NAS node 101. In this way, the first NAS node 101 may receive the first storage path and the data file to be stored, which are sent by the first user end.
Optionally, the first user end may send the first storage path and the data file to be stored to the first NAS node 101 through a pre-installed NAS client, using a specified protocol applicable to the first NAS node 101. The specified protocol may be the CIFS (Common Internet File System) protocol, the NFS (Network File System) protocol, the FTP (File Transfer Protocol) protocol, or any other protocol that supports communication between the NAS client in the first user end and the first NAS node 101.
S202: the first NAS node 101 sends the first storage path to the NAS management node 103;
since the metadata information of the data file to be stored sent by different user terminals is stored in different second NAS nodes 102, when the data file to be stored sent by the first user terminal is stored, the second NAS node 102 for storing the metadata information of the data file to be stored needs to be determined. Furthermore, since the correspondence relationship between the preset storage path and the second NAS node 102 is stored in the NAS management node 103, the first NAS node 101 may request the NAS management node 103 for the second NAS node 102 corresponding to the first storage path.
Based on this, after receiving the first storage path and the data file to be stored, the first NAS node 101 may send the first storage path to the NAS management node 103.
S203: the NAS management node 103 determines, from the at least one second NAS node 102, a first target NAS node 102 corresponding to the received first storage path based on a correspondence between preset storage paths and the second NAS nodes 102;
after receiving the first storage path, the NAS management node 103 may search the second NAS node 102 corresponding to the first storage path in the preset correspondence between the storage path and the second NAS node 102, and may further determine the searched second NAS node 102 as the first target NAS node 102 corresponding to the received first storage path.
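The lookup in step S203 can be modeled as a simple mapping from storage paths to node information. The sketch below is illustrative only: the dictionary contents, function name, and node identifiers are assumptions, not details from this embodiment.

```python
# Hypothetical model of the NAS management node's lookup in step S203.
# The paths and node identifiers below are invented for illustration.
path_to_node = {
    "/export/user_a": {"node_id": "nas-meta-02"},
    "/export/user_b": {"node_id": "nas-meta-03"},
}

def find_target_nas_node(storage_path):
    """Look up the second NAS node registered for a given storage path."""
    node_info = path_to_node.get(storage_path)
    if node_info is None:
        raise KeyError(f"no second NAS node registered for {storage_path}")
    return node_info
```

The node information returned here plays the role of the first node information fed back to the first NAS node 101 in step S204.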
S204: the NAS management node 103 feeds back the first node information of the first target NAS node 102 to the first NAS node 101;
after determining the first target NAS node 102 corresponding to the received first storage path, the NAS management node 103 may feed back the first node information of the first target NAS node 102 to the first NAS node 101, so that the first NAS node 101 may report the metadata information of the data file to be stored to the first target NAS node 102. In this way, the first target NAS node 102 may store the metadata information of the data file to be stored.
Optionally, the first node information of the first target NAS node 102 may include: an identification of the first target NAS node 102, an offset address of available storage space of the first target NAS node 102, and the like. In this regard, the embodiment of the present invention is not particularly limited as long as the first NAS node 101 can determine the first target NAS node 102 in the at least one second NAS node 102 according to the first node information.
S205: the first NAS node 101 reports first metadata information of a data file to be stored to the first target NAS node 102 based on the received first node information;
after receiving the first node information of the first target NAS node 102, the first NAS node 101 may report the first metadata information of the data file to be stored to the first target NAS node 102.
Optionally, the first metadata information of the data file to be stored may include: and path information, bucket information, key value and other information of the data file to be stored. In this regard, the embodiment of the present invention is not particularly limited, as long as the data file to be stored can be determined in the cloud storage subsystem according to the first metadata information.
S206: the first target NAS node 102 stores the received first metadata information;
after receiving the first metadata information of the data file to be stored sent by the first NAS node 101, the first target NAS node 102 may store the first metadata information.
S207: the first NAS node 101 stores the data file to be stored to the cloud storage subsystem 104;
after sending the first metadata information of the data file to be stored to the first target NAS node 102, the first NAS node 101 may store the data file to be stored in the cloud storage subsystem 104.
Optionally, in a specific implementation manner, the data file to be stored includes: and each sub data file is sent by the first user end in each IO interaction process.
Accordingly, in this specific implementation manner, the step S207 may include the following steps A1-A3:
step A1: aggregating the data in each subdata file into each first data block according to the offset address of each subdata file in the data file to be stored; the data volume of each first data block is a preset first data volume;
step A2: according to the sequence of the aggregation time of each first data block from early to late, each first data block is sent to a preset asynchronous write cache queue;
step A3: and according to the first-in first-out sequence, asynchronously storing each first data block in the asynchronous write cache queue to the cloud storage subsystem.
It can be understood that, because the write interface between the first user end and the first NAS node 101 has a certain traffic limit, the data in the data file to be stored is not sent from the first user end to the first NAS node 101 at one time, but the data file to be stored is divided into a plurality of sub data files for transmission according to the data amount that can be written by the write interface between the first user end and the first NAS node 101 each time. Furthermore, in each IO interaction process, the first user side sends one subdata file to the first NAS node 101. Wherein, the data volume of each subdata file is as follows: the amount of data that can be written at a time by the write interface between the first user side and the first NAS node 101.
In this way, after receiving each sub data file, in order to ensure that the ordering of each data in the data file to be stored in the cloud storage subsystem is the same as the ordering of each data in the data file to be stored acquired by the first user side, the first NAS node 101 may aggregate the data in each sub data file into each first data block according to the offset address of each sub data file in the data file to be stored. The data volumes of the first data blocks are the same and are preset first data volumes.
And then, the first data blocks can be sent to a preset asynchronous write cache queue according to the sequence of the aggregation time of the first data blocks from early to late. In this way, the first data blocks in the asynchronous write buffer queue are asynchronously stored in the cloud storage subsystem in the first-in first-out order.
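Steps A1-A3 can be sketched as follows, assuming (for illustration only) that each sub data file arrives tagged with its offset and that the sub data files are contiguous; the 4M block size is taken from the example in fig. 3, while the function name and data are invented.

```python
from collections import deque

FIRST_DATA_VOLUME = 4 * 1024 * 1024  # preset first data volume (4M in fig. 3)

def aggregate_and_enqueue(sub_files, block_size=FIRST_DATA_VOLUME):
    """Step A1: order sub data files by their offset in the data file to be
    stored and aggregate their data; steps A2-A3: cut the aggregate into
    fixed-size first data blocks and queue them FIFO for asynchronous writing."""
    buf = bytearray()
    for offset, data in sorted(sub_files):   # sorting by offset restores order
        buf.extend(data)
    write_queue = deque()                    # asynchronous write cache queue
    for start in range(0, len(buf), block_size):
        write_queue.append(bytes(buf[start:start + block_size]))
    return write_queue                       # consumed first-in first-out
```

A consumer would then pop blocks from the front of the queue and store them to the cloud storage subsystem, preserving the first-in first-out order of step A3.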
Optionally, in an embodiment of this specific implementation manner, each first data block carries an offset address of the first data block in the data file to be stored. Based on this, the above step A3 may include the following step A31:
step A31: determining a storage object used for storing the first data block in the cloud storage subsystem according to the offset address carried by each first data block and the preset fragment size, and storing the first data block into the determined storage object;
Wherein, the fragment size is: the size of each fragment when the data file to be stored is stored in the cloud storage subsystem according to fragments.
In this embodiment, each data file to be stored may be divided into a plurality of fragments to be stored in the cloud storage subsystem, where each fragment has a preset size, that is, a preset fragment size.
In this way, the first NAS node 101 may determine, by using the offset address carried by each first data block and the preset fragmentation size, a sequence number of a storage object used for storing the first data block, and thus store the first data block into the storage object having the determined sequence number.
Optionally, the sequence number of the storage object for storing each first data block may be: the ratio of the offset address carried by the first data block to a predetermined fragment size.
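The sequence-number rule above reduces to integer division of the carried offset by the fragment size. A minimal sketch, with an assumed 64M fragment size (the concrete value is an illustration, not from this embodiment):

```python
FRAGMENT_SIZE = 64 * 1024 * 1024  # assumed preset fragment size (64M), illustrative

def storage_object_index(block_offset, fragment_size=FRAGMENT_SIZE):
    """Sequence number of the storage object for a first data block: the
    integer ratio of the block's carried offset address to the fragment size."""
    return block_offset // fragment_size
```

For example, a first data block carrying offset address 130M falls into the storage object with sequence number 2, since 130M divided by 64M is 2 (integer part).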
Illustratively, as shown in fig. 3, a schematic diagram of a specific embodiment of the specific implementation manner shown in the above steps A1-A3 is provided.
The first NAS node may cyclically call the pwrite interface in the fuse system installed in the first user end, obtaining sub data files each with a data volume of 128K from the first user end. In the IO shaping cache, according to the offset address of each sub data file in the data file to be stored, the data in each sub data file is aggregated into first data blocks, each with a data volume of 4M. Then, according to the sequence of the aggregation time of each first data block from early to late, each first data block is sent to a preset asynchronous write cache queue, namely, to the asynchronous IO cache queue in fig. 3, and according to the first-in first-out sequence, each first data block in the asynchronous write cache queue is asynchronously stored to the cloud storage subsystem through the pwrite interface.
S208: after the data file to be stored is stored, the first NAS node 101 reports metadata update information to the first target NAS node 102;
after the data file to be stored is stored, the offset address of the data file to be stored in the cloud storage subsystem and the size of the data file to be stored can be determined, and the offset address and the size may differ from the first metadata information that the first target NAS node 102 stored in the above step S206.
Based on this, after the data file to be stored is stored, the first NAS node 101 may report the metadata update information to the first target NAS node 102.
Optionally, the condition under which storage of the data file to be stored is completed may be: the time length for which the first user end has been receiving the data in the data file to be stored reaches a preset time length. That is, the data file to be stored is generated according to a preset cycle.
Optionally, the condition under which storage of the data file to be stored is completed may also be: the storage space preset in the cloud storage subsystem for storing the data file to be stored sent by the first user end is fully occupied. That is, the sizes of the different data files to be stored sent by the first user end are the same.
S209: the first target NAS node 102 updates the stored first metadata information with the received metadata update information.
Upon receiving the metadata update information, the first target NAS node 102 may update the first metadata information stored in step S206 by using the received metadata update information.
At this point, a complete data storage process ends.
Hereinafter, a data reading process of a data storage system based on the NAS protocol according to an embodiment of the present invention is specifically described.
Wherein, in the data reading process:
the first NAS node 101 is further configured to receive a second storage path and file information of a data file to be read, which are sent by a second user end, and send the second storage path to the NAS management node 103;
the NAS management node 103 is further configured to determine, based on the correspondence, a second target NAS node 102 corresponding to the received second storage path from the at least one second NAS node 102, and feed back second node information of the second target NAS node 102 to the first NAS node 101;
the first NAS node 101 is further configured to obtain, based on the received second node information, second metadata information matched with the file information from the second target NAS node 102, read, based on the second metadata information, a data file to be read from the cloud storage subsystem 104, and feed back the obtained data file to be read to the second user end;
the cloud storage subsystem 104 is further configured to send the data file to be read to the first NAS node 101.
To facilitate understanding of the data reading process of the data storage system based on the NAS protocol according to the foregoing embodiment of the present invention, as shown in fig. 4, a signaling interaction diagram of the data storage system based on the NAS protocol in the data reading process according to the embodiment of the present invention is provided.
S401: the first NAS node 101 receives a second storage path sent by a second user end and file information of a data file to be read;
when a user using the second user end wishes to read a certain data file in the cloud storage subsystem, the user may input the file information of the data file to be read, for example, the key value information of the data file, into the second user end. Further, the second user end may send the second storage path and the file information of the data file to be read to the first NAS node 101. In this way, the first NAS node 101 may receive the second storage path and the file information of the data file to be read, which are sent by the second user end.
S402: the first NAS node 101 sends the second storage path to the NAS management node 103;
since the metadata information of the data files to be stored sent by different user ends is stored in different second NAS nodes 102, when the data file to be read is read, the data file to be read needs to be determined in the cloud storage subsystem according to its metadata information. Therefore, when reading the data file to be read, the second NAS node 102 that stores the metadata information of the data file to be read needs to be determined. Furthermore, since the correspondence relationship between the preset storage path and the second NAS node 102 is stored in the NAS management node 103, the first NAS node 101 may request, from the NAS management node 103, the second NAS node 102 corresponding to the second storage path.
Based on this, after receiving the second storage path and the file information of the data file to be read, the first NAS node 101 may send the second storage path to the NAS management node 103.
S403: the NAS management node 103 determines, from the at least one second NAS node 102, a second target NAS node 102 corresponding to the received second storage path based on the correspondence;
after receiving the second storage path, the NAS management node 103 may search the second NAS node 102 corresponding to the second storage path in the preset correspondence between the storage path and the second NAS node 102, and may further determine the searched second NAS node 102 as the second target NAS node 102 corresponding to the received second storage path.
S404: the NAS management node 103 feeds back the second node information of the second target NAS node 102 to the first NAS node 101;
after determining the second target NAS node 102 corresponding to the received second storage path, the NAS management node 103 may feed back the second node information of the second target NAS node 102 to the first NAS node 101, so that the first NAS node 101 may obtain, from the second target NAS node 102, the second metadata information matched with the file information of the data file to be read.
Optionally, the second node information of the second target NAS node 102 may include: an identification of the second target NAS node 102, an offset address of available storage space of the second target NAS node 102, and the like. In this regard, the embodiment of the present invention is not particularly limited as long as the first NAS node 101 can determine the second target NAS node 102 in the at least one second NAS node 102 according to the second node information.
S405: the first NAS node 101 acquires, based on the received second node information, second metadata information matching the file information from the second target NAS node 102;
after receiving the second node information of the second target NAS node 102, the first NAS node 101 may search, in the metadata information stored in the second target NAS node 102, for the second metadata information that matches the file information of the data file to be read, and thereby obtain the second metadata information matched with the file information of the data file to be read.
Optionally, the second metadata information of the data file to be read may include: and path information, bucket information, key value and other information of the data file to be read. In this regard, the embodiment of the present invention is not specifically limited, as long as the data file to be read can be determined in the cloud storage subsystem according to the second metadata information.
S406: the first NAS node 101 reads a data file to be read from the cloud storage subsystem 104 based on the second metadata information;
after the second metadata information matched with the file information of the data file to be read is acquired, the first NAS node 101 may determine the data file to be read from the cloud storage subsystem 104 based on the second metadata information, and then read the data file to be read.
S407: the first NAS node 101 feeds back the acquired data file to be read to the second user side.
In this way, after the data file to be read is read out, the first NAS node 101 may feed back the acquired data file to be read to the second user end.
Optionally, in a specific implementation manner, the reading, by the first NAS node 101, of the data file to be read from the cloud storage subsystem 104 based on the second metadata information, and the feeding back of the acquired data file to be read to the second user end, may include the following steps B1-B5:
step B1: determining each second data block containing a data file to be read in the cloud storage subsystem based on the second metadata information;
the data volume of each second data block is a preset second data volume;
after the data file to be read is determined in the cloud storage subsystem according to the second metadata information, each second data block containing the data file to be read can be determined in the cloud storage subsystem according to the preset second data volume.
Each second data block may also include data that does not belong to the data file to be read.
Step B2: sending a target data block, namely the second data block including the initial data of the data file to be read, to a preset synchronous read cache;
since data viewing usually starts from the initial data of a data file, after each second data block containing the data file to be read is determined, the target data block including the initial data of the data file to be read among the second data blocks can be sent to a preset synchronous read cache.
Step B3: feeding back the data in the target data block to the second user end;
after sending the target data block to the preset synchronous read cache, the first NAS node 101 may feed back the data in the target data block to the second user end.
Optionally, in an embodiment of this specific implementation manner, the step B3 may include the following steps B31-B32:
step B31: acquiring a target offset address and a target data volume of data requested by an interface between a first NAS node and a second user end;
step B32: reading target data from the target data block according to the target offset address and the target data amount, and feeding the target data back to the second user end; return to step B31.
It can be understood that, because the read interface between the second user end and the first NAS node 101 has a certain traffic limit, the data in the target data block is not fed back from the first NAS node 101 to the second user end at one time; instead, the target data block is divided into multiple groups of target data for transmission according to the data amount that can be read by the read interface between the second user end and the first NAS node 101 each time. Furthermore, in each data transmission process, the first NAS node 101 may obtain the target offset address and the target data volume of the data requested by the interface between the first NAS node and the second user end, read the target data from the target data block according to the target offset address and the target data volume, and feed the target data back to the second user end. In this way, when the data in the target data block has not all been fed back to the second user end, the first NAS node 101 may return to perform step B31 described above.
Wherein, each time step B31 is executed, the target offset address changes as the amount of data in the target data block that has been fed back to the second user end increases.
Step B4: according to the offset address of the included data in the cloud storage subsystem, sending other data blocks except the target data block in the second data blocks to a preset asynchronous read cache queue;
in order to ensure that the ordering of each data in the to-be-read data file fed back to the second user is the same as the ordering of each data of the to-be-read data file stored in the cloud storage subsystem, the first NAS node 101 may send each data block except the target data block in each second data block to a preset asynchronous read cache queue according to the offset address of the data included in the second data block in the cloud storage subsystem.
Step B5: and after the data feedback in the target data block is finished, sending the first second data block in the asynchronous read cache to the synchronous read cache as a new target data block according to the first-in first-out sequence, and returning to the step B3 until the data reading stop condition is met.
After the data feedback in the target data block is completed, the first NAS node 101 may send the first second data block in the asynchronous read cache to the synchronous read cache as a new target data block according to a first-in first-out sequence, that is, send the second data block with the earliest storage time in the asynchronous read cache to the synchronous read cache as a new target data block. In this way, the step B3 is returned to, and the data in the target data block is fed back to the second user side until the data reading stop condition is satisfied.
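Steps B2-B5 amount to a double-buffered read-ahead: one target data block being drained from the synchronous read cache chunk by chunk, with the remaining second data blocks queued FIFO in the asynchronous read cache. A minimal sketch, assuming the per-request target data volume is fixed and the blocks are already ordered by offset; all names and sizes are illustrative:

```python
from collections import deque

def feed_back_blocks(second_data_blocks, target_data_volume=128 * 1024):
    """Steps B2-B5: promote each second data block in FIFO order into the
    synchronous read cache, then feed its data back one target-data-volume
    chunk at a time (modeling the per-request reads of steps B31-B32)."""
    async_read_queue = deque(second_data_blocks)   # blocks ordered by offset
    fed_back = bytearray()
    while async_read_queue:
        target_block = async_read_queue.popleft()  # new target data block (B5)
        target_offset = 0
        while target_offset < len(target_block):   # B31-B32 chunked feedback
            chunk = target_block[target_offset:target_offset + target_data_volume]
            fed_back.extend(chunk)
            target_offset += target_data_volume
    return bytes(fed_back)
```

Because the queue is drained first-in first-out and each block is drained front to back, the bytes fed back preserve the ordering of the data file as stored in the cloud storage subsystem.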
Optionally, in a specific implementation manner, the data reading stop condition may be: the first NAS node 101 receives a data reading stop instruction sent by the second user equipment.
Optionally, in another specific implementation manner, the data reading stop condition may be: and feeding back all data in the data file to be read to the second user terminal.
At this point, a complete data reading process ends.
Illustratively, as shown in fig. 5, it is a schematic diagram of a specific embodiment of the specific implementation shown in fig. 4.
The first NAS node may cyclically invoke the pread interface in the fuse system installed on the second user end, obtaining the target offset address and the target data volume of the data requested each time, where the target data volume is 128K. Then, second data blocks, each with a data volume of 2M and containing the data file to be read, are determined in the cloud storage subsystem, and the second data block containing the initial data of the data file to be read is sent to the synchronous read cache, so that the data in that second data block in the synchronous read cache is sequentially fed back to the second user end according to the target offset address and the target data volume. The other second data blocks are stored into the asynchronous read cache queue according to their offset addresses in the cloud storage subsystem. In this way, after all the data in the second data block in the synchronous read cache has been fed back to the second user end, the first second data block in the asynchronous read cache queue can be sent to the synchronous read cache, so that the data in the new second data block in the synchronous read cache can again be fed back to the second user end.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
All the embodiments in the present specification are described in a related manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments.
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (9)

1. A data storage system based on NAS protocol, the data storage system comprising: the system comprises a first network attached storage NAS node, at least one second NAS node, an NAS management node and a cloud storage subsystem;
the first NAS node is used for receiving a first storage path and a data file to be stored, which are sent by a first user side, and sending the first storage path to the NAS management node;
the NAS management node is configured to determine, from the at least one second NAS node, a first target NAS node corresponding to the received first storage path based on a correspondence between a preset storage path and the second NAS node, and feed back first node information of the first target NAS node to the first NAS node;
the first NAS node is further configured to report, to the first target NAS node, first metadata information of the data file to be stored based on the received first node information, so that the first target NAS node stores the received first metadata information; storing the data file to be stored in the cloud storage subsystem, and reporting metadata updating information to the first target NAS node after the data file to be stored is stored, so that the first target NAS node updates the stored first metadata information by using the received metadata updating information;
the cloud storage subsystem is configured to store the data file to be stored, which is sent by the first NAS node.
2. The system of claim 1,
the first NAS node is further configured to receive a second storage path and file information of a data file to be read, which are sent by a second user side, and send the second storage path to the NAS management node;
the NAS management node is further configured to determine, based on the correspondence, a second target NAS node corresponding to the received second storage path from the at least one second NAS node, and feed back second node information of the second target NAS node to the first NAS node;
the first NAS node is further configured to obtain, based on the received second node information, second metadata information matched with the file information from the second target NAS node, read the data file to be read from the cloud storage subsystem based on the second metadata information, and feed back the obtained data file to be read to the second user end;
the cloud storage subsystem is further configured to send the data file to be read to the first NAS node.
3. The system of claim 1, wherein the data file to be stored comprises: a plurality of sub data files, wherein each sub data file is sent by the first user end in each IO interaction process; the first NAS node stores the data file to be stored to the cloud storage subsystem, including:
aggregating the data in each subdata file into each first data block according to the offset address of each subdata file in the data file to be stored; the data volume of each first data block is a preset first data volume;
according to the sequence of the aggregation time of each first data block from early to late, each first data block is sent to a preset asynchronous write cache queue;
and according to the first-in first-out sequence, asynchronously storing each first data block in the asynchronous write cache queue to the cloud storage subsystem.
4. The system of claim 3, wherein each first data block carries an offset address of the first data block in the data to be stored; the asynchronous storage of each first data block in the asynchronous write cache queue to the cloud storage subsystem by the first NAS node includes:
determining, according to the offset address carried by each first data block and a preset fragment size, a storage object used for storing the first data block in the cloud storage subsystem, and storing the first data block into the determined storage object; wherein the fragment size is: the size of each fragment when the data file to be stored is stored in the cloud storage subsystem according to fragments.
5. The system according to claim 2, wherein the first NAS node reads the data file to be read from the cloud storage subsystem based on the second metadata information, and feeds back the acquired data file to be read to the second user side, and the method includes:
determining each second data block containing the data file to be read in the cloud storage subsystem based on the second metadata information; the data volume of each second data block is a preset second data volume;
sending a target data block including the initial data of the data file to be read in each second data block to a preset synchronous read cache, and feeding back the data in the target data block to the second user end;
according to the offset address of the included data in the cloud storage subsystem, sending other data blocks except the target data block in the second data blocks to a preset asynchronous read cache queue;
and after the data in the target data block is fed back, sending the first second data block in the asynchronous read cache to the synchronous read cache as a new target data block according to a first-in first-out sequence, and returning to the step of feeding back the data in the target data block to the second user side until a data reading stop condition is met.
6. The system of claim 5, wherein the first NAS node feeding back data in the target data block to the second user end comprises:
acquiring a target offset address and a target data volume of data requested by an interface between the first NAS node and the second user end;
reading target data from the target data block according to the target offset address and the target data volume, and feeding the target data back to the second user end; and returning to the step of acquiring the target offset address and the target data volume of the data requested by the interface between the first NAS node and the second user end.
7. The system of claim 5, wherein the data-reading stop condition comprises:
the first NAS node receiving a data-reading stop instruction sent by the second user side; or all data in the data file to be read having been fed back to the second user side.
8. The system according to any one of claims 1-7, wherein the first NAS node mounts a first database for storing the correspondence, and each second NAS node mounts a second database for storing the metadata information of the data.
9. The system according to any one of claims 1-7, wherein the first NAS node comprises: a first NAS master device and at least one first NAS standby device, each first NAS standby device being configured to be upgraded to a new first NAS master device when the first NAS master device fails;
and each second NAS node comprises: a second NAS master device and at least one second NAS standby device, each second NAS standby device being configured to be upgraded to a new second NAS master device when the second NAS master device fails.
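The master/standby failover of claim 9 reduces to promoting a standby device when the master fails. A minimal sketch, with class and attribute names as illustrative assumptions (the patent does not specify the promotion order among standbys; first-listed is assumed here):

```python
class NasNode:
    """A NAS node with one master device and an ordered list of standbys."""

    def __init__(self, master, standbys):
        self.master = master
        self.standbys = list(standbys)

    def on_master_failure(self):
        """Upgrade a standby device to the new master; raise if none remain."""
        if not self.standbys:
            raise RuntimeError("no standby device available for promotion")
        self.master = self.standbys.pop(0)
        return self.master
```

A node configured with two standbys can thus survive two successive master failures before it loses write availability.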
CN202110674904.2A 2021-06-17 2021-06-17 Data storage system based on NAS protocol Active CN113296714B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110674904.2A CN113296714B (en) 2021-06-17 2021-06-17 Data storage system based on NAS protocol

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110674904.2A CN113296714B (en) 2021-06-17 2021-06-17 Data storage system based on NAS protocol

Publications (2)

Publication Number Publication Date
CN113296714A true CN113296714A (en) 2021-08-24
CN113296714B CN113296714B (en) 2022-03-04

Family

ID=77328688

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110674904.2A Active CN113296714B (en) 2021-06-17 2021-06-17 Data storage system based on NAS protocol

Country Status (1)

Country Link
CN (1) CN113296714B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023103890A1 (en) * 2021-12-08 2023-06-15 北京字节跳动网络技术有限公司 Capacity expansion method and apparatus, and electronic device

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110271067A1 (en) * 2010-05-03 2011-11-03 Pixel8 Networks, Inc. Efficient Cloud Network Attached Storage
CN102307221A (en) * 2011-03-25 2012-01-04 国云科技股份有限公司 Cloud storage system and implementation method thereof
US20130151884A1 (en) * 2011-12-09 2013-06-13 Promise Technology, Inc. Cloud data storage system
CN104679665A (en) * 2013-12-02 2015-06-03 中兴通讯股份有限公司 Method and system for achieving block storage of distributed file system
US20160034356A1 (en) * 2014-08-04 2016-02-04 Cohesity, Inc. Backup operations in a tree-based distributed file system
CN106293490A (en) * 2015-05-12 2017-01-04 中兴通讯股份有限公司 Data storage, the method read, Apparatus and system
US20190278746A1 (en) * 2018-03-08 2019-09-12 infinite io, Inc. Metadata call offloading in a networked, clustered, hybrid storage system
US10620883B1 (en) * 2019-01-04 2020-04-14 Cohesity, Inc. Multi-format migration for network attached storage devices and virtual machines
CN111209259A (en) * 2018-11-22 2020-05-29 杭州海康威视系统技术有限公司 NAS distributed file system and data processing method
CN111399760A (en) * 2019-11-19 2020-07-10 杭州海康威视系统技术有限公司 NAS cluster metadata processing method and device, NAS gateway and medium


Also Published As

Publication number Publication date
CN113296714B (en) 2022-03-04

Similar Documents

Publication Publication Date Title
US7818287B2 (en) Storage management system and method and program
US20160359970A1 (en) Virtual multi-cluster clouds
CN107562757B (en) Query and access method, device and system based on distributed file system
US20050234867A1 (en) Method and apparatus for managing file, computer product, and file system
CN107888657A (en) Low latency distributed memory system
CN102411598B (en) Method and system for realizing data consistency
CN108776682B (en) Method and system for randomly reading and writing object based on object storage
US9307024B2 (en) Efficient storage of small random changes to data on disk
CN113268472B (en) Distributed data storage system and method
CN111400334B (en) Data processing method, data processing device, storage medium and electronic device
CN108540510B (en) Cloud host creation method and device and cloud service system
CN107818111B (en) Method for caching file data, server and terminal
CN113360456B (en) Data archiving method, device, equipment and storage medium
CN103501319A (en) Low-delay distributed storage system for small files
CN113806300B (en) Data storage method, system, device, equipment and storage medium
CN109299111A (en) A kind of metadata query method, apparatus, equipment and computer readable storage medium
CN113296714B (en) Data storage system based on NAS protocol
CN107038092B (en) Data copying method and device
US10387043B2 (en) Writing target file including determination of whether to apply duplication elimination
CN111225003B (en) NFS node configuration method and device
CN107493309B (en) File writing method and device in distributed system
CN110618790A (en) Mist storage data redundancy removing method based on repeated data deletion
CN112148206A (en) Data reading and writing method and device, electronic equipment and medium
CN107181773A (en) Data storage and data managing method, the equipment of distributed memory system
CN112749172A (en) Data synchronization method and system between cache and database

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant