CN104111804B - A kind of distributed file system - Google Patents
A kind of distributed file system
- Publication number
- CN104111804B CN104111804B CN201410295985.5A CN201410295985A CN104111804B CN 104111804 B CN104111804 B CN 104111804B CN 201410295985 A CN201410295985 A CN 201410295985A CN 104111804 B CN104111804 B CN 104111804B
- Authority
- CN
- China
- Prior art keywords
- big
- file
- server
- big file
- metadata
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Landscapes
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The present invention relates to a distributed file system, comprising: a big file storage server for storing big file data blocks after splitting, where a big file is a file larger than a preset size; a big file metadata management server for storing the metadata of big files, storing the mapping information of big file data blocks on the big file storage servers, managing the namespace of big files, and processing user requests; and a caching server for storing small files, the metadata of small files, and a cache of big files with high access volume, where a small file is a file smaller than or equal to the preset size. The present invention stores big files and small files separately: big files are split into blocks and stored on the big file storage servers, while small files are stored on the caching servers, effectively improving read/write efficiency for files of both sizes.
Description
Technical field
The present invention relates to the technical field of computer storage, and more particularly to a distributed file system.
Background technology
With the popularization and maturation of cloud computing, more and more users store personal or business data in the cloud. This data includes both big files and small files, and is characterized by large data volume, read frequency higher than write frequency, and the need for fast retrieval.
At present, the file systems used by cloud service providers fall broadly into two classes: network file systems (Network File System, NFS) and distributed file systems such as the Hadoop Distributed File System (HDFS). In a network file system, the cloud service provider creates virtual partitions on a server and allocates a block of disk space to each user for file storage; each time a user reads or writes a file, the user must first log in to a remote virtual server and perform the read or write on a virtual disk. The defect of such a system is that all of a user's data is stored on the same server, so a server failure has a significant impact on the user's normal operation.
A distributed file system is a file system in which the service provider uses a cluster of multiple servers for common data storage. To read or write a file, the user sends a request; a background server processes the request and returns the result to the user. The most widely used distributed file system at present is HDFS, but it has two major defects: it cannot efficiently store large numbers of small files, and only a single name node performs global management. Researchers have since proposed a variety of file systems to remedy these defects, but each has its own shortcomings. For example, TFS (Taobao File System) was proposed for mass small-file storage: it merges large numbers of small files into one big file stored on a data server. Compared with HDFS this method makes no obvious progress; it merely packs small files into a big file data block stored on a data server and adds one standby name server. The standby name server does not directly participate in processing user requests; only after the name server crashes does the standby name server take over. The limitation of this method is that the name server is mainly responsible for processing user requests and its storage space is fixed; as the data volume grows, its performance becomes the bottleneck that restricts TFS. Moreover, when a catastrophic failure of the name server causes data loss, the standby name server must synchronize data with the name server while simultaneously responding to user requests, so its load becomes excessive. In the MapR file system, file data blocks and metadata are stored on the same nodes, overcoming the single-name-server bottleneck, but big files and small files are stored together, which wastes storage resources and is inconvenient to manage.
Current distributed file systems thus cannot effectively store small files or solve the single-management-node problem. User file data is varied in kind and size, so the storage efficiency of the cloud-side file system is critical and directly affects the failure response and recovery speed of the file system. A well-designed distributed file system that can rapidly recover from failures occurring during file storage therefore has great significance and practical application value.
The content of the invention
To overcome at least one of the defects (deficiencies) of the prior art described above, the present invention provides a distributed file system that can effectively store small files.
In order to solve the above technical problems, the technical scheme of the present invention is as follows:
A distributed file system, comprising:
a big file storage server for storing big file data blocks after splitting, where a big file is a file larger than a preset size;
a big file metadata management server for storing the metadata of big files, storing the mapping information of big file data blocks on the big file storage servers, managing the namespace of big files, and processing user requests;
a caching server for storing small files, the metadata of small files, and a cache of big files with high access volume, where a small file is a file smaller than or equal to the preset size.
In the above scheme, there are several big file storage servers, at least three big file metadata management servers, and at least three caching servers.
In the above scheme, the at least three big file metadata management servers store the metadata of big files and the mapping information of big file data blocks on the big file storage servers, and share the task of processing user requests, in an adaptive, dynamically adjusted manner; the at least three caching servers likewise store data and process user requests in an adaptive, dynamically adjusted manner.
In the above scheme, any piece of big file metadata and any piece of mapping information of big file data blocks on the big file storage servers is stored on at least two big file metadata management servers.
In the above scheme, each caching server is provided with a metadata storage area for storing small file metadata and the metadata of big files cached on the caching server, a small file storage area for storing small files, and a big file cache area for caching big files with high access volume.
In the above scheme, a counter is provided on the caching server to implement a big-file access classification mechanism, as follows: each time a user reads or writes a big file through the caching server, the access count of that big file is incremented by 1;
an access-count threshold is set;
big files whose access count exceeds the threshold are called frequently accessed big files;
the caching server sorts the frequently accessed big files by access count from high to low.
In the above scheme, big files with high access volume are cached on the caching server as follows:
when the cache space of the caching server is sufficient, the caching server directly appends the new big file to the big file cache area and adds its metadata to the metadata storage area;
when the space of the big file cache area is insufficient and the caching server needs to add a newly frequently accessed big file, the frequently accessed big file with the lowest access count is deleted from the big file cache area until there is enough space, and the new big file is then added to the big file cache area.
In the above scheme, the caching server stores small file metadata in a non-persistent manner, stores small files persistently in log form, and stores the metadata of frequently accessed big files in an update (replaceable) manner.
In the above scheme, after one of the big file metadata management servers fails, the system immediately directs user requests to the other big file metadata management servers for processing until the failed big file metadata management server recovers;
if the big file metadata management server recovered after the failure is empty, the other big file metadata management servers synchronize to it the big file metadata and big file data block mapping information that it held before the failure.
In the above scheme, after one of the caching servers fails, the system immediately directs user requests to the other caching servers for processing until the failed caching server recovers;
if the caching server recovered after the failure is empty, the other caching servers synchronize to it the small files and small file metadata that it held before the failure.
Compared with the prior art, the beneficial effects of the technical solution of the present invention are:
(1) The distributed file system of the present invention stores big files and small files separately: big files are split into blocks and stored on the big file storage servers, while small files are stored on the caching servers. When a user needs to read or write a small file, the caching server is accessed directly to perform the operation; this read/write efficiency is far higher than the traditional approach of first accessing a metadata management server and then accessing a data storage server. When a user needs to read or write a big file, the big file metadata management server is accessed first, and the corresponding big file storage server is accessed after the location information is obtained. The system can thus effectively store both big files and small files, and improves file read/write efficiency.
(2) The system of the present invention uses at least three caching servers and at least three big file metadata management servers, with the servers of each tier interconnected, which effectively breaks through the bottleneck of the traditional single management server. When a large number of users access a small amount of data simultaneously, the system can balance load across the servers in an adaptive, dynamically adjusted manner, avoiding the situation in which a server crashes because its processing and storage capacity is insufficient for an excessive processing load, and efficiently solving the various problems caused by a single management node.
(3) When a big file metadata management server and/or a caching server fails, the distributed file system of the present invention can promptly direct user requests to the other servers for processing, ensuring that the user's normal operations on stored files are unaffected. As for data lost due to the failure, the system of the present invention uses the other servers to recover the failed server's data through a synchronization mechanism, effectively improving fault-recovery efficiency.
Brief description of the drawings
Fig. 1 is a schematic diagram of the connections between the big file storage servers and the big file metadata management servers in a specific embodiment of the distributed file system of the invention.
Fig. 2 is a schematic diagram of the connections between the big file metadata management servers and the caching servers in a specific embodiment of the distributed file system of the invention.
Fig. 3 is a schematic diagram of the internal zoning of a caching server in a specific embodiment of the distributed file system of the invention.
Embodiment
The accompanying drawings are for illustrative purposes only and shall not be construed as limiting this patent;
in order to better illustrate the present embodiment, some parts of the drawings are omitted, enlarged, or reduced, and do not represent the dimensions of an actual product;
for those skilled in the art, it is understandable that some known structures and their explanations may be omitted from the drawings.
In the description of the invention, it should be understood that the terms "first" and "second" are used for descriptive purposes only and shall not be construed as indicating or implying relative importance or implying the number of the indicated technical features. Thus, a feature defined as "first" or "second" may expressly or implicitly include one or more such features. In the description of the invention, unless otherwise stated, "multiple" means two or more.
In the description of the invention, it should be noted that, unless otherwise expressly specified and limited, the terms "installation" and "connection" shall be understood broadly: for example, a connection may be fixed, detachable, or integral; it may be mechanical or electrical; it may be direct, indirect via an intermediary, or internal between two elements. For those of ordinary skill in the art, the specific meanings of the above terms in the present invention can be understood according to the specific circumstances.
The technical scheme of the present invention is described further below with reference to the accompanying drawings and embodiments.
Embodiment 1
Figs. 1 and 2 show the architecture of a specific embodiment of the distributed file system of the invention. Referring to Figs. 1 and 2, the distributed file system of this embodiment comprises big file storage servers, big file metadata management servers, and caching servers, connected in sequence. The caching servers connect to the users: they receive user requests, process them, and return the results to the users. When a caching server cannot handle a user request itself, it usually forwards the request to a big file metadata management server for processing; the result is still returned to the user through the caching server, which in this case performs no processing on the request or its result and acts purely as a data-forwarding unit.
The big file storage servers store big file data blocks after splitting, where a big file is a file larger than a preset size. In the presetting procedure, a threshold is generally specified by the administrator; from a practical standpoint and by reference to other file systems, this threshold is usually 1 MB. A file is a big file when its size exceeds the threshold and a small file when its size is smaller than or equal to it. The size of a big file data block is usually fixed and can be preset by the administrator. After storing data, a big file storage server also generally sends the metadata of the stored big file data blocks to the big file metadata management servers.
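As a rough illustration of the routing and splitting described above, the classification and block-splitting logic might be sketched as follows; the 1 MB threshold and the 64 KB block size are illustrative administrator-chosen values, not values fixed by the patent:

```python
BIG_FILE_THRESHOLD = 1 * 1024 * 1024  # 1 MB, administrator-preset threshold
BLOCK_SIZE = 64 * 1024                # fixed block size, administrator-preset

def is_big_file(data: bytes) -> bool:
    """A file is 'big' when it exceeds the preset threshold, else 'small'."""
    return len(data) > BIG_FILE_THRESHOLD

def split_into_blocks(data: bytes) -> list[bytes]:
    """Split a big file into fixed-size data blocks for the storage servers."""
    return [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]
```

A big file would be split with `split_into_blocks` and its blocks distributed to the big file storage servers, while a small file would go directly to a caching server.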
The big file metadata management servers store and manage the mapping relations of the big file data blocks on the big file storage servers, store the metadata of the big file data blocks on the big file storage servers, process user requests, and manage the file namespace.
The caching servers store small files, the metadata of small files, and a cache of big files with high access volume, where a small file is a file smaller than or equal to the preset size.
Based on the scheme of this embodiment, the distributed file system of the invention splits big files into blocks stored on the big file storage servers while small files are stored on the caching servers: small files are accessed through the caching servers, and big files are accessed through the big file storage servers and the big file metadata management servers. Storing big files and small files separately, on the one hand, effectively stores small files; on the other hand, unlike the prior-art approach of storing big and small files together, it effectively saves storage resources and greatly facilitates file management.
In a specific implementation, to solve the single-management-node problem, this embodiment provides in the distributed file system several big file storage servers, at least three big file metadata management servers, and at least three caching servers; all big file metadata management servers are interconnected, as are all caching servers. In the architecture shown in Figs. 1 and 2, the distributed file system is provided with 12 big file storage servers, 3 big file metadata management servers, and 3 caching servers. Under normal circumstances servers of the same type have identical performance, while the performance of different server types may differ.
The at least three big file metadata management servers store the metadata of big files and the mapping information of big file data blocks on the big file storage servers, and share the task of processing user requests, in an adaptive, dynamically adjusted manner. Specifically, when storing data and processing user requests, the big file metadata management servers with relatively strong performance and storage capacity store the metadata and mapping information of more big file data blocks and take on more request-processing tasks, while those with relatively weak performance and storage capacity store the metadata and mapping information of relatively fewer big file data blocks and take on relatively fewer user requests.
Likewise, the at least three caching servers store data and process user requests in an adaptive, dynamically adjusted manner. Specifically, among all caching servers, those with relatively strong performance and storage capacity store more data and take on more request-processing tasks, while those with relatively weak performance and storage capacity store relatively less data and take on relatively fewer tasks.
Based on the above scheme, the storage of big and small files and the load balancing of the system servers are carried out in an adaptive, dynamically adjusted manner, so that data storage and request processing reach a relatively reasonable distribution between the servers with relatively high performance and storage capacity and those with relatively weak performance and storage capacity. This effectively breaks through the bottleneck of the traditional single management server and avoids the situation in which a server crashes because its processing and storage capacity is insufficient for an excessive processing load.
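One simple way to realize such capacity-proportional assignment — a sketch under the assumption that each server advertises a single numeric capacity score, which the patent does not specify — is to place each work item on the server whose load-to-capacity ratio is currently lowest:

```python
def assign_by_capacity(items: list[str], capacities: dict[str, int]) -> dict[str, list[str]]:
    """Distribute work items across servers in proportion to capacity.

    Each item goes to the server with the lowest current load relative to
    its capacity, so stronger servers end up holding more items.
    """
    loads = {server: 0 for server in capacities}
    placement: dict[str, list[str]] = {server: [] for server in capacities}
    for item in items:
        # Pick the server whose load/capacity ratio is currently smallest.
        target = min(capacities, key=lambda s: loads[s] / capacities[s])
        placement[target].append(item)
        loads[target] += 1
    return placement
```

With capacities `{"a": 2, "b": 1}`, server `a` receives roughly twice as many items as `b`, matching the stronger-servers-take-more behavior described above.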
In a specific implementation, to ensure the user's normal operation in case of server failure in the above scheme, this embodiment adopts redundant backup among the big file metadata management servers: any piece of big file metadata and any piece of mapping information of big file data blocks on the big file storage servers is stored on at least two big file metadata management servers. When one big file metadata management server fails, the information can still be obtained from the other big file metadata management servers.
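The at-least-two-copies rule can be sketched as a simple replica-placement routine; this is a hypothetical helper assuming metadata entries are keyed by a block id, which is not mandated by the patent:

```python
def place_replicas(block_id: str, servers: list[str], replicas: int = 2) -> list[str]:
    """Choose `replicas` distinct metadata servers for one block's metadata.

    A hash of the block id picks a starting server, and the copies go to
    consecutive servers in the ring, so they land on distinct machines as
    the redundancy scheme requires.
    """
    if replicas > len(servers):
        raise ValueError("not enough metadata servers for the replica count")
    start = hash(block_id) % len(servers)
    return [servers[(start + i) % len(servers)] for i in range(replicas)]
```

Any consistent placement rule works here; the essential property is only that every piece of metadata exists on at least two distinct servers.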
In a specific implementation, as shown in Fig. 3, each caching server is divided into a metadata storage area, a small file storage area, and a big file cache area. The metadata storage area stores the metadata of the small files held on the caching server and the metadata of the big files cached on it; small file metadata is persistent unless the small file is deleted by the user or the system fails, while big file metadata is deleted in an update (replacement) manner. The small file storage area stores small files in log form; this data is persistent unless deleted by the user or the system fails. The big file cache area stores big files with high user access volume; this data is deleted in an update manner and is not persistent.
In a specific implementation, a counter is provided on the caching server to implement the big-file access classification mechanism, as follows: each time a user accesses a big file through the caching server, the access count of that big file is incremented by 1; an access-count threshold is preset for big files; big files whose access count exceeds the threshold are called frequently accessed big files; and the caching server sorts the frequently accessed big files by access count from high to low.
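A minimal sketch of such a counter-based classifier follows; the class name and the threshold value are illustrative, not taken from the patent:

```python
class AccessClassifier:
    """Counts big-file accesses and ranks files above a preset threshold."""

    def __init__(self, threshold: int):
        self.threshold = threshold
        self.counts: dict[str, int] = {}

    def record_access(self, file_id: str) -> None:
        """Each read/write through the caching server adds 1 to the count."""
        self.counts[file_id] = self.counts.get(file_id, 0) + 1

    def hot_files(self) -> list[str]:
        """Frequently accessed big files, sorted by access count, high to low."""
        hot = [f for f, c in self.counts.items() if c > self.threshold]
        return sorted(hot, key=lambda f: self.counts[f], reverse=True)
```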
In a preferred embodiment, big files with high access volume are cached on the caching server as follows:
when the cache space of the caching server is sufficient, the caching server directly appends the new big file to the big file cache area and adds its metadata to the metadata storage area;
when the space of the big file cache area is insufficient and the caching server needs to add a newly frequently accessed big file, the frequently accessed big file with the lowest access count is deleted from the big file cache area until there is enough space, and the new big file is then added to the big file cache area.
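The admit-and-evict policy above might be sketched like this; it is a simplification in which file sizes and access counts are plain integers and `capacity` stands in for the big file cache area's assumed configured size:

```python
def admit_big_file(cache: dict[str, tuple[int, int]], capacity: int,
                   new_file: str, size: int, access_count: int) -> None:
    """Add a hot big file to the cache, evicting lowest-count files if needed.

    `cache` maps file id -> (size, access_count); `capacity` is the total
    space of the big file cache area.
    """
    used = sum(s for s, _ in cache.values())
    # Evict the least-accessed cached big files until the new one fits.
    while used + size > capacity and cache:
        victim = min(cache, key=lambda f: cache[f][1])
        used -= cache[victim][0]
        del cache[victim]
    if used + size <= capacity:
        cache[new_file] = (size, access_count)
```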
In the above scheme, "user" represents all users related to the distributed file system of the invention, and often refers to users with no cache or only a very small cache area, such as wearable smart devices like smartwatches.
In a specific implementation, the distributed file system of the invention provides a recovery mechanism covering two cases: caching server fault recovery and big file metadata management server fault recovery. Specifically:
after one of the big file metadata management servers fails, the system immediately directs user requests to the other big file metadata management servers for processing until the failed big file metadata management server recovers;
if the big file metadata management server recovered after the failure is empty, the other big file metadata management servers synchronize to it the big file metadata and big file data block mapping information that it held before the failure.
After one of the caching servers fails, the system immediately directs user requests to the other caching servers for processing until the failed caching server recovers;
if the caching server recovered after the failure is empty, the other caching servers synchronize to it the small files and small file metadata that it held before the failure.
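A toy model of this redirect-then-resync behavior is sketched below; the function names are hypothetical, and a real implementation would also handle partial overlap between the replicas held by different peers:

```python
def route_request(servers: dict[str, bool], preferred: str) -> str:
    """Return the preferred server if healthy, else any healthy peer."""
    if servers.get(preferred):
        return preferred
    for name, healthy in servers.items():
        if healthy:
            return name
    raise RuntimeError("no healthy server available")

def resync(recovered: dict[str, bytes], peers: list[dict[str, bytes]]) -> None:
    """If the recovered server came back empty, copy its data back from the
    peers that hold replicas of what it stored before the failure."""
    if recovered:
        return  # came back with its data intact; nothing to do
    for peer in peers:
        recovered.update(peer)
```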
The distributed file system of the present invention is described in detail below with reference to a specific example.
As shown in Fig. 1, the distributed file system comprises 12 big file storage servers 1-12, 3 caching servers 1-3, and 3 big file metadata management servers 1-3. Big file metadata management server 1 stores the metadata of the information stored on big file storage servers 1, 2, 3, 4, 5, 6, 9 and 10; big file metadata management server 2 stores the metadata of the information stored on big file storage servers 1, 2, 5, 6, 7, 8, 11 and 12; and big file metadata management server 3 stores the metadata of the information stored on big file storage servers 3, 4, 7, 8, 9, 10, 11 and 12.
As shown in Fig. 2, assume the system stores the small files of 12 users; the 3 caching servers then store as follows: caching server 1 stores the small files and metadata of users 1, 2, 3, 4, 5, 6, 9 and 10; caching server 2 stores the small files and metadata of users 1, 2, 5, 6, 7, 8, 11 and 12; and caching server 3 stores the small files and metadata of users 3, 4, 7, 8, 9, 10, 11 and 12. In addition, each of the three caching servers sets aside a certain region to cache the big files that users access frequently.
As shown in Fig. 2 caching server failover procedure is:After caching server 1 breaks down, user's request quilt
It is assigned to caching server 2 and caching server 3 is handled, when it because cause specific loses number after the recovery of caching server 1
According to when, the small documents and its metadata that the synchronous relevant user 1,2,5,6 of caching server 2 is stored are to caching server 1, caching
The small documents and its metadata that the synchronous relevant user 3,4,9,10 of server 3 is stored are to caching server 1.
As shown in Fig. 1, the big file metadata management server failover process is: after big file metadata management server 1 fails, user requests are distributed to big file metadata management servers 2 and 3 for processing. If big file metadata management server 1 has lost data for some specific reason after it recovers, big file metadata management server 2 synchronizes the data mappings and metadata related to big file storage servers 1, 2, 5 and 6 to big file metadata management server 1, and big file metadata management server 3 synchronizes the data mappings and metadata related to big file storage servers 3, 4, 9 and 10 to big file metadata management server 1.
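The example assignment above has the property that every storage server's metadata lives on exactly two of the three metadata servers, which is what makes the single-server failover work. A quick check of that invariant, using the server numbering from the example:

```python
from collections import Counter

# Metadata-server -> storage-servers assignment from the example above.
ASSIGNMENT = {
    1: [1, 2, 3, 4, 5, 6, 9, 10],
    2: [1, 2, 5, 6, 7, 8, 11, 12],
    3: [3, 4, 7, 8, 9, 10, 11, 12],
}

def replica_counts(assignment: dict[int, list[int]]) -> Counter:
    """Count how many metadata servers cover each storage server."""
    return Counter(s for storage in assignment.values() for s in storage)
```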
Identical or similar reference numerals correspond to identical or similar parts;
the positional relationships described in the accompanying drawings are for illustrative purposes only and shall not be construed as limiting this patent.
Obviously, the above embodiments of the present invention are merely examples given to clearly illustrate the invention and are not intended to limit its embodiments. For those of ordinary skill in the field, other changes in different forms may also be made on the basis of the above description. There is no need, and no way, to exhaust all embodiments. Any modifications, equivalent substitutions, and improvements made within the spirit and principle of the invention shall be included within the protection scope of the claims of the present invention.
Claims (10)
1. A distributed file system, characterized by comprising:
a big file storage server for storing big file data blocks after splitting, where a big file is a file larger than a preset size;
a big file metadata management server for storing the metadata of big files, storing the mapping information of big file data blocks on the big file storage server, managing the namespace of big files, and processing user requests;
a caching server for storing small files, the metadata of small files, and a cache of big files with high access volume, where a small file is a file smaller than or equal to the preset size;
wherein the caching server connects to the users to receive user requests, process them, and return the results to the users; when the caching server cannot handle a user request, it usually forwards the request to the big file metadata management server for processing, and the result of the request is still returned to the user through the caching server, which in this case performs no processing on the request or its result and acts purely as a data-forwarding unit.
2. The distributed file system according to claim 1, characterized in that there are several big file storage servers, at least three big file metadata management servers, and at least three caching servers.
3. The distributed file system according to claim 2, characterized in that the at least three big file metadata management servers store the metadata of big files and the mapping information of big file data blocks on the big file storage servers, and share the task of processing user requests, in an adaptive, dynamically adjusted manner;
the at least three caching servers store data and process user requests in an adaptive, dynamically adjusted manner.
4. The distributed file system according to claim 1, characterized in that any piece of big file metadata and any piece of mapping information of big file data blocks on the big file storage servers is stored on at least two big file metadata management servers.
5. The distributed file system according to claim 4, characterized in that the caching server is provided with a metadata storage area for storing small file metadata and the metadata of big files cached on the caching server, a small file storage area for storing small files, and a big file cache area for caching big files with high access volume.
6. The distributed file system according to claim 5, characterized in that a counter is provided on the caching server to implement a big-file access classification mechanism, as follows: each time a user reads or writes a big file through the caching server, the access count of that big file is incremented by 1;
an access-count threshold is set;
big files whose access count exceeds the threshold are called frequently accessed big files;
the caching server sorts the frequently accessed big files by access count from high to low.
7. The distributed file system according to claim 6, characterized in that big files with high access volume are cached on the caching server as follows:
when the cache space of the caching server is sufficient, the caching server directly appends the new big file to the big file cache area and adds its metadata to the metadata storage area;
when the space of the big file cache area is insufficient and the caching server needs to add a newly frequently accessed big file, the frequently accessed big file with the lowest access count is deleted from the big file cache area until there is enough space, and the new big file is then added to the big file cache area.
8. The distributed file system according to any one of claims 1 to 7, characterized in that the caching server stores small file metadata in a non-persistent manner, stores small files persistently in log form, and stores the metadata of frequently accessed big files in an update manner.
9. The distributed file system according to claim 2, characterized in that after one of the big file metadata management servers fails, the system immediately directs user requests to the other big file metadata management servers for processing until the failed big file metadata management server recovers;
if the big file metadata management server recovered after the failure is empty, the other big file metadata management servers synchronize to it the big file metadata and big file data block mapping information that it held before the failure.
10. The distributed file system according to claim 2, characterized in that when one of the caching servers
fails, the system immediately redirects user requests to the other caching servers until the failed caching
server recovers;
if the caching server recovered after the failure is empty, the other caching servers synchronize to it the
same small files and small-file metadata that it held before the failure.
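The redirect-then-resync behaviour shared by claims 9 and 10 can be sketched with a minimal in-memory server group; the class, the replication model, and the data format are all illustrative assumptions:

```python
class ServerGroup:
    """Routes requests away from failed servers; resyncs an empty recovered server."""

    def __init__(self, servers):
        self.state = {s: {} for s in servers}  # server -> replicated key/value data
        self.alive = {s: True for s in servers}

    def route(self):
        """Pick any healthy server to handle the user request (claims 9/10)."""
        for server, up in self.alive.items():
            if up:
                return server
        raise RuntimeError("no healthy server available")

    def fail(self, server):
        self.alive[server] = False

    def recover(self, server):
        """A server that comes back empty is resynced from a healthy peer."""
        self.alive[server] = True
        if not self.state[server]:
            peer = next(s for s in self.alive if self.alive[s] and s != server)
            self.state[server] = dict(self.state[peer])
```

The same sketch covers both claims: for claim 9 the replicated data would be big-file metadata and block mappings, for claim 10 small files and small-file metadata.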
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410295985.5A CN104111804B (en) | 2014-06-27 | 2014-06-27 | A kind of distributed file system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104111804A CN104111804A (en) | 2014-10-22 |
CN104111804B (en) | 2017-10-31
Family
ID=51708610
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410295985.5A Active CN104111804B (en) | 2014-06-27 | 2014-06-27 | A kind of distributed file system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104111804B (en) |
Families Citing this family (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105094695B (en) * | 2015-06-29 | 2018-09-04 | 浪潮(北京)电子信息产业有限公司 | A kind of storage method and system |
CN105049178A (en) * | 2015-07-23 | 2015-11-11 | 柳州易旺科技有限公司 | Multi-user information screening method |
CN105141666B (en) * | 2015-07-29 | 2018-12-14 | 江苏天联信息科技发展有限公司 | Information data storing method and device |
CN105138632A (en) * | 2015-08-20 | 2015-12-09 | 浪潮(北京)电子信息产业有限公司 | Organization and management method for file data and file management server |
CN105095511A (en) * | 2015-09-08 | 2015-11-25 | 浪潮(北京)电子信息产业有限公司 | File processing method, apparatus and system based on distributed system |
CN106020713A (en) * | 2015-09-16 | 2016-10-12 | 展视网(北京)科技有限公司 | File storage method based on buffer area |
CN105516240A (en) * | 2015-11-23 | 2016-04-20 | 浪潮(北京)电子信息产业有限公司 | Dynamic optimization framework and method for read-write performance of cluster storage system |
CN105511802B (en) * | 2015-11-24 | 2018-06-05 | 北京达沃时代科技股份有限公司 | The method and apparatus of write buffer and the synchronous method and device in disk buffering area |
CN105608193B (en) * | 2015-12-23 | 2019-03-26 | 深信服科技股份有限公司 | The data managing method and device of distributed file system |
CN108011584B (en) * | 2016-10-28 | 2020-06-26 | 丰郅(上海)新能源科技有限公司 | Photovoltaic cell on-line monitoring and intelligent management system |
CN108089888B (en) | 2016-11-21 | 2019-09-13 | 杨正 | It is a kind of that operation method and system are applied based on file system |
CN106802950A (en) * | 2017-01-16 | 2017-06-06 | 郑州云海信息技术有限公司 | A kind of method of distributed file system small documents write buffer optimization |
CN108153491B (en) * | 2017-12-22 | 2021-06-25 | 深圳市瑞驰信息技术有限公司 | Storage method and architecture capable of closing part of servers |
CN108089825B (en) * | 2018-01-11 | 2020-07-07 | 郑州云海信息技术有限公司 | Storage system based on distributed cluster |
CN109002260B (en) * | 2018-07-02 | 2021-08-13 | 深圳市茁壮网络股份有限公司 | Processing method and processing system for cache data |
CN109656874B (en) * | 2018-11-28 | 2024-03-08 | 山东蓝洋智能科技有限公司 | Method for implementing file management system in dual system |
CN112394876B (en) * | 2019-08-14 | 2024-02-23 | 深圳市特思威尔科技有限公司 | Large file storage/reading method, storage/reading device and computer equipment |
CN114116634B (en) * | 2022-01-26 | 2022-04-22 | 苏州浪潮智能科技有限公司 | Caching method and device and readable storage medium |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101520805A (en) * | 2009-03-25 | 2009-09-02 | 中兴通讯股份有限公司 | Distributed file system and file processing method thereof |
CN102882983A (en) * | 2012-10-22 | 2013-01-16 | 南京云创存储科技有限公司 | Rapid data memory method for improving concurrent visiting performance in cloud memory system |
CN103078906A (en) * | 2012-12-26 | 2013-05-01 | 爱迪科特(北京)科技有限公司 | Document transparent moving method |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104111804B (en) | A kind of distributed file system | |
CN105960639B (en) | Prioritization data reconstruct in distributed memory system | |
Bronson et al. | TAO: Facebook's distributed data store for the social graph | |
CN104813321B (en) | The content and metadata of uncoupling in distributed objects store the ecosystem | |
CN102460439B (en) | Data distribution through capacity leveling in a striped file system | |
CN104657459B (en) | A kind of mass data storage means based on file granularity | |
CN109327539A (en) | A kind of distributed block storage system and its data routing method | |
CN103020257B (en) | The implementation method of data manipulation and device | |
CN102255962B (en) | Distributive storage method, device and system | |
CN102411637B (en) | Metadata management method of distributed file system | |
US9251003B1 (en) | Database cache survivability across database failures | |
CN105183839A (en) | Hadoop-based storage optimizing method for small file hierachical indexing | |
CN106021381A (en) | Data access/storage method and device for cloud storage service system | |
CN104281506A (en) | Data maintenance method and system for file system | |
CN107798130A (en) | A kind of Snapshot Method of distributed storage | |
CN104408111A (en) | Method and device for deleting duplicate data | |
CN103944958A (en) | Wide area file system and implementation method | |
CN106407355A (en) | Data storage method and device | |
CN103501319A (en) | Low-delay distributed storage system for small files | |
CN113377868A (en) | Offline storage system based on distributed KV database | |
CN102541984A (en) | File system of distributed type file system client side | |
CN102023816A (en) | Object storage policy and access method of object storage system | |
CN104572505A (en) | System and method for ensuring eventual consistency of mass data caches | |
CN110196818A (en) | Data cached method, buffer memory device and storage system | |
CN104965835B (en) | A kind of file read/write method and device of distributed file system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||