CN105138545B - Method and system for asynchronously pre-reading directory entries in a distributed file system - Google Patents

Method and system for asynchronously pre-reading directory entries in a distributed file system

Info

Publication number
CN105138545B
CN105138545B
Authority
CN
China
Prior art keywords
page
client
directory entry
read
server end
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201510401114.1A
Other languages
Chinese (zh)
Other versions
CN105138545A (en)
Inventor
曾祥超
杨洪章
张军伟
邵冰清
李月嘉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin Zhongke Bluewhale Information Technology Co ltd
Institute of Computing Technology of CAS
Original Assignee
Tianjin Zhongke Bluewhale Information Technology Co ltd
Institute of Computing Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin Zhongke Bluewhale Information Technology Co ltd, Institute of Computing Technology of CAS
Priority to CN201510401114.1A
Publication of CN105138545A
Application granted
Publication of CN105138545B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/10 File systems; File servers
    • G06F 16/18 File system types
    • G06F 16/182 Distributed file systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/10 File systems; File servers
    • G06F 16/17 Details of further file system functions
    • G06F 16/172 Caching, prefetching or hoarding of files

Abstract

The present invention discloses a method and system for asynchronously pre-reading directory entries. The method includes: Step 1, the client obtains the page index of the directory entry and, according to the page index, looks up the first page holding said directory entries in the local page cache; if it exists, step 2 is executed, otherwise the client synchronously sends a read-directory request to the server side, the server side sends the first page to the client, and the client stores the first page in the local page cache; Step 2, the client parses the end flag eof and the cookie value of the last directory entry stored in the header of the first page and judges whether the end flag eof is 1; if so, the client reads the directory entries in the first page, otherwise the client asynchronously sends a read-directory request to the server side and, according to the page index and the cookie value, pre-reads the next page holding the directory entries.

Description

Method and system for asynchronously pre-reading directory entries in a distributed file system
Technical field
The present invention relates to the interaction between the client and the server side of a distributed file system, and in particular to a method and system for asynchronously pre-reading directory entries.
Background art
With the rapid development of information technologies such as cloud computing, big data and the Internet of Things, the volume of data worldwide is growing explosively. According to IDC's forecast, the amount of data will roughly double every two years and will reach 40 ZB by 2020; the demand for mass data storage is therefore increasingly urgent.
To store mass data, distributed file systems that separate metadata from data have become a mainstream architecture: metadata and data are kept on dedicated metadata storage devices and data storage devices respectively, the server side is responsible for all metadata, and, using out-of-band access, the client can directly request the dedicated data storage devices to read data according to the metadata it has obtained first.
The purpose of a directory-entry read operation is to obtain metadata such as the name, inode number and type of every directory entry under a directory, so as to present the user with the list of entries in that directory.
When reading directory entries, the current parallel network file system (pNFS) first looks in the local page cache for the page holding the directory-entry information; if the page is not in the local page cache, the client synchronously sends a read-directory request to the server side to obtain it, and only after the server side returns the requested page can the client parse the directory entries in that page one by one. For a large directory containing many entries, each read-directory request returns only one page, and one page holds only a few directory entries, so reading a large directory requires sending read-directory requests many times and waiting each time for the server side to return the requested page before the client can start parsing the entries in it. This increases the execution time of directory-entry reading and lowers its performance.
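For illustration only, the following Python sketch (not code from pNFS or from this patent; the fetch_page interface and its return values are assumptions) shows the synchronous per-page loop described above, in which every page costs a full round trip before its entries can be parsed:

```python
# Hypothetical sketch of the synchronous per-page readdir loop described above.
# fetch_page(page_index, cookie) stands in for one blocking round trip to the
# server and returns (entries, eof, last_cookie); it is an assumed interface.

def read_directory_sync(fetch_page):
    entries = []
    page_index, cookie = 0, 0
    while True:
        # The client blocks here for every page before it can parse anything.
        page_entries, eof, cookie = fetch_page(page_index, cookie)
        entries.extend(page_entries)
        if eof:                    # the last directory entry has been returned
            return entries
        page_index += 1

# Example with a fake 3-page directory, 2 entries per page.
def fake_fetch(page_index, cookie):
    names = [f"file{page_index * 2 + i}" for i in range(2)]
    return names, page_index == 2, (page_index + 1) * 2

print(read_directory_sync(fake_fetch))   # ['file0', ..., 'file5']
```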
In the invention patent "Directory access method in a distributed file system", the content of a directory is stored as a file; the directory entries of the directory are partitioned into several subsets by hashing the entry names, and the different subsets are stored in the directory file in a striped layout with a relatively large stripe size, so that the read-ahead capability of the underlying file system can be fully exploited when reading from disk. All directory entries in the stripe block of each subset are stored in the form of a binary tree, which avoids having to build the tree on the first read. The stripe blocks of all entries are accessed through memory mapping (mmap), which avoids the memory allocation and the overhead of file read/write system calls incurred on every access to on-disk data. The present invention, however, relates to the interaction between the client and the server side of a distributed file system, with the client asynchronously pre-reading directory entries into its local page cache; the difference is that that patent provides a directory access method and does not involve the interaction between client and server.
In the invention patent "Organization method of client directory cache in a distributed file system", the distributed file system uses a multi-metadata-server architecture, i.e. the content of a single directory is distributed over multiple metadata servers. The multi-metadata-server architecture is chosen mainly to spread the metadata access load and improve concurrency. Exploiting the read-mostly, write-rarely characteristic of network applications, that invention keeps directory entries and the corresponding index nodes in the client cache, so that repeated reads by the client do not require repeated communication with the servers; in addition, when a directory is accessed for the first time, the directory entries of that directory distributed over the different metadata servers are pre-read in parallel, and file inodes and file contents are pre-read according to the default pre-read policy or a pre-read policy issued by the application. In this way, when the application needs to access a file named by some directory entry, the metadata and data of that file may already have been pre-read into the client's local cache, which greatly speeds up the application. The present invention, by contrast, is suited to a single-MDS environment: while the client reads a directory for the first time, the next page holding the directory entries is pre-read asynchronously, so that when the client reads the entries of the next page it can read them directly from the local cache without synchronously fetching them again. The difference is that that patent targets a multi-MDS environment in which the entries of a directory are stored on multiple MDSes and the client fetches them from the MDSes in parallel and stores them in the local cache; if that patent were applied in a single-MDS environment, the next page of directory entries would not be pre-read, and when the client reads the directory for the first time and encounters entries not in the local cache it would still have to fetch them synchronously from the MDS. The two patents are therefore different.
Summary of the invention
In view of the deficiencies of the prior art, the present invention proposes a method and system for asynchronously pre-reading directory entries in a distributed file system.
The present invention proposes a method for asynchronously pre-reading directory entries in a distributed file system, including:
Step 1, the client obtains the page index of the directory entry and, according to the page index, looks up the first page holding said directory entries in the local page cache; if it exists, step 2 is executed, otherwise the client synchronously sends a read-directory request to the server side, the server side sends the first page to the client, and the client stores the first page in the local page cache;
Step 2, the client parses the end flag eof and the cookie value of the last directory entry stored in the header of the first page and judges whether the end flag eof is 1; if so, the client reads the directory entries in the first page, otherwise the client, according to the page index and the cookie value, asynchronously sends a read-directory request to the server side and pre-reads the next page holding the directory entries.
The method for asynchronously pre-reading directory entries in the distributed file system further includes: the client parses the directory entries of the current page and displays them to the user.
The method further includes: when the client has parsed the end flag eof and the cookie value of the last directory entry stored in the header of the current page, if the end flag eof of the last directory entry is 1 there is no need to pre-read the next page holding the directory entries; otherwise the page index is incremented by 1 and the client looks up the next page in the local page cache according to the page index; if the next page exists, the client has already asynchronously sent a read-directory request to the server side, otherwise the client asynchronously sends a read-directory request to the server side and pre-reads the next page holding the directory entries.
The method further includes: the client finds the next page in the local page cache and checks the update flag bit of the next page; if the update flag bit is set, the client receives the next page holding the directory entries; if the update flag bit is not set, the client keeps waiting until the server side returns the next page and updates that page in the local page cache.
The method further includes: reading the directory entries of the next page, until the end flag eof of the last directory entry stored in the header of the current page is 1; otherwise the client continues to asynchronously send read-directory requests to the server side, pre-reading the next page holding the directory entries.
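As an illustration only (the patent text publishes no code; the class, function and field names below are assumptions), the step-2 decision just described can be sketched in Python as follows, with a thread-pool Future standing in for the asynchronous read-directory request:

```python
from concurrent.futures import ThreadPoolExecutor

# Assumed page shape for this sketch: the header carries the eof flag and the
# cookie of the page's last directory entry, as described in step 2 above.
class Page:
    def __init__(self, entries, eof, last_cookie):
        self.entries = entries
        self.eof = eof                  # 1 if this page holds the directory's last entry
        self.last_cookie = last_cookie  # cookie of the last entry in this page

_pool = ThreadPoolExecutor(max_workers=1)

def maybe_prefetch_next(page, page_index, send_readdir):
    """Step 2: decide whether to fire an asynchronous read-directory request."""
    if page.eof:
        return None                     # last page of the directory: nothing to pre-read
    # submit() returns immediately with a Future, so the caller can parse the
    # current page while the next page is fetched in the background.
    return _pool.submit(send_readdir, page_index + 1, page.last_cookie)
```

In this sketch a completed Future plays the role the patent assigns to the update flag bit of the cached page: when the client later moves to the next page it can check whether the Future has finished or wait for it.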
The present invention also proposes a system for asynchronously pre-reading directory entries in a distributed file system, including:
a client module, used for the client to obtain the page index of the directory entry and, according to the page index, look up the first page holding said directory entries in the local page cache; if it exists, the directory entries in the page are read, otherwise the client synchronously sends a read-directory request to the server side, the server side sends the first page to the client, and the client stores the first page in the local page cache;
the client parses the end flag eof and the cookie value of the last directory entry stored in the header of the first page and judges whether the end flag eof is 1; if so, the client reads the directory entries in the page, otherwise the client, according to the page index and the cookie value, asynchronously sends a read-directory request to the server side and pre-reads the next page holding the directory entries.
The system for asynchronously pre-reading directory entries in the distributed file system further includes: the client parses the directory entries of the current page and displays them to the user.
The system further includes: when the client has parsed the end flag eof and the cookie value of the last directory entry stored in the header of the current page, if the end flag eof of the last directory entry is 1 there is no need to pre-read the next page holding the directory entries; otherwise the page index is incremented by 1 and the client looks up the next page in the local page cache according to the page index; if the next page exists, the client has already asynchronously sent a read-directory request to the server side, otherwise the client asynchronously sends a read-directory request to the server side and pre-reads the next page holding the directory entries.
The system further includes: the client finds the next page in the local page cache and checks the update flag bit of the next page; if the update flag bit is set, the client receives the next page holding the directory entries; if the update flag bit is not set, the client keeps waiting until the server side returns the next page and updates that page in the local page cache.
The system further includes: reading the directory entries of the next page, until the end flag eof of the last directory entry stored in the header of the current page is 1; otherwise the client continues to asynchronously send read-directory requests to the server side, pre-reading the next page holding the directory entries.
From the above, the advantages of the invention are:
The method and system for asynchronously pre-reading directory entries proposed by the present invention trade space for time: a small amount of local page cache is used to hold the asynchronously pre-read pages. When the client reads a directory for the first time, only the first page holding the directory entries is obtained synchronously; every page from the second page onward is pre-read asynchronously, one after another, and stored in the local page cache, so that directory entries can be found directly in the local page cache when they are read. This reduces the overhead of synchronously fetching directory entries, shortens the execution time of directory-entry reading and improves the performance of directory-entry reading.
Description of the drawings
Fig. 1 is a schematic diagram of the directory-entry asynchronous pre-reading apparatus of the present invention;
Fig. 2 is a schematic diagram of the organization of directory entries in a page of the local page cache of the present invention;
Fig. 3 is a flow diagram of the client side of the directory-entry asynchronous pre-reading method of the present invention;
Fig. 4 is a flow diagram of the server side of the directory-entry asynchronous pre-reading method of the present invention.
The reference numerals are:
Step 1/2/3/4/5/6/7/8/9/10;
Step 31/32/33/34/35/36;
Step 41/42/43;
Step 51/52/53;
Client module 1, server module 2
Client synchronous read-directory request sending submodule 11
Client read-directory reply receiving submodule 12
Client page cache submodule 13
Client cached-page lookup submodule 14
Client cached-page header parsing submodule 15
Client directory-entry asynchronous pre-read triggering submodule 16
Client asynchronous read-directory request sending submodule 17
Client directory-entry parsing submodule 18
Client directory-entry display submodule 19
Server-side read-directory request receiving submodule 21
Server-side directory-entry lookup submodule 22
Server-side read-directory reply submodule 23
Detailed description of the embodiments
To solve the above problems, the present invention aims to provide a method and system for asynchronously pre-reading directory entries in a distributed file system that can reduce the execution time of directory-entry reading and improve the performance of directory-entry reading.
To achieve the above object, the present invention proposes a system for asynchronously pre-reading directory entries in a distributed file system, including:
a client module, which looks up the pages holding directory entries in the local page cache; synchronously sends read-directory requests to the server module; asynchronously sends read-directory requests to the server module; stores the page returned by the server module in the local page cache; parses the information stored in the page header; parses the directory-entry information in the page; and displays the directory entries of the directory to the user;
a server module, which receives the read-directory requests sent synchronously by the client module and the read-directory requests sent asynchronously by the client module; looks up the directory entries requested by the client module's read-directory request, stores the requested directory entries in a page and fills the necessary information into the page header; and returns the page of the read-directory request to the client.
The client module specifically includes: a client synchronous read-directory request sending submodule, used to synchronously send a read-directory request to the server module and request the directory entries under the directory; a client read-directory reply receiving submodule, used to receive the page of directory entries that the server module returns for the read-directory request; a client page cache submodule, used to cache the page obtained by the reply receiving submodule and store it in the client's local page cache; a client cached-page lookup submodule, used to look up a page holding directory entries in the client's local page cache; a client cached-page header parsing submodule, used to parse the information stored in the header of a page in the client's local page cache; a client directory-entry asynchronous pre-read triggering submodule, which, according to the header information parsed by the cached-page header parsing submodule, judges whether the client needs to asynchronously send a read-directory request to the server module and, if so, triggers the client asynchronous read-directory request sending submodule; a client asynchronous read-directory request sending submodule, used to asynchronously send a read-directory request to the server module and request the pre-read of the next page holding directory entries; a client directory-entry parsing submodule, used to parse the directory-entry information in the pages of the client's local page cache; and a client directory-entry display submodule, used to display all directory entries under the directory to the user one by one.
The server module specifically includes: a server-side read-directory request receiving submodule, used to receive the read-directory requests sent by the client; a server-side directory-entry lookup submodule, used to look up on the server side the directory entries requested by the client's read-directory request, store the directory-entry information in a page and store the necessary information in the header of that page; and a server-side read-directory reply submodule, used to return to the client the page of the read-directory request.
Client synchronous read-directory request sending submodule: used by the client to synchronously send a read-directory request; the client waits for the server side to return the page of the read-directory request, and only after receiving the page returned by the server side can it continue with the subsequent processing.
Client page cache submodule: the client read-directory reply receiving submodule receives the page that the server side returns for the read-directory request, and the client page cache submodule caches the returned page in the client's local page cache. Each page is indexed by a page index, the directory entries are stored in order within each page, and the header of each page stores the end flag eof and the cookie value of the last directory entry in that page. Each directory entry has an end flag eof, used to judge whether it is the last directory entry under the directory: if the end flag eof is 1, the entry is the last directory entry under the directory; if the end flag eof is 0, it is not. Each directory entry also has a cookie value, which indicates the position of the current directory entry.
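For illustration, the page organization just described can be summarized by the following Python sketch; the field names are assumptions made for this sketch, and the actual in-memory layout is not specified in the patent text:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DirEntry:
    name: str
    inode: int
    cookie: int            # position of this entry in the directory stream
    eof: int = 0           # 1 only for the last entry of the whole directory

@dataclass
class PageHeader:
    eof: int               # copy of the eof flag of this page's last entry
    last_cookie: int       # cookie value of this page's last entry
    updated: bool = False  # "update flag bit": set once the page has been filled

@dataclass
class CachedPage:
    index: int                                              # page index used for cache lookup
    header: PageHeader
    entries: List[DirEntry] = field(default_factory=list)   # kept in directory order
```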
Client cached-page lookup submodule: used by the client to look up, according to the page index, a page holding directory entries in the local page cache.
Client cached-page header parsing submodule: for the page that the client has found in the local page cache and is about to parse, the client cached-page header parsing submodule parses the end flag eof and the cookie value of the last directory entry stored in the header of that page.
Client directory-entry asynchronous pre-read triggering submodule: after the cached-page header parsing submodule has parsed the header information of a page, the client judges, according to the end flag eof of the last directory entry in that page, whether the client asynchronous read-directory request sending submodule needs to be triggered to pre-read the next page holding directory entries. If the end flag eof is 1, the client asynchronous read-directory request sending submodule does not need to be triggered; if the end flag eof is 0, it needs to be triggered.
Client asynchronous read-directory request sending submodule (the key module of the directory-entry asynchronous pre-reading apparatus): the client first increments the current page index by 1 and then, according to the cookie value of the last directory entry in the current page and the current page index, asynchronously sends a read-directory request to the server side to pre-read the next page holding directory entries. Unlike the client synchronous read-directory request sending submodule, the client asynchronous read-directory request sending submodule does not need to wait synchronously for the server side to return the page of the read-directory request, so the client can continue with the subsequent directory-entry parsing.
Client directory-entry parsing submodule: the client directory-entry parsing submodule parses information such as the cookie value, inode number, name and file handle of a directory entry. It parses the information of the first directory entry in a page, the client directory-entry display submodule shows the first directory entry to the user, then the remaining directory entries in the page are parsed one by one and the client directory-entry display submodule shows them to the user one by one.
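The entry fields named here (cookie value, inode number, name, file handle) could be decoded from a page buffer along the following lines; the byte layout below is entirely made up for the sake of a runnable example, and the patent does not define an on-wire format:

```python
import struct

# Made-up fixed layout per entry: cookie (u64), inode (u64), eof flag (u8),
# name length (u16), followed by the name bytes.
ENTRY_HDR = struct.Struct("<QQBH")

def parse_entries(buf):
    entries, off = [], 0
    while off < len(buf):
        cookie, inode, eof, name_len = ENTRY_HDR.unpack_from(buf, off)
        off += ENTRY_HDR.size
        name = buf[off:off + name_len].decode()
        off += name_len
        entries.append({"cookie": cookie, "inode": inode, "eof": eof, "name": name})
    return entries

# Round-trip example with two entries; the second one carries eof = 1.
raw = b"".join(ENTRY_HDR.pack(c, 100 + c, int(c == 2), len(n)) + n.encode()
               for c, n in [(1, "a.txt"), (2, "b.txt")])
print(parse_entries(raw))
```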
Server-side directory-entry lookup submodule: used by the server side to look up the directory entries requested by the client's read-directory request, store the requested directory-entry information in a page and, in the header of that page, store the end flag eof and the cookie value of the last directory entry in the page.
The present invention also provides a method for asynchronously pre-reading directory entries in a distributed file system, the method including:
Step 1, the client module executes a read-directory operation for the first time;
Step 2, the client looks up, according to the page index, the first page holding the directory entries in the local page cache; if it does not exist, step 3 is executed, and if it exists, step 4 is executed;
Step 3, the client module synchronously sends a read-directory request to the server side and waits for the server side to return the page of the read-directory request; the client module stores the page returned for the read-directory request in the client's local page cache;
Step 4, the client finds in the local page cache the page to be parsed and first parses the end flag eof and the cookie value of the last directory entry in this page, which are stored in the page header, and judges whether the end flag eof of the last directory entry is 1. If the end flag eof is not 1, the client, according to the page index and the cookie value of the last directory entry, asynchronously sends a read-directory request to the server side and pre-reads the next page holding the directory entries; if the end flag eof is 1, the client does not need to asynchronously send a read-directory request to the server side;
Step 5, the client module starts to parse the information of the first directory entry in the current page, displays the first directory entry of the page to the user, then parses the remaining directory entries in the page one by one and displays them to the user one by one;
Step 6, the client keeps parsing up to the last directory entry of the current page; if its end flag eof is 1, all directory entries under the directory have been read and the read-directory operation ends; if it is not 1, the directory entries under the directory have not all been read, and step 7 is executed;
Step 7, the page index is incremented by 1 and the client looks up, according to the page index, the next page in the client's local page cache. If the next page exists, the client has already asynchronously sent a read-directory request to the server side; if it does not exist, i.e. the client has not asynchronously sent a read-directory request to the server side, the client synchronously sends a read-directory request;
Step 8, the client finds the next page in the local page cache and checks the update flag bit of this page. If the update flag bit of this page is set, the next page holding the directory entries, whose pre-read was requested by the asynchronous read-directory request sent by the client in step 4, has already been returned, and the client receives the next page holding the directory entries. If the update flag bit of this page is not set, i.e. the server side has not yet returned the page for the read-directory request sent asynchronously by the client in step 4, the client keeps waiting until the server side returns the page of the read-directory request and the page in the cache is updated;
Step 9, the client finds the updated page in the local page cache and repeats steps 4 to 7 until the end flag eof of some directory entry is 1, at which point the client has read all the directory entries under the directory; a simplified sketch of this client-side flow is given below.
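Putting steps 1 to 9 together, a simplified client loop might look like the following Python sketch. It is only an illustration under stated assumptions: a background thread stands in for the asynchronous read-directory request, a threading.Event stands in for the page's update flag bit, and all class, function and field names (ReaddirClient, server_fetch and so on) are invented for this sketch rather than taken from the patent.

```python
import threading
from concurrent.futures import ThreadPoolExecutor

class ReaddirClient:
    def __init__(self, server_fetch):
        # server_fetch(page_index, cookie) -> (entries, eof, last_cookie)
        self.server_fetch = server_fetch
        self.cache = {}          # page index -> cached page (steps 3 and 36)
        self.ready = {}          # page index -> threading.Event ("update flag bit", step 8)
        self.pool = ThreadPoolExecutor(max_workers=1)

    def _prefetch(self, page_index, cookie):
        """Asynchronous read-directory request (steps 4 and 43): fill the cache in the background."""
        self.ready[page_index] = threading.Event()
        def work():
            entries, eof, last_cookie = self.server_fetch(page_index, cookie)
            self.cache[page_index] = {"entries": entries, "eof": eof, "cookie": last_cookie}
            self.ready[page_index].set()          # set the update flag bit
        self.pool.submit(work)

    def read_directory(self):
        all_entries = []
        page_index = 0
        # Steps 2 and 3: only the first page is fetched synchronously.
        if page_index not in self.cache:
            entries, eof, cookie = self.server_fetch(page_index, 0)
            self.cache[page_index] = {"entries": entries, "eof": eof, "cookie": cookie}
        while True:
            page = self.cache[page_index]
            # Step 4: trigger the prefetch of the next page before parsing this one.
            if not page["eof"] and (page_index + 1) not in self.cache \
                    and (page_index + 1) not in self.ready:
                self._prefetch(page_index + 1, page["cookie"])
            # Step 5: parse (here simply collect) the entries of the current page.
            all_entries.extend(page["entries"])
            # Step 6: finished once the page's last entry carries eof = 1.
            if page["eof"]:
                return all_entries
            # Steps 7 and 8: move to the next page, waiting on its update flag if needed.
            page_index += 1
            if page_index not in self.cache:
                if page_index in self.ready:
                    self.ready[page_index].wait()      # prefetch in flight: wait for it
                else:                                   # no prefetch was sent: fetch synchronously
                    entries, eof, cookie = self.server_fetch(
                        page_index, self.cache[page_index - 1]["cookie"])
                    self.cache[page_index] = {"entries": entries, "eof": eof, "cookie": cookie}

# Tiny demo against a fake server that serves 3 pages of 2 entries each.
def fake_server(page_index, cookie):
    return [f"file{page_index * 2 + i}" for i in range(2)], page_index == 2, (page_index + 1) * 2

print(ReaddirClient(fake_server).read_directory())
```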
Step 3 further comprises the following steps:
Step 31, the client synchronous read-directory request sending submodule synchronously sends a read-directory request to the server module to obtain a page holding directory entries;
Step 32, the server-side read-directory request receiving submodule receives the read-directory request sent by the client;
Step 33, the server-side directory-entry lookup submodule looks up on the server side the directory entries requested by the client's read-directory request, stores the directory-entry information in a page and, in the header of this page, stores the end flag eof and the cookie value of the last directory entry in the page;
Step 34, the server-side read-directory reply submodule returns to the client the page of the read-directory request;
Step 35, the client read-directory reply receiving submodule receives the page of the read-directory request returned by the server side;
Step 36, the client page cache submodule stores the page returned by the server side for the read-directory request in the local page cache.
Step 4 further comprises the following steps:
Step 41, the client cached-page header parsing submodule parses the end flag eof and the cookie value of the last directory entry in the page, which are stored in the header of the current page;
Step 42, the client directory-entry asynchronous pre-read triggering submodule judges, according to the end flag eof of the last directory entry in the current page, whether the end flag eof is 1; if it is not 1, the cookie value of the last directory entry is saved and step 43 is executed; if it is 1, the client does not need to asynchronously send a read-directory request to the server side;
Step 43, the client asynchronous read-directory request sending submodule increments the page index by 1 and, according to the page index and the cookie value of the last directory entry, the client asynchronously sends a read-directory request.
Step 5 further comprises the following steps:
Step 51, the client directory-entry parsing submodule parses the information of the first directory entry in this page;
Step 52, the client directory-entry display submodule displays the first directory entry of the page to the user;
Step 53, the client directory-entry parsing submodule then parses the remaining directory entries in the page one by one, and the client directory-entry display submodule displays the directory entries to the user one by one.
Unlike the original directory-entry reading process of the parallel network file system pNFS, in which the client sends a read-directory request and the server side returns to the client a page in which the directory entries are stored in order, in the directory-entry asynchronous pre-reading method of the present invention the server side not only returns to the client a page in which the directory entries are stored in order, but also stores in the header of the page the end flag eof and the cookie value of the last directory entry in the page. The end flag eof of the last directory entry stored in the page header is used by the client to decide whether the directory-entry asynchronous pre-read triggering submodule needs to asynchronously send a read-directory request to the server side and pre-read the next page holding directory entries. The cookie value of the last directory entry stored in the page header is used by the client when asynchronously sending the read-directory request to the server side.
In step 3 the client synchronously sends a read-directory request to the server side and obtains the first page holding the directory entries. From the information stored in the header of the first page, the client can judge whether a second page exists. If there is a second page, step 4 is triggered: the client asynchronously sends a read-directory request to the server side and pre-reads the second page holding the directory entries, without having to wait for the server side to return the requested second page; the client starts to parse the directory-entry information in the first page and displays the entries to the user one by one. The process in which the client parses and displays the directory entries of the first page and the process in which the client asynchronously pre-reads the second page are executed concurrently. By the time the client has read all the entries of the first page, the second page pre-read in step 4 has already been returned and stored in the client's local page cache, so the second page is found directly in the local page cache; the client does not need to synchronously send a request for the second page to the server side and wait for the server side to return it. This reduces the time spent waiting synchronously, i.e. shortens the time of directory-entry reading and improves directory-entry reading performance. If the directory has a third page, then after the second page has been found in the local page cache, the step-4 process in which the client asynchronously sends a read-directory request to pre-read the third page is triggered, and so on.
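To make the timing argument concrete, the following self-contained simulation (assumed numbers: 10 ms of server latency per page and 10 ms of parsing per page, five pages) compares the synchronous per-page loop with the parse-while-prefetching pipeline described above. It only illustrates why overlapping the two activities hides the round-trip latency; it is not a measurement of the patented system.

```python
import time
from concurrent.futures import ThreadPoolExecutor

PAGES, FETCH_S, PARSE_S = 5, 0.010, 0.010

def fetch(i):                  # simulated server round trip for one page
    time.sleep(FETCH_S)
    return f"page{i}"

def parse(page):               # simulated per-page parsing and display work
    time.sleep(PARSE_S)

def run_sync():
    for i in range(PAGES):
        parse(fetch(i))        # wait for each page, then parse it

def run_pipelined():
    with ThreadPoolExecutor(max_workers=1) as pool:
        fut = pool.submit(fetch, 0)               # first page
        for i in range(PAGES):
            page = fut.result()                   # waits only if the prefetch is not done yet
            if i + 1 < PAGES:
                fut = pool.submit(fetch, i + 1)   # prefetch the next page ...
            parse(page)                           # ... while the current page is parsed

for name, fn in (("synchronous", run_sync), ("pipelined", run_pipelined)):
    t0 = time.perf_counter()
    fn()
    print(f"{name:12s} {1000 * (time.perf_counter() - t0):.1f} ms")
```

With these assumed delays the pipelined variant takes roughly one fetch plus five parses, while the synchronous variant takes roughly five fetches plus five parses, which matches the reasoning above.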
An embodiment of the present invention is given below:
Fig. 1 is a schematic diagram of the directory-entry asynchronous pre-reading system of the present invention. As shown in Fig. 1, the system includes:
a client module 1, which looks up the pages holding directory entries in the local page cache; synchronously sends read-directory requests to the server module 2; stores the page returned by the server module 2 in the local page cache; parses the information stored in the page header; asynchronously sends read-directory requests to the server module 2; parses the directory-entry information in the page; and displays the directory entries of the directory to the user;
The client module further includes the following submodules:
a client synchronous read-directory request sending submodule 11, used to synchronously send a read-directory request to the server module and request a page holding directory entries;
a client read-directory reply receiving submodule 12, used to receive the page that the server module returns for the read-directory request;
a client page cache submodule 13, used to cache the page obtained by the reply receiving submodule and store it in the local page cache;
a client cached-page lookup submodule 14, used to look up a page holding directory entries in the local page cache;
a client cached-page header parsing submodule 15, used to parse the information stored in the header of a page in the client's local page cache;
a client directory-entry asynchronous pre-read triggering submodule 16, which, according to the header information parsed by the cached-page header parsing submodule, judges whether the client needs to asynchronously send a read-directory request to the server module and, if so, triggers the client asynchronous read-directory request sending submodule;
a client asynchronous read-directory request sending submodule 17, used to asynchronously send a read-directory request to the server module and request the pre-read of the next page holding directory entries;
a client directory-entry parsing submodule 18, used to parse the directory-entry information in the pages stored in the client's local page cache;
a client directory-entry display submodule 19, used to display all directory entries under the directory to the user one by one;
a server module 2, which receives the read-directory requests sent synchronously by the client module 1 and the read-directory requests sent asynchronously by the client module 1; looks up the directory entries requested by the client module 1's read-directory request, stores the requested directory entries in a page and fills the necessary information into the page header; and returns the page of the read-directory request to the client module 1.
The server module 2 further includes the following submodules:
a server-side read-directory request receiving submodule 21, used to receive the read-directory requests sent synchronously and asynchronously by the client;
a server-side directory-entry lookup submodule 22, used to look up on the server side the directory entries requested by the client's read-directory request, store the directory-entry information in a page and store in the page header the end flag eof and the cookie value of the last directory entry in the page;
a server-side read-directory reply submodule 23, used to return to the client the page of the read-directory request.
The present invention also provides a method for asynchronously pre-reading directory entries in a distributed file system. Fig. 2 is a flow diagram of the client side of the asynchronous read-directory method of the present invention. With reference to Fig. 2, the method comprises the following steps:
Step 1, the client module executes a read-directory operation for the first time;
Step 2, the client module looks up, according to the page index, whether the local page cache holds a page of the directory entries; if not, step 3 is executed, and if so, step 4 is executed;
Step 3, the client module synchronously sends a read-directory request to the server side and waits for the server side to return the page of the read-directory request; the client module stores the page returned by the server side for the read-directory request in the local page cache;
Step 4, the client finds in the local page cache the page to be parsed and first parses the end flag eof and the cookie value of the last directory entry in this page, which are stored in the page header, and judges whether the end flag eof of the last directory entry is 1. If the end flag eof is not 1, the client, according to the page index and the cookie value of the last directory entry, asynchronously sends a read-directory request to the server side and pre-reads the next page holding the directory entries; if the end flag eof is 1, the client does not need to asynchronously send a read-directory request to the server side;
Step 5, the client module parses the information of the first directory entry in this page, displays the first directory entry of the page to the user, then parses the remaining directory entries in the page one by one and displays them to the user one by one;
Step 6, the client keeps reading up to the last directory entry of this page; if its end flag eof is 1, all directory entries under the directory have been read and step 10 is executed; if it is not 1, the directory entries under the directory have not all been read and step 7 is executed;
Step 7, the page index is incremented by 1 and the client looks up, according to the page index, the next page in the local page cache. When the next page is found, the client checks the update flag bit of this page. If the update flag bit of this page is set, i.e. the next page holding the directory entries, pre-read by the asynchronous read-directory request the client sent to the server side in step 4, has already been returned, the client receives the next page holding the directory entries and step 9 is executed. If the update flag bit of this page is not set, i.e. the server side has not yet returned the page requested by the asynchronous read-directory request sent in step 4, step 8 is executed;
Step 8, the client keeps waiting until it receives the page that the server side returns for the read-directory request, sets the update flag bit of this page, and step 9 is executed;
Step 9, the client finds the updated page in the local page cache and repeats steps 4 to 7 until the end flag eof of some directory entry is 1, at which point the client has read all the directory entries under the directory;
Step 10, the client's directory reading ends.
Step 3 further comprises the following steps:
Step 31, the client synchronous read-directory request sending submodule 12 synchronously sends a read-directory request to the server module to obtain a page holding directory entries;
Step 32, the server-side read-directory request receiving submodule 21 receives the read-directory request sent by the client;
Step 33, the server-side directory-entry lookup submodule 22 looks up on the server side the directory entries requested by the client and stores them in a page;
Step 34, the server-side read-directory reply submodule 23 returns to the client the page of the read-directory request;
Step 35, the client read-directory reply receiving submodule 12 receives the page of the read-directory request returned by the server side;
Step 36, the client page cache submodule 13 stores the page returned by the server side for the read-directory request in the local cache;
Step 4 further comprises the following steps:
Step 41, the client directory-entry parsing submodule 14 parses the directory entries in the current page one by one, parsing only the cookie and the end flag eof of each directory entry, until the last directory entry of the page has been parsed;
Step 42, the client judges whether the end flag eof of the last directory entry is 1; if not, the cookie value of the last directory entry is saved and step 43 is executed; if so, the client does not need to asynchronously send a read-directory request to the server side;
Step 43, the client asynchronous read-directory request sending submodule 15 increments the page index by 1 and, according to the page index and the cookie value of the last directory entry, the client asynchronously sends a read-directory request;
Step 5 further comprises the following steps:
Step 51, the client directory-entry parsing submodule 15 parses the information of the first directory entry in this page;
Step 52, the client directory-entry display submodule 17 displays the first directory entry of the page to the user;
Step 53, the client directory-entry parsing submodule 15 then parses the remaining directory entries in the page one by one, and the client directory-entry display submodule 17 displays the directory entries to the user one by one;
The present invention also provides a method for asynchronously pre-reading directory entries in a distributed file system. Fig. 3 is a flow diagram of the server side of the asynchronous read-directory method of the present invention. With reference to Fig. 3, the method includes:
Step 1, the server side receives a read-directory request sent by the client either synchronously or asynchronously;
Step 2, the server side looks up the directory entries requested by the client's read-directory request, stores the directory-entry information in a page and at the same time stores in the header of this page the end flag eof and the cookie value of the last directory entry in the page;
Step 3, the server side returns to the client the page of the read-directory request; a sketch of this server-side flow is given below.
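A minimal sketch of the server-side flow just described (steps 1 to 3), under the assumption that each request carries the page index and the cookie of the last entry the client has already seen; the request and response field names, the page size and the in-memory directory are all made up for this illustration:

```python
# Hypothetical server-side readdir handler: look up the entries after `cookie`,
# store them in one page, and record eof/last_cookie in the page header.

DIRECTORY = [(i + 1, f"file{i}") for i in range(10)]   # (cookie, name), kept in order
PER_PAGE = 4

def handle_readdir(request):
    cookie = request["cookie"]
    chosen = [(c, n) for (c, n) in DIRECTORY if c > cookie][:PER_PAGE]
    last_cookie = chosen[-1][0] if chosen else cookie
    return {
        "page_index": request["page_index"],
        "entries": [n for (_, n) in chosen],
        "header": {
            "eof": 1 if last_cookie == DIRECTORY[-1][0] else 0,   # last entry reached?
            "last_cookie": last_cookie,
        },
    }

# First page (synchronous request) and last page (asynchronous pre-read request).
print(handle_readdir({"page_index": 0, "cookie": 0}))
print(handle_readdir({"page_index": 2, "cookie": 8}))
```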

Claims (6)

1. A method for asynchronously pre-reading directory entries in a distributed file system, characterized in that it includes:
Step 1, the client obtains the page index of the directory entry and, according to the page index, looks up the first page of said directory entries in the local page cache; if it exists, step 2 is executed, otherwise the client synchronously sends a read-directory request to the server side, the server side sends the first page to the client, and the client stores the first page in the local page cache;
Step 2, the client parses the end flag eof and the cookie value of the last directory entry stored in the header of the first page and judges whether the end flag eof is 1; if so, the client reads the directory entries in the first page, otherwise the client, according to the page index and the cookie value, asynchronously sends a read-directory request to the server side and pre-reads the next page holding the directory entries;
further including: the client parses the directory entries in the current page and displays them to the user;
when the client has parsed the end flag eof and the cookie value of the last directory entry stored in the header of the current page, if the end flag eof of the last directory entry is 1, there is no need to pre-read the next page holding the directory entries; otherwise the page index is incremented by 1 and the client looks up the next page in the local page cache according to the page index; if the next page exists, the client has already asynchronously sent a read-directory request to the server side, otherwise the client asynchronously sends a read-directory request to the server side and pre-reads the next page holding the directory entries.
2. The method for asynchronously pre-reading directory entries in a distributed file system according to claim 1, characterized in that it further includes: the client finds the next page in the local page cache and checks the update flag bit of the next page; if the update flag bit is set, the client receives the next page holding the directory entries; if the update flag bit is not set, the client keeps waiting until the server side returns the next page and updates that page in the local page cache.
3. The method for asynchronously pre-reading directory entries in a distributed file system according to claim 2, characterized in that it further includes: reading the directory entries of the next page, until the end flag eof of the last directory entry stored in the header of the current page is 1; otherwise the client continues to asynchronously send read-directory requests to the server side, pre-reading the next page holding the directory entries.
4. A system for asynchronously pre-reading directory entries in a distributed file system, characterized in that it includes:
a client module, used for the client to obtain the page index of the directory entry and, according to the page index, look up the first page of said directory entries in the local page cache; if it exists, the directory entries in the first page are read, otherwise the client synchronously sends a read-directory request to the server side, the server side sends the first page to the client, and the client stores the first page in the local page cache;
the client parses the end flag eof and the cookie value of the last directory entry stored in the header of the first page and judges whether the end flag eof is 1; if so, the client reads the directory entries in the first page, otherwise the client, according to the page index and the cookie value, asynchronously sends a read-directory request to the server side and pre-reads the next page holding the directory entries;
further including: the client parses the directory entries in the current page and displays them to the user;
when the client has parsed the end flag eof and the cookie value of the last directory entry stored in the header of the current page, if the end flag eof of the last directory entry is 1, there is no need to pre-read the next page holding the directory entries; otherwise the page index is incremented by 1 and the client looks up the next page in the local page cache according to the page index; if the next page exists, the client has already asynchronously sent a read-directory request to the server side, otherwise the client asynchronously sends a read-directory request to the server side and pre-reads the next page holding the directory entries.
5. The system for asynchronously pre-reading directory entries in a distributed file system according to claim 4, characterized in that it further includes: the client finds the next page in the local page cache and checks the update flag bit of the next page; if the update flag bit is set, the client receives the next page holding the directory entries; if the update flag bit is not set, the client keeps waiting until the server side returns the next page and updates that page in the local page cache.
6. The system for asynchronously pre-reading directory entries in a distributed file system according to claim 5, characterized in that it further includes: reading the directory entries of the next page, until the end flag eof of the last directory entry stored in the header of the current page is 1; otherwise the client continues to asynchronously send read-directory requests to the server side, pre-reading the next page holding the directory entries.
CN201510401114.1A 2015-07-09 2015-07-09 Method and system for asynchronously pre-reading directory entries in a distributed file system Active CN105138545B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510401114.1A CN105138545B (en) Method and system for asynchronously pre-reading directory entries in a distributed file system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510401114.1A CN105138545B (en) Method and system for asynchronously pre-reading directory entries in a distributed file system

Publications (2)

Publication Number Publication Date
CN105138545A CN105138545A (en) 2015-12-09
CN105138545B (en) 2018-10-09

Family

ID=54723895

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510401114.1A Active CN105138545B (en) 2015-07-09 2015-07-09 Method and system for asynchronously pre-reading directory entries in a distributed file system

Country Status (1)

Country Link
CN (1) CN105138545B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107168649B (en) * 2017-05-05 2019-12-17 南京城市职业学院 method and device for data distribution in distributed storage system
CN107491545A (en) * 2017-08-25 2017-12-19 郑州云海信息技术有限公司 The catalogue read method and client of a kind of distributed memory system
CN110765086B (en) * 2019-10-25 2022-08-02 浪潮电子信息产业股份有限公司 Directory reading method and system for small files, electronic equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101388824A (en) * 2008-10-15 2009-03-18 中国科学院计算技术研究所 File reading method and system under sliced memory mode in cluster system
CN102385623A (en) * 2011-10-25 2012-03-21 曙光信息产业(北京)有限公司 Catalogue access method in DFS (distributed file system)
CN102541985A (en) * 2011-10-25 2012-07-04 曙光信息产业(北京)有限公司 Organization method of client directory cache in distributed file system
CN103916465A (en) * 2014-03-21 2014-07-09 中国科学院计算技术研究所 Data pre-reading device based on distributed file system and method thereof

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4186456B2 (en) * 2001-11-28 2008-11-26 沖電気工業株式会社 Distributed file sharing system and control method thereof

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101388824A (en) * 2008-10-15 2009-03-18 中国科学院计算技术研究所 File reading method and system under sliced memory mode in cluster system
CN102385623A (en) * 2011-10-25 2012-03-21 曙光信息产业(北京)有限公司 Catalogue access method in DFS (distributed file system)
CN102541985A (en) * 2011-10-25 2012-07-04 曙光信息产业(北京)有限公司 Organization method of client directory cache in distributed file system
CN103916465A (en) * 2014-03-21 2014-07-09 中国科学院计算技术研究所 Data pre-reading device based on distributed file system and method thereof

Also Published As

Publication number Publication date
CN105138545A (en) 2015-12-09

Similar Documents

Publication Publication Date Title
CN101201827B (en) Method and system for displaying web page
CN103020315B (en) A kind of mass small documents storage means based on master-salve distributed file system
CN104111804B (en) A kind of distributed file system
US9846711B2 (en) LSM cache
US6754799B2 (en) System and method for indexing and retrieving cached objects
CN106649349B (en) Data caching method, device and system for game application
CN104714965B (en) Static resource De-weight method, static resource management method and device
CN105183839A (en) Hadoop-based storage optimizing method for small file hierachical indexing
CN104679898A (en) Big data access method
CN110134648A (en) Log processing method, device, equipment, system and computer readable storage medium
CN104077310B (en) Load the method, apparatus and system of resource file
CN103595797B (en) Caching method for distributed storage system
CN104765840A (en) Big data distributed storage method and device
CN103294785B (en) A kind of packet-based metadata server cluster management method
CN110399348A (en) File deletes method, apparatus, system and computer readable storage medium again
CN105138545B (en) Method and system for asynchronously pre-reading directory entries in a distributed file system
CN108932332A (en) The loading method and device of static resource
CN109144413A (en) A kind of metadata management method and device
CN103118007A (en) Method and system of acquiring user access behavior
CN110784498B (en) Personalized data disaster tolerance method and device
CN104899161B (en) A kind of caching method of the continuous data protection based on cloud storage environment
CN102546674A (en) Directory tree caching system and method based on network storage device
CN104778229A (en) Telecommunication service small file storage system and method based on Hadoop
CN102722405A (en) Counting method in high concurrent and multithreaded application and system
CN103942301B (en) Distributed file system oriented to access and application of multiple data types

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant