CN110765086A - Directory reading method and system for small files, electronic equipment and storage medium


Info

Publication number
CN110765086A
Authority
CN
China
Prior art keywords
directory
reading
fragments
fragment
read
Prior art date
Legal status
Granted
Application number
CN201911026031.3A
Other languages
Chinese (zh)
Other versions
CN110765086B (en)
Inventor
罗浩
李�杰
Current Assignee
Inspur Electronic Information Industry Co Ltd
Original Assignee
Langchao Electronic Information Industry Co Ltd
Priority date
Filing date
Publication date
Application filed by Langchao Electronic Information Industry Co Ltd
Priority to CN201911026031.3A
Publication of CN110765086A
Application granted
Publication of CN110765086B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10: File systems; File servers
    • G06F16/18: File system types
    • G06F16/182: Distributed file systems
    • G06F16/17: Details of further file system functions
    • G06F16/172: Caching, prefetching or hoarding of files

Abstract

The application discloses a directory reading method for small files. The method includes: sending a first directory pre-reading instruction to a plurality of metadata servers of a distributed file system, so that each metadata server transmits any number of directory fragments to the client cache, the directory fragments being obtained by fragmenting the total directory of all the small files; receiving a directory reading instruction and reading the directory fragments in the client cache according to it; and determining the currently read directory fragment and sending a second directory pre-reading instruction to the metadata server where that fragment is located, so that directory fragments not yet transmitted are sent to the client cache. The method can improve the reading speed of small-file directories. The application also discloses a directory reading system for small files, an electronic device and a storage medium, which have the same beneficial effects.

Description

Directory reading method and system for small files, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method and a system for reading a directory of a small file, an electronic device, and a storage medium.
Background
Small files are files ranging from a few KB to tens of KB in size. For example, shopping websites need to store massive numbers of product images, and search engines need to capture billions of web pages from the Internet; both are small-file workloads. In distributed storage applications, as the amount of data grows, a single directory may come to hold millions or even billions of small files. In the related art, when all the small-file entries are kept under a single directory, the read rate of that directory is low because the number of entries is too large.
Therefore, how to increase the reading rate of the small file directory is a technical problem that needs to be solved by those skilled in the art.
Disclosure of Invention
The invention aims to provide a method and a system for reading a small file directory, electronic equipment and a storage medium, which can improve the reading speed of the small file directory.
In order to solve the above technical problem, the present application provides a method for reading a directory of a small file, where the method for reading a directory of a small file includes:
sending a first directory pre-reading instruction to a plurality of metadata servers of the distributed file system so that each metadata server transmits any number of directory fragments to a client cache; the directory fragments are obtained by carrying out fragmentation processing on the total directory of all the small files;
receiving a directory reading instruction, and reading the directory fragments in the client cache according to the directory reading instruction;
and determining the currently read directory fragment, and sending a second directory pre-reading instruction to a metadata server where the currently read directory fragment is located so as to transmit the directory fragment which is not transmitted to the client cache.
Optionally, before sending the first directory pre-read instruction to the plurality of metadata servers of the distributed file system, the method further includes:
and dividing the total directory of all the small files into N directory fragments, and storing the directory fragments to a plurality of metadata servers of the distributed file system.
Optionally, transmitting any number of directory fragments to the client cache by each metadata server includes:
and each metadata server randomly selects 1 directory fragment and transmits the directory fragment to the client cache.
Optionally, the transmitting the directory fragments that are not transmitted to the client cache includes:
and transmitting the next directory fragment of the currently read directory fragments to the client cache.
Optionally, the sending the first directory pre-read instruction to the plurality of metadata servers of the distributed file system includes:
and sending a first directory pre-reading instruction to a plurality of metadata servers of the distributed file system in an asynchronous request mode.
Optionally, before reading the directory fragment in the client cache according to the directory reading instruction, the method further includes:
cache management instructions are generated such that the directory fragments are not freed in the LRU list.
Optionally, the method further includes:
judging whether the read directory fragments comprise all target directories corresponding to the target reading instruction or not;
if yes, returning a reading result.
The present application also provides a system for reading a directory of a small file, including:
the first-time pre-reading module is used for sending a first directory pre-reading instruction to a plurality of metadata servers of the distributed file system so that each metadata server can transmit any number of directory fragments to a client cache; the directory fragments are obtained by carrying out fragmentation processing on the total directory of all the small files;
the directory reading module is used for receiving a directory reading instruction and reading the directory fragments in the client cache according to the directory reading instruction;
and the non-primary pre-reading module is used for determining the currently read directory fragment and sending a second directory pre-reading instruction to the metadata server where the currently read directory fragment is located so as to transmit the directory fragment which is not transmitted to the client cache.
The application also provides a storage medium on which a computer program is stored; when the computer program is executed, it implements the steps of the directory reading method for small files described above.
The application also provides an electronic device comprising a memory and a processor; a computer program is stored in the memory, and when the processor invokes the computer program in the memory it implements the steps of the directory reading method for small files described above.
The application provides a directory reading method of small files, which comprises the following steps: sending a first directory pre-reading instruction to a plurality of metadata servers of the distributed file system so that each metadata server transmits any number of directory fragments to a client cache; the directory fragments are obtained by carrying out fragmentation processing on the total directory of all the small files; receiving a directory reading instruction, and reading the directory fragments in the client cache according to the directory reading instruction; and determining the currently read directory fragment, and sending a second directory pre-reading instruction to a metadata server where the currently read directory fragment is located so as to transmit the directory fragment which is not transmitted to the client cache.
The method and the device have the advantages that the total directories of all the small files are subjected to fragmentation processing to obtain a plurality of directory fragments, and the directory fragments are stored to a plurality of metadata servers respectively. When a client needs to read a directory, a first directory pre-reading instruction is sent to a plurality of metadata servers, so that the metadata servers transmit a preset number of directory fragments to the client. The client may read the directory transmitted by the metadata server from the client cache. When the client reads the directory fragment, the client may continue to send a second directory pre-read instruction to the metadata server where the currently read directory fragment is located, so as to transmit the directory fragment that is not transmitted to the client cache. In the process, the reading of the client side for the total directory of the small files is converted into the reading for the single directory fragment, so that the time for the client side to read the directory of the small files is shortened, and the reading speed of the directory of the small files is improved. The application also provides a directory reading system of the small files, the electronic equipment and a storage medium, which have the beneficial effects and are not repeated herein.
Drawings
In order to more clearly illustrate the embodiments of the present application, the drawings needed for the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings can be obtained by those skilled in the art without inventive effort.
Fig. 1 is a flowchart of a method for reading a directory of a small file according to an embodiment of the present application;
fig. 2 is a schematic diagram illustrating a fragmentation principle of a small file total directory according to an embodiment of the present application;
fig. 3 is a schematic view illustrating a browsing principle of a large number of small files in a distributed file system according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a directory reading system for small files according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Referring to fig. 1, fig. 1 is a flowchart of a method for reading a directory of a small file according to an embodiment of the present application.
The specific steps may include:
s101: sending a first directory pre-reading instruction to a plurality of metadata servers of the distributed file system so that each metadata server transmits any number of directory fragments to a client cache;
in this embodiment, the total directory of all the small files may be fragmented in advance to obtain a plurality of directory fragments, and the directory fragments are stored in a plurality of metadata servers of the distributed file system, where the number of directory fragments is not limited, and the number of small files included in the total directory is positively correlated with the number of directory fragments. As a possible implementation manner, the present embodiment may set a maximum value of the number of small files included in each directory fragment, so as to perform fragment processing; in this embodiment, the number K of the directory fragments may also be determined according to the number of the small files included in the total directory, and then the total directory is divided into K directory fragments, where the difference in the number of the small files between any two directory fragments is smaller than a fixed value.
The small file mentioned in this embodiment refers to a file smaller than 100 KB in the distributed file system. A plurality of metadata servers may exist in the distributed file system, and each metadata server may store a plurality of directory fragments. As one possible implementation, the directory fragments stored on a metadata server are fragments that are adjacent in the total directory. For example, if the total directory of the small files is ABCDEFG and it is divided into four fragments, where fragment 1 is AB, fragment 2 is CD, fragment 3 is EF and fragment 4 is G, and the fragments are stored on metadata server A and metadata server B, then fragment 1 and fragment 2 may be stored on metadata server A, and fragment 3 and fragment 4 on metadata server B.
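The adjacency described above can be achieved, for instance, by placing contiguous runs of fragments on each server. The toy Python sketch below does exactly that for the ABCDEFG example; the MDS naming and the contiguous-block policy are illustrative assumptions, since the patent does not prescribe a particular placement algorithm.

```python
def place_fragments(fragments, num_servers):
    """Assign directory fragments to metadata servers so that each server
    holds a contiguous run of fragments in total-directory order."""
    per_server = -(-len(fragments) // num_servers)     # ceiling division
    placement = {}
    for server in range(num_servers):
        chunk = fragments[server * per_server:(server + 1) * per_server]
        if chunk:
            placement[f"MDS{server}"] = chunk
    return placement

# The ABCDEFG example from the text: four fragments placed on two servers.
fragments = [["A", "B"], ["C", "D"], ["E", "F"], ["G"]]
print(place_fragments(fragments, 2))
# {'MDS0': [['A', 'B'], ['C', 'D']], 'MDS1': [['E', 'F'], ['G']]}
```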
The executing entity in this embodiment may be a client connected to the distributed file system. When the client needs to read the small-file directory, it may first send a first directory pre-reading instruction to the metadata servers of the distributed file system on which directory fragments are stored, so that the metadata servers transmit directory fragments to the client cache for pre-reading. After receiving the first directory pre-reading instruction, a metadata server may transmit a preset number of directory fragments to the client cache, so that the client can read the directory fragments from the cache. The number of directory fragments transmitted by a metadata server after receiving the first directory pre-reading instruction is not limited in this embodiment; the preset number is preferably set to 1 so as to reduce the amount of data transmitted. It should be noted that the sending of the first directory pre-reading instruction in this step applies to the case where the client reads the small-file directory for the first time.
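To make the first pre-read concrete, here is a small Python sketch in which each toy metadata server returns one fragment into a dictionary standing in for the client cache; the class and function names are hypothetical and do not correspond to any real file-system API.

```python
class MetadataServer:
    """Toy metadata server that holds an ordered list of directory fragments."""
    def __init__(self, name, fragments):
        self.name, self.fragments = name, fragments
        self.sent = set()                      # indices already sent to the client cache

    def handle_first_preread(self, count=1):
        """Return `count` fragments (preferably 1) for the first pre-read."""
        picked = []
        for idx in range(len(self.fragments)):
            if len(picked) == count:
                break
            if idx not in self.sent:
                self.sent.add(idx)
                picked.append((idx, self.fragments[idx]))
        return picked

def first_directory_preread(servers, client_cache):
    """Send the first directory pre-reading instruction to every metadata
    server and cache one fragment per server on the client side."""
    for mds in servers:
        for idx, frag in mds.handle_first_preread(count=1):
            client_cache[(mds.name, idx)] = frag
    return client_cache

servers = [MetadataServer("MDS0", [["a.txt", "b.txt"], ["c.txt"]]),
           MetadataServer("MDS1", [["d.txt"], ["e.txt", "f.txt"]])]
print(first_directory_preread(servers, {}))
# {('MDS0', 0): ['a.txt', 'b.txt'], ('MDS1', 0): ['d.txt']}
```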
S102: receiving a directory reading instruction, and reading the directory fragments in the client cache according to the directory reading instruction;
the directory pre-reading instruction mentioned in this step is a reading instruction sent by a user to the client, and after receiving the directory reading instruction, the client can read the directory fragments stored in the client cache according to the directory reading instruction.
It should be noted that the client may determine the small files to be read, that is, the target directories, according to the directory reading instruction. If the directory fragments read from the client cache include all the target directories corresponding to the directory reading instruction, the read result may be returned directly and the flow may be ended.
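The completeness check can be sketched as follows (a minimal Python example under the assumption that the cache is a dictionary keyed by metadata server and fragment index, as in the sketches above; the names are illustrative).

```python
def read_from_cache(target_names, client_cache):
    """Read target entries from the cached directory fragments.

    Returns (result, complete): `complete` is True when every requested
    entry is already in the cache, so the result can be returned at once
    and no further pre-read round trip is needed.
    """
    cached_entries = {name for frag in client_cache.values() for name in frag}
    found = [name for name in target_names if name in cached_entries]
    return found, len(found) == len(target_names)

# Cache keyed by (metadata server, fragment index).
cache = {("MDS0", 0): ["a.txt", "b.txt"], ("MDS1", 0): ["d.txt"]}
print(read_from_cache(["a.txt", "d.txt"], cache))   # (['a.txt', 'd.txt'], True)
print(read_from_cache(["a.txt", "e.txt"], cache))   # (['a.txt'], False) -> keep reading
```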
S103: and determining the currently read directory fragment, and sending a second directory pre-reading instruction to a metadata server where the currently read directory fragment is located so as to transmit the directory fragment which is not transmitted to the client cache.
This step may be performed while the directory fragments in the client cache are being read. Since a metadata server transmits data with one directory fragment as the granularity, while a directory fragment in the client cache is being read a second directory pre-reading instruction may be sent to the metadata server where the currently read directory fragment is located, so that directory fragments not yet cached by the client are transmitted to the client cache. In this way the metadata server can keep transmitting directory fragments while the directory fragments already in the client cache are being read. As a feasible implementation, the directory fragment transmitted by the metadata server in this embodiment may be the next fragment, in the preset reading order after the currently read fragment, that has not yet been transmitted to the client. For example, suppose a metadata server stores fragment 1, fragment 2, fragment 3 and fragment 4, and fragment 1 and fragment 2 were transmitted to the client cache after the first directory pre-reading instruction was received; when it is detected that the client is reading fragment 1, the second directory pre-reading instruction may be sent to that metadata server, so that it transmits fragment 3 to the client cache.
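The example in the previous paragraph maps directly onto a small sketch of the second pre-read on the metadata-server side (toy Python code with hypothetical names, not the patent's wire protocol):

```python
class MetadataServer:
    """Toy MDS holding directory fragments in their preset reading order."""
    def __init__(self, name, fragments):
        self.name, self.fragments = name, fragments
        self.sent = set()                       # fragment indices already in the client cache

    def handle_second_preread(self, currently_read_idx):
        """Return the next fragment after the one currently being read that
        has not yet been transmitted to the client cache, if any."""
        for idx in range(currently_read_idx + 1, len(self.fragments)):
            if idx not in self.sent:
                self.sent.add(idx)
                return idx, self.fragments[idx]
        return None                             # nothing left to transmit

mds = MetadataServer("MDS0", [["frag1"], ["frag2"], ["frag3"], ["frag4"]])
mds.sent.update({0, 1})                         # fragments 1 and 2 were sent on the first pre-read
print(mds.handle_second_preread(0))             # (2, ['frag3']): reading fragment 1 triggers fragment 3
```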
In this embodiment, the total directory of all the small files is fragmented to obtain a plurality of directory fragments, and the directory fragments are stored in the plurality of metadata servers respectively. When a client needs to read a directory, a first directory pre-reading instruction is sent to a plurality of metadata servers, so that the metadata servers transmit a preset number of directory fragments to the client. The client may read the directory transmitted by the metadata server from the client cache. When the client reads the directory fragment, the client may continue to send a second directory pre-read instruction to the metadata server where the currently read directory fragment is located, so as to transmit the directory fragment that is not transmitted to the client cache. In the process, the reading of the client side for the total directory of the small files is converted into the reading for the single directory fragment, so that the time for the client side to read the directory of the small files is shortened, and the reading speed of the directory of the small files is improved.
As a further addition to the embodiment corresponding to fig. 1, before the first directory pre-reading instruction is sent to the multiple metadata servers of the distributed file system, the total directory of all the small files may first be divided into N directory fragments, and the directory fragments stored on the multiple metadata servers of the distributed file system. Referring to fig. 2, fig. 2 is a schematic diagram illustrating the fragmentation principle of the small-file total directory according to an embodiment of the present application. For example, a single directory of one million small files may be divided into 128 directory fragments, each holding about 7812 small files, and the directory fragments may be stored on 8 metadata servers, with each metadata server storing 16 directory fragments.
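The arithmetic of that example can be checked directly; the numbers below come from the text above and the small remainder is simply distributed across the fragments.

```python
total_files   = 1_000_000      # one million small files in a single directory
num_fragments = 128
num_servers   = 8

files_per_fragment = total_files // num_fragments   # 7812, with a remainder of 64 entries
fragments_per_mds  = num_fragments // num_servers   # 16 fragments on each metadata server

print(files_per_fragment, total_files % num_fragments, fragments_per_mds)   # 7812 64 16
```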
As a further supplement to the embodiment corresponding to fig. 1, the process in which each metadata server transmits any number of directory fragments to the client cache may specifically be: each metadata server randomly selects 1 directory fragment and transmits it to the client cache. Correspondingly, after a metadata server receives the second directory pre-reading instruction, it may transmit the next directory fragment after the one currently read by the client to the client cache. After the client sends the first pre-reading instruction, several metadata servers may transmit directory fragments to the client cache at the same time; having each metadata server randomly select only 1 directory fragment to transmit therefore reduces the amount of data transferred and avoids affecting other services. When the client reads any directory fragment in the client cache, the metadata server where the currently read fragment is located can transmit the next directory fragment after it to the client cache, so that the client can read the directory fragments continuously. For example, suppose a metadata server stores fragment 1, fragment 2, fragment 3 and fragment 4, and fragment 1 was transmitted to the client cache after the first directory pre-reading instruction was received; when it is detected that the client is reading fragment 1, a second directory pre-reading instruction may be sent to that metadata server, so that it transmits fragment 2 to the client cache.
As a further addition to the embodiment corresponding to fig. 1, the process of sending the first directory pre-reading instruction to the multiple metadata servers of the distributed file system in S101 may be: sending the first directory pre-reading instruction to the multiple metadata servers of the distributed file system as asynchronous requests. This amounts to handling the small-file pre-read as an asynchronous request: after the request is sent to a metadata server, the sender can return immediately without waiting. After receiving the returned result of the asynchronous request, the client releases the memory of the request. Further, before the directory fragments in the client cache are read according to the directory reading instruction, a cache management instruction may be generated so that the directory fragments are not released from the LRU (Least Recently Used) list.
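To make the asynchronous pre-read and the LRU pinning concrete, here is a minimal Python sketch under the assumption that a thread pool stands in for the asynchronous request channel and a set of pinned keys stands in for "not released in the LRU list"; none of these names come from the patent or from a real file-system client API.

```python
from collections import OrderedDict
from concurrent.futures import ThreadPoolExecutor

class ClientCache:
    """Tiny LRU-style cache in which pre-read fragments can be pinned so
    they are not evicted before the client has read them."""
    def __init__(self, capacity):
        self.capacity, self.data, self.pinned = capacity, OrderedDict(), set()

    def put(self, key, fragment, pin=False):
        self.data[key] = fragment
        self.data.move_to_end(key)              # mark as most recently used
        if pin:
            self.pinned.add(key)
        while len(self.data) > self.capacity:
            victim = next((k for k in self.data if k not in self.pinned), None)
            if victim is None:                  # everything left is pinned: stop evicting
                break
            self.data.pop(victim)

    def unpin(self, key):                       # call once the fragment has been read
        self.pinned.discard(key)

def async_first_preread(endpoints, fetch_fn, cache, pool):
    """Fire the first pre-read at every MDS and return immediately; a
    completion callback caches and pins each fragment as its reply arrives."""
    for ep in endpoints:
        fut = pool.submit(fetch_fn, ep)
        fut.add_done_callback(
            lambda f, ep=ep: cache.put((ep, f.result()[0]), f.result()[1], pin=True))

pool = ThreadPoolExecutor(max_workers=2)
cache = ClientCache(capacity=4)
async_first_preread(["MDS0", "MDS1"], lambda ep: (0, [f"{ep}-entry"]), cache, pool)
pool.shutdown(wait=True)                        # demo only: wait so the print sees the replies
print(dict(cache.data), cache.pinned)
```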
The flow described in the above embodiment is explained below by an embodiment in practical use. Referring to fig. 3, fig. 3 is a schematic view illustrating a browsing principle of a large number of small files in a distributed file system according to an embodiment of the present application. In fig. 3, ls indicates a directory read operation, cache is a client cache, client is a client, MDS0, MDS1 and MDS3 are metadata servers, fetch indicates a read operation, and return indicates a fragment directory return operation.
Assume a large directory of M x 10,000 small files is divided into 2^n fragments (with 2^n taken as the smallest power of two not less than M), so that each fragment holds about (M/2^n) x 10,000 files, and the fragments are exported to different metadata servers (MDS). The client fetches the entries from the metadata servers, sending one Readdir request per fragment, i.e. 2^n Readdir calls in total (256 in this example). When the client fetches the directory for the first time, it sends one metadata pre-read to each metadata server, randomly reading one directory fragment on each, and each MDS returns its pre-read result to the client cache. When the client then fetches entries, it traverses the cache area directly and takes the data from it; once the data of a fragment is complete it is returned, otherwise the client waits for the pre-read result to come back. The client cache holds several directory fragments, and when a directory fragment is read for the first time, the next pre-read request to the same MDS is sent.
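The "take it from the cache, otherwise wait for the pre-read result" behaviour can be sketched with a simple per-fragment slot; this is an illustrative Python toy using a threading event, since the patent does not specify the synchronization mechanism.

```python
import threading

class FragmentSlot:
    """Cache slot for one directory fragment; a reader blocks until the
    pre-read result for that fragment has been returned by its MDS."""
    def __init__(self):
        self.entries, self.ready = None, threading.Event()

    def fill(self, entries):                # called when the MDS reply arrives
        self.entries = entries
        self.ready.set()

    def read(self, timeout=5.0):            # called by the directory-reading path
        if not self.ready.wait(timeout):    # wait for the pre-read result to return
            raise TimeoutError("pre-read result not returned yet")
        return self.entries

slot = FragmentSlot()
threading.Timer(0.1, slot.fill, args=[["a.txt", "b.txt"]]).start()   # simulated MDS reply
print(slot.read())                          # blocks briefly, then prints ['a.txt', 'b.txt']
```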
In this embodiment, asynchronous request processing is added for the metadata pre-read: after the request is sent to the MDS, the sender returns directly without waiting. After receiving the returned result of the asynchronous request, the client releases the memory of the request. A metadata pre-read request operation type is added on the client side, and on the MDS side it is processed in the same way as Readdir.
After a directory fragment is returned from the MDS, the client does not necessarily fetch it immediately, so it must be ensured that the fragment is not released from the LRU before the client has obtained this part of the data. When the client reads a fragment, it first tries to obtain it from the client cache; if the current fragment is incomplete or empty, it obtains the fragment from the pre-read buffer. For the directory to be obtained entirely from the cache after one successful full read, the client cache must be sized to hold at least M x 10,000 entries.
This embodiment divides a real directory containing a huge number of entries into a plurality of small fragment directories. When the client performs an ls operation (i.e. a directory read) on the real directory, the read is converted into reads of the small fragment directory files, and these fragment directories are distributed on different MDSs. When the client runs ls on the real directory, the small fragment directory files on the MDSs are read asynchronously. When a fragment on an MDS is accessed for the first time, the metadata on that MDS is pre-read into the client cache. When the client accesses a fragment, it first accesses the data in the client cache and waits for the pre-read to return if the data is incomplete. Each time a fragment on one MDS is accessed, the pre-read of the metadata in the next fragment continues. This embodiment shortens the time needed to browse massive numbers of files and improves the access performance of a service site.
Referring to fig. 4, fig. 4 is a schematic structural diagram of a directory reading system for small files according to an embodiment of the present application;
the system may include:
a first pre-reading module 100, configured to send a first directory pre-reading instruction to multiple metadata servers of a distributed file system, so that each metadata server transmits any number of directory fragments to a client cache; the directory fragments are obtained by carrying out fragmentation processing on the total directory of all the small files;
the directory reading module 200 is configured to receive a directory reading instruction, and read a directory fragment in the client cache according to the directory reading instruction;
the non-primary pre-reading module 300 is configured to determine a currently read directory fragment, and send a second directory pre-reading instruction to a metadata server where the currently read directory fragment is located, so as to transmit the directory fragment that is not transmitted to the client cache.
In this embodiment, the total directory of all the small files is fragmented to obtain a plurality of directory fragments, and the directory fragments are stored in the plurality of metadata servers respectively. When a client needs to read a directory, a first directory pre-reading instruction is sent to a plurality of metadata servers, so that the metadata servers transmit a preset number of directory fragments to the client. The client may read the directory transmitted by the metadata server from the client cache. When the client reads the directory fragment, the client may continue to send a second directory pre-read instruction to the metadata server where the currently read directory fragment is located, so as to transmit the directory fragment that is not transmitted to the client cache. In the process, the reading of the client side for the total directory of the small files is converted into the reading for the single directory fragment, so that the time for the client side to read the directory of the small files is shortened, and the reading speed of the directory of the small files is improved.
Further, the system also includes:
and the fragmentation module is used for dividing the total directory of all the small files into N directory fragments and storing the directory fragments to a plurality of metadata servers of the distributed file system.
Further, the metadata server is configured to randomly select 1 directory fragment and transmit it to the client cache after receiving the first directory pre-reading instruction, and is further configured to transmit the next directory fragment after the currently read directory fragment to the client cache after receiving a second directory pre-reading instruction.
Further, the first-time pre-reading module 100 is specifically configured to send the first directory pre-reading instruction to the plurality of metadata servers of the distributed file system by means of asynchronous requests.
Further, the system also includes:
and the cache management module is used for generating a cache management instruction before the directory fragments in the client cache are read according to the directory reading instruction, so that the directory fragments are not released in the LRU list.
Further, the system also includes:
the judging module is used for judging whether the read directory fragments comprise all target directories corresponding to the target reading instruction or not; if yes, returning a reading result.
Since the embodiment of the system part corresponds to the embodiment of the method part, the embodiment of the system part is described with reference to the embodiment of the method part, and is not repeated here.
The present application also provides a storage medium having a computer program stored thereon which, when executed, can implement the steps provided by the above embodiments. The storage medium may include various media capable of storing program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk.
The application further provides an electronic device, which may include a memory and a processor, where the memory stores a computer program, and the processor may implement the steps provided by the foregoing embodiments when calling the computer program in the memory. Of course, the electronic device may also include various network interfaces, power supplies, and the like.
The embodiments are described in a progressive manner in the specification, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. For the system disclosed by the embodiment, the description is relatively simple because the system corresponds to the method disclosed by the embodiment, and the relevant points can be referred to the method part for description. It should be noted that, for those skilled in the art, it is possible to make several improvements and modifications to the present application without departing from the principle of the present application, and such improvements and modifications also fall within the scope of the claims of the present application.
It is further noted that, in the present specification, relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.

Claims (10)

1. A directory reading method for small files is characterized by comprising the following steps:
sending a first directory pre-reading instruction to a plurality of metadata servers of the distributed file system so that each metadata server transmits any number of directory fragments to a client cache; the directory fragments are obtained by carrying out fragmentation processing on the total directory of all the small files;
receiving a directory reading instruction, and reading the directory fragments in the client cache according to the directory reading instruction;
and determining the currently read directory fragment, and sending a second directory pre-reading instruction to a metadata server where the currently read directory fragment is located so as to transmit the directory fragment which is not transmitted to the client cache.
2. The directory read method of claim 1, further comprising, prior to sending the first directory pre-read instruction to a plurality of metadata servers of the distributed file system:
and dividing the total directory of all the small files into N directory fragments, and storing the directory fragments to a plurality of metadata servers of the distributed file system.
3. The directory read method of claim 1, wherein transmitting any number of directory fragments to a client cache by each metadata server comprises:
and each metadata server randomly selects 1 directory fragment and transmits the directory fragment to the client cache.
4. The directory read method of claim 3, wherein the transmitting the directory fragments that are not transmitted to the client cache comprises:
and transmitting the next directory fragment of the currently read directory fragments to the client cache.
5. The directory read method of claim 1, wherein sending a first directory read-ahead instruction to a plurality of metadata servers of a distributed file system comprises:
and sending a first directory pre-reading instruction to a plurality of metadata servers of the distributed file system in an asynchronous request mode.
6. The directory read method according to claim 1, further comprising, before reading the directory fragment in the client cache according to the directory read instruction:
cache management instructions are generated such that the directory fragments are not freed in the LRU list.
7. The directory read method according to any one of claims 1 to 6, further comprising:
judging whether the read directory fragments comprise all target directories corresponding to the target reading instruction or not;
if yes, returning a reading result.
8. A directory reading system for small files, comprising:
the first-time pre-reading module is used for sending a first directory pre-reading instruction to a plurality of metadata servers of the distributed file system so that each metadata server can transmit any number of directory fragments to a client cache; the directory fragments are obtained by carrying out fragmentation processing on the total directory of all the small files;
the directory reading module is used for receiving a directory reading instruction and reading the directory fragments in the client cache according to the directory reading instruction;
and the non-primary pre-reading module is used for determining the currently read directory fragment and sending a second directory pre-reading instruction to the metadata server where the currently read directory fragment is located so as to transmit the directory fragment which is not transmitted to the client cache.
9. An electronic device, characterized by comprising a memory and a processor, wherein a computer program is stored in the memory, and the processor, when calling the computer program in the memory, implements the steps of the directory reading method for small files according to any one of claims 1 to 7.
10. A storage medium, characterized in that it stores therein computer-executable instructions which, when loaded and executed by a processor, implement the steps of a directory reading method for small files as claimed in any one of claims 1 to 7.
CN201911026031.3A 2019-10-25 2019-10-25 Directory reading method and system for small files, electronic equipment and storage medium Active CN110765086B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911026031.3A CN110765086B (en) 2019-10-25 2019-10-25 Directory reading method and system for small files, electronic equipment and storage medium


Publications (2)

Publication Number Publication Date
CN110765086A (en) 2020-02-07
CN110765086B CN110765086B (en) 2022-08-02

Family

ID=69333782

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911026031.3A Active CN110765086B (en) 2019-10-25 2019-10-25 Directory reading method and system for small files, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110765086B (en)


Patent Citations (14)

Publication number Priority date Publication date Assignee Title
CN101196929A (en) * 2007-12-29 2008-06-11 中国科学院计算技术研究所 Metadata management method for splitting name space
CN101673271A (en) * 2008-09-09 2010-03-17 青岛海信传媒网络技术有限公司 Distributed file system and file sharding method thereof
CN102541984A (en) * 2011-10-25 2012-07-04 曙光信息产业(北京)有限公司 File system of distributed type file system client side
CN103179185A (en) * 2012-12-25 2013-06-26 中国科学院计算技术研究所 Method and system for creating files in cache of distributed file system client
CN103176754A (en) * 2013-04-02 2013-06-26 浪潮电子信息产业股份有限公司 Reading and storing method for massive amounts of small files
CN103916465A (en) * 2014-03-21 2014-07-09 中国科学院计算技术研究所 Data pre-reading device based on distributed file system and method thereof
CN104965845A (en) * 2014-12-30 2015-10-07 浙江大华技术股份有限公司 Small file positioning method and system
CN105138545A (en) * 2015-07-09 2015-12-09 中国科学院计算技术研究所 Method and system for asynchronously pre-reading directory entries in distributed file system
CN105183839A (en) * 2015-09-02 2015-12-23 华中科技大学 Hadoop-based storage optimizing method for small file hierachical indexing
WO2017173844A1 (en) * 2016-04-05 2017-10-12 浪潮电子信息产业股份有限公司 Directory reading method, apparatus and system
CN107562757A (en) * 2016-07-01 2018-01-09 阿里巴巴集团控股有限公司 Inquiry, access method based on distributed file system, apparatus and system
CN107066503A (en) * 2017-01-05 2017-08-18 郑州云海信息技术有限公司 The method and device of magnanimity metadata burst distribution
CN107491545A (en) * 2017-08-25 2017-12-19 郑州云海信息技术有限公司 The catalogue read method and client of a kind of distributed memory system
CN109002503A (en) * 2018-06-29 2018-12-14 郑州云海信息技术有限公司 A kind of metadata read method, device, equipment and readable storage medium storing program for executing

Non-Patent Citations (2)

Title
INSUNG KANG ET AL.: "Multi-Node Global Directory Construction in Peer-to-Peer Systems", International Conference on Advanced Information Networking and Applications
谢莉祥: "分布式文件系统元数据存储技术研究" (Research on Metadata Storage Technology for Distributed File Systems), 万方数据知识服务平台 (Wanfang Data Knowledge Service Platform)

Cited By (5)

Publication number Priority date Publication date Assignee Title
CN111625514A (en) * 2020-05-22 2020-09-04 浪潮电子信息产业股份有限公司 Metadata management and control method, device, equipment and storage medium
CN111625514B (en) * 2020-05-22 2022-06-10 浪潮电子信息产业股份有限公司 Metadata management and control method, device, equipment and storage medium
CN113760853A (en) * 2021-08-16 2021-12-07 联想凌拓科技有限公司 Directory processing method, server and storage medium
CN113760853B (en) * 2021-08-16 2024-02-20 联想凌拓科技有限公司 Directory processing method, server and storage medium
CN113986838A (en) * 2021-12-28 2022-01-28 成都云祺科技有限公司 Mass small file processing method and system based on file system and storage medium

Also Published As

Publication number Publication date
CN110765086B (en) 2022-08-02

Similar Documents

Publication Publication Date Title
CN110765086B (en) Directory reading method and system for small files, electronic equipment and storage medium
JP6517263B2 (en) System, method and storage medium for improving access to search results
JP6073366B2 (en) Application driven CDN pre-caching
CN104424199B (en) searching method and device
JP6195098B2 (en) File reading method, storage device, and reading system
JP5826266B2 (en) Method and apparatus for handling nested fragment caching of web pages
CN107197359B (en) Video file caching method and device
CN106339508B (en) WEB caching method based on paging
WO2015078231A1 (en) Method for generating webpage template and server
JP4202129B2 (en) Method and apparatus for prefetching referenced resources
CN101719936A (en) Method, device and cache system for providing file downloading service
JP2007510224A (en) A method for determining the segment priority of multimedia content in proxy cache
CN106933965B (en) Method for requesting static resource
CN103051706A (en) Dynamic webpage request processing system and method for dynamic website
US10007731B2 (en) Deduplication in search results
CN111273863B (en) Cache management
CN107015978B (en) Webpage resource processing method and device
CN101247405A (en) Method, system and device for calculating download time and resource downloading
CN113127420B (en) Metadata request processing method, device, equipment and medium
KR102042431B1 (en) Method for recording metadata for web caching in cloud environment and web server using the same
CN113411364A (en) Resource acquisition method and device and server
JP5261326B2 (en) Information search device and information search program
Chen et al. Exploiting FastDFS client-based small file merging
JP5444785B2 (en) Document conversion apparatus, document distribution system, document conversion method, and document conversion program
Serbinski et al. Improving the delivery of multimedia embedded in web pages

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant