CN113485971B - Method and device for using cache setting of file system - Google Patents

Method and device for using cache setting of file system

Info

Publication number
CN113485971B
Authority
CN
China
Prior art keywords
file
cache
unit
life cycle
resource file
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110676550.5A
Other languages
Chinese (zh)
Other versions
CN113485971A (en)
Inventor
胡文
黄金华
于嘉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
ASR Microelectronics Co Ltd
Original Assignee
ASR Microelectronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ASR Microelectronics Co Ltd
Priority to CN202110676550.5A
Publication of CN113485971A
Application granted
Publication of CN113485971B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/10 File systems; File servers
    • G06F 16/17 Details of further file system functions
    • G06F 16/172 Caching, prefetching or hoarding of files
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/10 File systems; File servers
    • G06F 16/13 File access structures, e.g. distributed indices
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/10 File systems; File servers
    • G06F 16/14 Details of searching files based on file metadata
    • G06F 16/148 File search processing
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The application discloses a method for setting and using a cache in a file system. A second cache space is set up in the file system. According to the path and file name of an input resource file, the second cache space is traversed to find the logical address of the file header of the resource file; each time the second cache space is traversed, the life cycle values of all second cache units are reduced by a fixed frequency weight value. If the lookup misses, the file system is traversed to find the logical address of the file header of the resource file, and the query result is stored in the second cache unit with the smallest life cycle value, specifically: the path and file name of the resource file, the logical address of the file header of the resource file, the time overhead value of the resource file, and the initial life cycle value of that second cache unit. The method and device improve query efficiency.

Description

Method and device for using cache setting of file system
Technical Field
The application relates to a method for setting and using a cache in the file system of an electronic device.
Background
The GUI (graphical user interface) provided by an electronic device typically supports a sliding effect: when the user's finger slides on the touch screen, the GUI follows the slide. For example, when the user slides a finger to the left, the current GUI gradually disappears with the leftward slide while the new GUI gradually appears, until the new GUI is fully displayed. During the slide, the coordinate positions of the pictures in the GUI may change. Every time the GUI is refreshed, the file format and binary data of each picture in the GUI must be read, and the picture must be re-rendered at its new coordinate position. If the electronic device stores its pictures in a file system, a lookup method provided by the file system must traverse from the beginning to find the file header of a picture in the GUI and obtain the head address of the picture's binary data; the picture file is then opened, its binary data is read, and the picture's file format is obtained from that binary data.
Various resource files used by the GUI, including picture files, audio files and video files, are stored in the file system of the electronic device. Referring to FIG. 1, the conventional method for setting and using a cache in a file system includes the following steps.
Step S11: traverse the file system to find the logical address of the file header of the resource file according to the path and the file name of the input resource file.
Step S12: set up a first cache space in the file system in the form of a linked list. Each node of the linked list is called a cache block, and each cache block contains several first cache units. Each first cache unit records the logical address of the file header of a resource file and the offset address of that resource file's current read pointer. Not every resource file has a corresponding first cache unit; only an opened resource file does. Each resource file has a read pointer, and the offset address of the current read pointer is the offset between the position where the resource file was last read and the first address of the resource file. If the resource file has not yet been opened, the offset address of its current read pointer is zero.
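For illustration only, the conventional first-cache structure described in step S12 could be laid out roughly as in the following C sketch; the type names, field names and the number of units per cache block are assumptions made for the example, not definitions from the application.

```c
#include <stdint.h>
#include <stdbool.h>

#define UNITS_PER_BLOCK 16  /* assumed number of first cache units per cache block */

/* One first cache unit; only an opened resource file owns one. */
typedef struct {
    uint32_t header_logical_addr; /* logical address of the file header of the resource file      */
    uint32_t read_ptr_offset;     /* offset address of the current read pointer (zero if never read) */
    bool     in_access;           /* true while the resource file is being accessed                */
} first_cache_unit_t;

/* One node ("cache block") of the linked list that forms the first cache space. */
typedef struct cache_block {
    first_cache_unit_t units[UNITS_PER_BLOCK];
    struct cache_block *next;
} cache_block_t;
```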
Step S13: traverse every first cache unit in every cache block, starting from the head of the linked list in the first cache space, according to the logical address of the file header of the input resource file.
If the offset address of the current read pointer of the resource file is found in a certain first cache unit, that first cache unit is set to the being-accessed state. Meanwhile, the logical address of the file header of the resource file is used to obtain the head address of the binary data of the resource file; the offset address of the current read pointer is added to this head address to obtain the start address of the current access, and the required binary data of the resource file is read from the file system starting at that address.
If the offset address of the current read pointer of the resource file cannot be found in the first cache space, the logical address of the file header of the resource file and the offset address of its current read pointer (initial value zero) are newly recorded in an available first cache unit. Meanwhile, the logical address of the file header of the resource file is used to obtain the head address of the binary data of the resource file; the offset address of the current read pointer is added to this head address to obtain the start address of the current access, and the required binary data is read from the file system starting at that address. A first cache unit that is not in the being-accessed state is an available first cache unit. If all first cache units in the first cache space are being accessed, a new cache block is requested; all first cache units in the new cache block are not being accessed and are therefore available.
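The address arithmetic used in step S13, both on a hit and after a new first cache unit is filled in, can be summarized as follows; resolve_data_head is a hypothetical helper standing in for whatever routine maps the file header's logical address to the head address of the file's binary data.

```c
#include <stdint.h>

/* Hypothetical helper: maps the logical address of a file header to the
 * head address of that file's binary data (implementation not shown here). */
extern uint32_t resolve_data_head(uint32_t header_logical_addr);

/* Start address of the current access = head address of the binary data
 * plus the offset address of the current read pointer (step S13).          */
static uint32_t current_read_start(uint32_t header_logical_addr, uint32_t read_ptr_offset)
{
    uint32_t data_head = resolve_data_head(header_logical_addr);
    return data_head + read_ptr_offset; /* binary data is read from here */
}
```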
Step S14: when a resource file is closed, the first cache unit corresponding to that resource file is found in the first cache space and set to the not-accessed state. When all first cache units in a cache block of the first cache space are in the not-accessed state, the cache block is removed from the linked list and its space is released.
Electronic devices typically use flash memory as non-volatile storage, accessed through an SPI (Serial Peripheral Interface) interface. In step S11, data must be read through the SPI interface, which is slower than reading directly from memory. Step S11 therefore becomes very time-consuming when resource files are accessed frequently.
Disclosure of Invention
The technical problem to be solved by the application is to provide a method for setting and using a cache in a file system that overcomes the time-consumption bottleneck when the file system accesses resource files frequently.
To solve this technical problem, the application provides a method for setting and using a cache in a file system, comprising the following steps. Step S21: set up a second cache space in the file system, the second cache space comprising a plurality of second cache units; each second cache unit records the path and file name of a resource file, the logical address of the file header of the resource file, the time overhead value of the resource file, and the life cycle value of the second cache unit; the time overhead value of a resource file is the time spent finding the file header of the resource file in the file system; the life cycle value of a second cache unit characterizes how long the data in that unit is retained: the larger the life cycle value, the longer the data in the second cache unit is kept, and vice versa. Step S22: traverse the second cache space to find the logical address of the file header of the resource file according to the path and file name of the input resource file; each time the second cache space is traversed, the life cycle values of all second cache units are reduced by a fixed frequency weight value. If the logical address of the file header of the resource file is found in a certain second cache unit, that unit is called the matched second cache unit, and the new life cycle value of the matched second cache unit = the time overhead value of the resource file × the fixed time weight value. If the logical address of the file header of the resource file is not found in any second cache unit, the process proceeds to step S23. Step S23: traverse the file system to find the logical address of the file header of the resource file according to the path and file name of the input resource file. Step S24: store the query result of step S23 in the second cache unit with the smallest life cycle value, specifically storing: the path and file name of the resource file, the logical address of the file header of the resource file, the time overhead value of the resource file, and the initial life cycle value of that second cache unit.
Further, in step S21, the more frequently a certain second cache unit is used and the greater its query time overhead, the larger the life cycle value of that second cache unit, and vice versa.
Further, in step S21, the capacity of the second cache space is fixed and contains a fixed number of second cache units.
Further, in step S21, when the second cache space is insufficient, the second cache unit with the smallest life cycle value is selected to have its recorded content updated.
Further, in step S22, if the original life cycle value of a certain second cache unit minus the fixed frequency weight value is smaller than the minimum life cycle value, the original life cycle value of that second cache unit is kept unchanged.
Further, in step S22, if the time overhead value of the resource file × the fixed time weight value is greater than the maximum life cycle value, the new life cycle value of the matched second cache unit = the maximum life cycle value.
Further, in step S22, the life cycle value of a second cache unit that is never matched keeps decreasing with each lookup, while the life cycle value of a second cache unit matched in a given lookup is reset to a positive number; as a result, the second cache unit holding a less frequently used resource file has a smaller life cycle value, and among resource files used equally often, the second cache unit holding the resource file with the smaller time overhead value has the smaller life cycle value.
Further, in step S24, if there are unused second cache units, the life cycle value of an unused second cache unit is smaller than that of any used second cache unit, so that selecting the second cache unit with the smallest life cycle value selects an unused second cache unit.
Further, in step S24, if all second cache units are used and there are several second cache units with the smallest life cycle value, the second cache unit with the smallest life cycle value that is queried first is selected.
The application also provides a device for setting and using a cache in a file system, comprising a second cache setting unit, a second cache search unit, a file system search unit and a second cache storage unit. The second cache setting unit is used to set up a second cache space in the file system, the second cache space comprising a plurality of second cache units; each second cache unit records the path and file name of a resource file, the logical address of the file header of the resource file, the time overhead value of the resource file, and the life cycle value of the second cache unit; the time overhead value of a resource file is the time spent finding the file header of the resource file in the file system; the life cycle value of a second cache unit characterizes how long the data in that unit is retained: the larger the life cycle value, the longer the data in the second cache unit is kept, and vice versa. The second cache search unit is used to traverse the second cache space to find the logical address of the file header of the resource file according to the path and file name of the input resource file; each time the second cache space is traversed, the life cycle values of all second cache units are reduced by a fixed frequency weight value; if the logical address of the file header of the resource file is found in a certain second cache unit, that unit is called the matched second cache unit, and the new life cycle value of the matched second cache unit = the time overhead value of the resource file × the fixed time weight value; if the logical address of the file header of the resource file is not found in any second cache unit, processing is continued by the file system search unit. The file system search unit is used to traverse the file system to find the logical address of the file header of the resource file according to the path and file name of the input resource file. The second cache storage unit is used to store the query result of the file system search unit in the second cache unit with the smallest life cycle value, specifically storing: the path and file name of the resource file, the logical address of the file header of the resource file, the time overhead value of the resource file, and the initial life cycle value of that second cache unit.
The technical effect of the method and device is that a second cache space is added in the file system together with an update strategy for its second cache units; when the file system accesses resource files frequently, the second cache space reduces query time and improves query efficiency.
Drawings
FIG. 1 is a flow chart of a prior-art method for setting and using a cache in a file system.
FIG. 2 is a flow chart of the method for setting and using a cache in a file system according to the present application.
FIG. 3 is a schematic structural diagram of the device for setting and using a cache in a file system according to the present application.
The reference numerals in the drawings denote: a second cache setting unit 21, a second cache search unit 22, a file system search unit 23, and a second cache storage unit 24.
Detailed Description
Referring to FIG. 2, the method for setting and using a cache in a file system according to the present application includes the following steps.
Step S21: set up a second cache space in the file system, the second cache space comprising a plurality of second cache units. Each second cache unit records the path and file name of a resource file, the logical address of the file header of the resource file, the time overhead value of the resource file, and the life cycle value of the second cache unit. Not every resource file has a corresponding second cache unit; only some of the opened resource files do.
The time overhead value of a resource file is the time spent finding the file header of the resource file in the file system, measured for example in milliseconds. The minimum time overhead value of a resource file is, for example, 1 millisecond.
The life cycle value of a second cache unit characterizes how long the data in that unit is retained. The more frequently a second cache unit is used and the greater its query time overhead, the larger its life cycle value, and vice versa. When the second cache space is insufficient, the second cache unit with the smallest life cycle value is selected to have its recorded content updated. The larger the life cycle value, the longer the data in the second cache unit is kept, and vice versa. Preferably, the life cycle value of a second cache unit has a maximum value and a minimum value, and the life cycle value may be positive, zero or negative.
Preferably, the capacity of the second cache space is fixed and contains a fixed number of second cache units.
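As a concrete illustration of step S21, the second cache space could be declared as a fixed-size array of records such as the following C sketch; the field names, the capacity and the life cycle bounds are assumptions chosen for the example, not values prescribed by the application.

```c
#include <stdint.h>

#define SECOND_CACHE_UNITS 64     /* assumed fixed number of second cache units      */
#define MAX_PATH_LEN       128    /* assumed maximum length of "path + file name"    */
#define LIFE_CYCLE_MIN     (-100) /* assumed minimum life cycle value                */
#define LIFE_CYCLE_MAX     1000   /* assumed maximum life cycle value                */

/* One second cache unit as described in step S21. */
typedef struct {
    char     path_name[MAX_PATH_LEN]; /* path and file name of the resource file                 */
    uint32_t header_logical_addr;     /* logical address of the file header of the resource file */
    uint32_t time_overhead_ms;        /* time spent finding the file header in the file system   */
    int32_t  life_cycle;              /* life cycle value; may be positive, zero or negative     */
} second_cache_unit_t;

/* The second cache space: fixed capacity, fixed number of units. */
static second_cache_unit_t second_cache[SECOND_CACHE_UNITS];
```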
Step S22: traverse the second cache space to find the logical address of the file header of the resource file according to the path and the file name of the input resource file. Each time the second cache space is traversed, the life cycle values of all second cache units are reduced by a fixed frequency weight value. The fixed frequency weight value is, for example, 1, and can be adjusted for the actual application scenario. However, if the original life cycle value of a certain second cache unit minus the fixed frequency weight value is smaller than the minimum life cycle value, the original life cycle value of that second cache unit is kept unchanged.
If the logical address of the file header of the resource file is found in a certain second cache unit, that second cache unit is called the matched second cache unit, and the new life cycle value of the matched second cache unit = the time overhead value of the resource file × the fixed time weight value. The fixed time weight value is, for example, 1, and can be adjusted for the actual application scenario. However, if the time overhead value of the resource file × the fixed time weight value is greater than the maximum life cycle value, the new life cycle value of the matched second cache unit = the maximum life cycle value. Steps S12 to S14 are then performed. In this case there is no need to query the file system for the logical address of the file header of the resource file; when the file system is relatively large, the lookup time saved by the second cache space is significant.
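A minimal sketch of the step S22 lookup is given below; it reuses the second_cache array, the second_cache_unit_t type and the life cycle bounds from the sketch under step S21, and the weight values of 1 follow the example values in the text. Everything else is an assumption, not the application's prescribed implementation.

```c
#include <stdint.h>
#include <string.h>

#define FREQ_WEIGHT 1   /* fixed frequency weight value (example value from the text) */
#define TIME_WEIGHT 1   /* fixed time weight value (example value from the text)      */

/* Step S22: traverse the second cache space for the given path and file name.
 * Returns the index of the matched second cache unit, or -1 on a miss.
 * Assumes second_cache[], second_cache_unit_t, SECOND_CACHE_UNITS,
 * LIFE_CYCLE_MIN and LIFE_CYCLE_MAX from the step S21 sketch are in scope.    */
int second_cache_lookup(const char *path_name, uint32_t *header_logical_addr)
{
    int hit = -1;

    for (int i = 0; i < SECOND_CACHE_UNITS; i++) {
        /* Every traversal ages every unit by the fixed frequency weight value,
         * but a unit whose value would drop below the minimum is left unchanged. */
        if (second_cache[i].life_cycle - FREQ_WEIGHT >= LIFE_CYCLE_MIN)
            second_cache[i].life_cycle -= FREQ_WEIGHT;

        if (hit < 0 && strcmp(second_cache[i].path_name, path_name) == 0)
            hit = i;
    }

    if (hit >= 0) {
        /* Matched unit: new life cycle = time overhead * time weight, capped at the maximum. */
        int32_t renewed = (int32_t)(second_cache[hit].time_overhead_ms) * TIME_WEIGHT;
        second_cache[hit].life_cycle = (renewed > LIFE_CYCLE_MAX) ? LIFE_CYCLE_MAX : renewed;
        *header_logical_addr = second_cache[hit].header_logical_addr;
    }
    return hit;
}
```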
If the logical address of the file header of the resource file is not found in any of the second cache units, the process proceeds to step S23.
Step S23: traverse the file system to find the logical address of the file header of the resource file according to the path and the file name of the input resource file. Step S23 is the same as step S11.
Step S24: store the query result of step S23 in the second cache unit with the smallest life cycle value, specifically storing: the path and file name of the resource file, the logical address of the file header of the resource file, the time overhead value of the resource file, and the life cycle value of that second cache unit (initial value 0). Steps S12 to S14 are then performed. In this step, if an unused second cache unit exists, its life cycle value after step S22 is smaller than that of any used second cache unit, so selecting the second cache unit with the smallest life cycle value selects an unused one, and the query result of step S23 is stored in an unused second cache unit. If all second cache units are in use, the second cache unit with the smallest life cycle value is selected, its original recorded content is deleted, and the query result of step S23 is stored instead. If several second cache units share the smallest life cycle value, the one queried first in sequence is taken to store the query result of step S23.
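The selection and replacement rule of step S24 can be sketched as below, continuing the same assumed definitions; the strictly-smaller comparison implements the tie-break of taking the first unit found with the smallest life cycle value, and the initial life cycle value of 0 follows the text.

```c
#include <stdint.h>
#include <string.h>

/* Step S24: store a file-system query result in the second cache unit with the
 * smallest life cycle value. Assumes second_cache[], SECOND_CACHE_UNITS and
 * MAX_PATH_LEN from the step S21 sketch are in scope.                          */
void second_cache_store(const char *path_name, uint32_t header_logical_addr,
                        uint32_t time_overhead_ms)
{
    int victim = 0;
    for (int i = 1; i < SECOND_CACHE_UNITS; i++) {
        if (second_cache[i].life_cycle < second_cache[victim].life_cycle)
            victim = i; /* strictly smaller, so the first minimum found is kept on ties */
    }

    strncpy(second_cache[victim].path_name, path_name, MAX_PATH_LEN - 1);
    second_cache[victim].path_name[MAX_PATH_LEN - 1] = '\0';
    second_cache[victim].header_logical_addr = header_logical_addr;
    second_cache[victim].time_overhead_ms    = time_overhead_ms;
    second_cache[victim].life_cycle          = 0; /* initial life cycle value */
}
```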
The various resource files used by GUIs in electronic devices can be broadly divided into two classes: the first class is resource files that are not updated periodically, and the second class is resource files that are updated periodically. In practice, the non-periodically updated resource files are accessed infrequently but are numerous, while the periodically updated resource files are accessed frequently. If insufficient second cache space were handled in the traditional way of updating the oldest cache unit, then once the number of non-periodically updated resource files exceeded the number of second cache units, the data of the periodically updated resource files could no longer be found in the second cache space and would have to be looked up by traversing the file system. Using the life cycle value to determine how long data is kept in a second cache unit optimizes the update policy of the second cache space and avoids the problem of the second cache units corresponding to the frequently accessed second-class resource files constantly being replaced by the rarely accessed first-class resource files.
In step S22, the life cycle value of a second cache unit that is not matched keeps decreasing with each lookup: the new life cycle value of an unmatched second cache unit = its current life cycle value − the frequency weight value. The life cycle value of a second cache unit matched in a given lookup is reset to a positive number: the new life cycle value of the matched second cache unit = the time overhead value of the resource file × the fixed time weight value. As a result, the second cache unit holding a less frequently used resource file has a smaller life cycle value, and among resource files used equally often, the second cache unit holding the resource file with the smaller time overhead value has the smaller life cycle value. The application therefore combines the use frequency and the time overhead value of a resource file to determine how long the resource file stays in its second cache unit.
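Restated compactly (the notation below is ours, not the application's), the two update rules of step S22 are:

```latex
% L_i          : life cycle value of second cache unit i
% w_f          : fixed frequency weight value;  w_t : fixed time weight value
% t            : time overhead value of the matched resource file
% L_min, L_max : minimum and maximum life cycle values
\[
L_i \leftarrow
\begin{cases}
L_i - w_f, & \text{if } L_i - w_f \ge L_{\min} \quad \text{(every traversal)}\\
L_i,       & \text{otherwise (kept unchanged)}
\end{cases}
\qquad
L_{\mathrm{hit}} \leftarrow \min\!\left(t \cdot w_t,\; L_{\max}\right)
\]
```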
Referring to FIG. 3, the device for setting and using a cache in a file system according to the present application includes a second cache setting unit 21, a second cache search unit 22, a file system search unit 23, and a second cache storage unit 24. The device shown in FIG. 3 corresponds to the method shown in FIG. 2.
The second cache setting unit 21 is configured to set up a second cache space in the file system, the second cache space comprising a plurality of second cache units. Each second cache unit records the path and file name of a resource file, the logical address of the file header of the resource file, the time overhead value of the resource file, and the life cycle value of the second cache unit.
The second cache search unit 22 is configured to traverse the second cache space to find the logical address of the file header of the resource file according to the path and file name of the input resource file. Each time the second cache space is traversed, the life cycle values of all second cache units are reduced by a fixed frequency weight value. If the logical address of the file header of the resource file is found in a certain second cache unit, that unit is called the matched second cache unit, and the new life cycle value of the matched second cache unit = the time overhead value of the resource file × the fixed time weight value. If the logical address of the file header of the resource file is not found in any second cache unit, processing is continued by the file system search unit 23.
The file system search unit 23 is configured to traverse the file system to find the logical address of the file header of the resource file according to the path and file name of the input resource file.
The second cache storage unit 24 is configured to store the query result of the file system search unit 23 in the second cache unit with the smallest life cycle value, specifically storing: the path and file name of the resource file, the logical address of the file header of the resource file, the time overhead value of the resource file, and the life cycle value of that second cache unit (initial value 0).
The application provides a method for setting up and using a second cache space in a file system, and determines how long a file stays in a second cache unit by jointly considering two factors: how often the second cache unit is used and the time overhead of querying the resource file in the file system. The update strategy of the second cache space keeps frequently accessed files, or files accessed equally often but more expensive to query, in the second cache space for a longer time. When a file is queried repeatedly, the logical address of its file header can largely be obtained from the second cache space, which saves time overhead, avoids traversing the file system, improves query efficiency and shortens query time.
The foregoing is merely a preferred embodiment of the present application and is not intended to limit the present application. Those skilled in the art may make various modifications and changes to the present application. Any modification, equivalent replacement or improvement made within the spirit and principles of the present application shall fall within the protection scope of the present application.

Claims (10)

1. A method for setting and using a cache in a file system, characterized by comprising the following steps:
step S21: setting a second cache space in the file system, the second cache space comprising a plurality of second cache units; each second cache unit recording a path and a file name of a resource file, a logical address of the file header of the resource file, a time overhead value of the resource file, and a life cycle value of the second cache unit; the time overhead value of the resource file being the time spent finding the file header of the resource file in the file system; the life cycle value of the second cache unit being used to characterize how long the data in the second cache unit is retained, wherein the larger the life cycle value, the longer the data in the second cache unit is kept, and vice versa;
step S22: traversing the second cache space to find the logical address of the file header of the resource file according to the path and the file name of an input resource file; each time the second cache space is traversed, reducing the life cycle values of all second cache units by a fixed frequency weight value;
if the logical address of the file header of the resource file is found in a certain second cache unit, calling that second cache unit the matched second cache unit, the new life cycle value of the matched second cache unit = the time overhead value of the resource file × the fixed time weight value;
if the logical address of the file header of the resource file is not found in any of the second cache units, proceeding to step S23;
step S23: traversing the file system to find the logical address of the file header of the resource file according to the path and the file name of the input resource file;
step S24: storing the query result of step S23 in the second cache unit with the smallest life cycle value, specifically storing: the path and the file name of the resource file, the logical address of the file header of the resource file, the time overhead value of the resource file, and the initial life cycle value of the second cache unit.
2. The method according to claim 1, wherein in step S21, the more frequently a certain second cache unit is used and the greater its query time overhead, the larger the life cycle value of that second cache unit, and vice versa.
3. The method according to claim 1, wherein in step S21, the capacity of the second cache space is fixed and contains a fixed number of second cache units.
4. The method according to claim 3, wherein in step S21, when the second cache space is insufficient, the second cache unit with the smallest life cycle value is selected to have its recorded content updated.
5. The method according to claim 1, wherein in step S22, if the original life cycle value of a certain second cache unit minus the fixed frequency weight value is smaller than the minimum life cycle value, the original life cycle value of that second cache unit is kept unchanged.
6. The method according to claim 1, wherein in step S22, if the time overhead value of the resource file × the fixed time weight value is greater than the maximum life cycle value, the new life cycle value of the matched second cache unit = the maximum life cycle value.
7. The method according to claim 1, wherein in step S22, the life cycle value of a second cache unit that is never matched keeps decreasing with each lookup, while the life cycle value of a second cache unit matched in a given lookup is reset to a positive number; as a result, the second cache unit holding a less frequently used resource file has a smaller life cycle value, and among resource files used equally often, the second cache unit holding the resource file with the smaller time overhead value has the smaller life cycle value.
8. The method according to claim 1, wherein in step S24, if there are unused second cache units, the life cycle value of an unused second cache unit is smaller than that of any used second cache unit, so that selecting the second cache unit with the smallest life cycle value selects an unused second cache unit.
9. The method according to claim 1, wherein in step S24, if all second cache units are used and there are several second cache units with the smallest life cycle value, the second cache unit with the smallest life cycle value that is queried first is selected.
10. A device for setting and using a cache in a file system, characterized by comprising a second cache setting unit, a second cache search unit, a file system search unit and a second cache storage unit;
the second cache setting unit being used to set a second cache space in the file system, the second cache space comprising a plurality of second cache units; each second cache unit recording a path and a file name of a resource file, a logical address of the file header of the resource file, a time overhead value of the resource file, and a life cycle value of the second cache unit; the time overhead value of the resource file being the time spent finding the file header of the resource file in the file system; the life cycle value of the second cache unit being used to characterize how long the data in the second cache unit is retained, wherein the larger the life cycle value, the longer the data in the second cache unit is kept, and vice versa;
the second cache search unit being used to traverse the second cache space to find the logical address of the file header of the resource file according to the path and the file name of an input resource file; each time the second cache space is traversed, the life cycle values of all second cache units being reduced by a fixed frequency weight value; if the logical address of the file header of the resource file is found in a certain second cache unit, that second cache unit being called the matched second cache unit, and the new life cycle value of the matched second cache unit = the time overhead value of the resource file × the fixed time weight value; if the logical address of the file header of the resource file is not found in any of the second cache units, processing being continued by the file system search unit;
the file system search unit being used to traverse the file system to find the logical address of the file header of the resource file according to the path and the file name of the input resource file;
the second cache storage unit being used to store the query result of the file system search unit in the second cache unit with the smallest life cycle value, specifically storing: the path and the file name of the resource file, the logical address of the file header of the resource file, the time overhead value of the resource file, and the initial life cycle value of the second cache unit.
CN202110676550.5A 2021-06-18 2021-06-18 Method and device for using cache setting of file system Active CN113485971B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110676550.5A CN113485971B (en) 2021-06-18 2021-06-18 Method and device for using cache setting of file system

Publications (2)

Publication Number / Publication Date
CN113485971A (en) 2021-10-08
CN113485971B (en) 2023-08-01

Family

ID=77934005

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110676550.5A Active CN113485971B (en) 2021-06-18 2021-06-18 Method and device for using cache setting of file system

Country Status (1)

Country Link
CN (1) CN113485971B (en)

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070083482A1 (en) * 2005-10-08 2007-04-12 Unmesh Rathi Multiple quality of service file system
JP5032210B2 (en) * 2007-06-08 2012-09-26 株式会社日立製作所 Control computer, computer system, and access control method

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103077199A (en) * 2012-12-26 2013-05-01 北京思特奇信息技术股份有限公司 File resource searching and locating method and device
CN111078585A (en) * 2019-11-29 2020-04-28 智器云南京信息科技有限公司 Memory cache management method, system, storage medium and electronic equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
A cache computation model in a structured P2P protocol; Xiong Wei; Xie Dongqing; Lu Shaofei; Journal of Chinese Computer Systems (小型微型计算机系统), No. 07; full text *

Also Published As

Publication number Publication date
CN113485971A (en) 2021-10-08

Similar Documents

Publication Publication Date Title
KR102564170B1 (en) Method and device for storing data object, and computer readable storage medium having a computer program using the same
US8225029B2 (en) Data storage processing method, data searching method and devices thereof
TW201832086A (en) Method for accessing metadata in hybrid memory module and hybrid memory module
CN104503703B (en) The treating method and apparatus of caching
JP2018163659A (en) Hardware based map acceleration using reverse cache tables
US11269956B2 (en) Systems and methods of managing an index
CN110555001B (en) Data processing method, device, terminal and medium
WO2013152678A1 (en) Method and device for metadata query
WO2012174906A1 (en) Data storage and search method and apparatus
CN107562367B (en) Method and device for reading and writing data based on software storage system
CN114138193B (en) Data writing method, device and equipment for partition naming space solid state disk
WO2013075306A1 (en) Data access method and device
US20090319721A1 (en) Flash memory apparatus and method for operating the same
CN103455284A (en) Method and device for reading and writing data
CN110795363A (en) Hot page prediction method and page scheduling method for storage medium
CN107133183B (en) Cache data access method and system based on TCMU virtual block device
CN102650972B (en) Date storage method, Apparatus and system
CN111831691A (en) Data reading and writing method and device, electronic equipment and storage medium
CN113485971B (en) Method and device for using cache setting of file system
CN111541617B (en) Data flow table processing method and device for high-speed large-scale concurrent data flow
US20150121033A1 (en) Information processing apparatus and data transfer control method
JPH08137754A (en) Disk cache device
CN108804571B (en) Data storage method, device and equipment
US20110258369A1 (en) Data Writing Method and Data Storage Device
CN104516827B (en) A kind of method and device of read buffer

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant