WO2020062986A1 - Method and device for reclaiming a file cache, storage medium, and terminal - Google Patents

Method and device for reclaiming a file cache, storage medium, and terminal

Info

Publication number
WO2020062986A1
WO2020062986A1 (PCT/CN2019/093720)
Authority
WO
WIPO (PCT)
Prior art keywords
priority
file cache
lru
lru list
list
Prior art date
Application number
PCT/CN2019/093720
Other languages
English (en)
French (fr)
Inventor
周明君
方攀
陈岩
Original Assignee
Oppo广东移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oppo广东移动通信有限公司
Publication of WO2020062986A1 publication Critical patent/WO2020062986A1/zh

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/12Replacement control
    • G06F12/121Replacement control using replacement algorithms
    • G06F12/123Replacement control using replacement algorithms with age lists, e.g. queue, most recently used [MRU] list or least recently used [LRU] list

Definitions

  • the embodiments of the present application relate to the technical field of terminals, for example, a method and device for recovering a file cache, a storage medium, and a terminal.
  • File caching is also known as page caching.
  • Caching file contents in memory can improve file access efficiency.
  • the embodiments of the present application provide a method, a device, a storage medium, and a terminal for recovering a file cache, which can optimize the recovery scheme of the file cache and improve the accuracy of recovery.
  • an embodiment of the present application provides a method for recovering a file cache, including:
  • the scanning speed corresponding to each LRU list is determined according to the priority of each LRU list, and the LRU list is scanned synchronously by using the scanning speed to recover the scanned file cache.
  • an embodiment of the present application further provides a recycling device for file cache.
  • the recycling device includes:
  • an event detection module, configured to detect that a file cache reclamation event is triggered;
  • a list obtaining module, configured to obtain least recently used (LRU) lists with different priorities, wherein the priority of an LRU list is determined according to the situation in which a file cache is accessed and the priority of the application accessing the file cache; and
  • a list scanning module, configured to determine a scanning speed corresponding to each LRU list according to the priority of each LRU list, and to scan the LRU lists synchronously at those speeds to reclaim the scanned file cache.
  • an embodiment of the present application provides a computer-readable storage medium on which a computer program is stored.
  • When the program is executed by a processor, the method for recovering a file cache according to the embodiment of the present application is implemented.
  • an embodiment of the present application provides a terminal, including a memory, a processor, and a computer program stored in the memory and executable by the processor.
  • When the processor executes the computer program, the method for recovering a file cache according to the embodiment of the present application is implemented.
  • FIG. 1 is a flowchart of a method for recovering a file cache according to an embodiment of the present application
  • FIG. 2 is a flowchart of another method for recovering a file cache according to an embodiment of the present application
  • FIG. 3 is a flowchart of another method for recovering a file cache according to an embodiment of the present application.
  • FIG. 4 is a structural block diagram of a file cache recovery device according to an embodiment of the present application.
  • FIG. 5 is a schematic structural diagram of a terminal according to an embodiment of the present application.
  • FIG. 6 is a structural block diagram of a smart phone according to an embodiment of the present application.
  • FIG. 1 is a flowchart of a method for recovering a file cache provided by an embodiment of the present application.
  • the method may be executed by a device for recovering a file cache.
  • The device may be implemented by software and/or hardware, and may generally be integrated in a terminal.
  • the method includes: step 110, step 120, and step 130.
  • In step 110, it is detected that a file cache reclamation event is triggered.
  • the terminal in the embodiment of the present application may include a device equipped with an operating system, such as a mobile phone, a tablet computer, a notebook computer, a computer, and a smart home appliance.
  • the type of the operating system is not limited in the embodiments of the present application, and may include, for example, an Android operating system, a Symbian operating system, a Windows operating system, and an Apple operating system.
  • the embodiments of this application will take the Android operating system as an example for subsequent description.
  • In the terminal's Android operating system, access to various files is very frequent.
  • File caching therefore plays a very important role: caching file contents in memory improves file access efficiency. However, memory capacity is limited and much smaller than that of external storage, so when memory is insufficient, caching a new file may require deleting other cached files first.
  • The file caching process is one of continuously deleting and adding data. Therefore, how to more accurately delete unnecessary file caches, in order to make room for data to be added to the file cache, is an important issue in file cache optimization.
  • the operating system uses the file cache as follows:
  • Free memory is used as the file cache, and the state of that part of memory changes from the free state to the available state. An available file cache can be reclaimed when the file cache needs to be reclaimed.
  • Memory is divided into allocation units of several bytes each, and a bitmap records whether each allocation unit is idle: the corresponding bit is set to 1 if the unit is allocated and 0 if it is not. The allocation units whose bit is 0 constitute the free memory.
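The bitmap bookkeeping described above can be sketched as follows; the allocator class, the unit count, and the first-fit policy are illustrative assumptions, not details taken from the patent.

```python
class BitmapAllocator:
    """Tracks fixed-size allocation units with a bitmap.

    A set bit (1) means the unit is allocated; a clear bit (0)
    means the unit is free. Unit size and count are illustrative.
    """

    def __init__(self, num_units):
        self.bits = [0] * num_units  # 0 = free, 1 = allocated

    def allocate(self):
        """Return the index of the first free unit, marking it allocated."""
        for i, bit in enumerate(self.bits):
            if bit == 0:
                self.bits[i] = 1
                return i
        return None  # no free memory left

    def free(self, index):
        """Mark the unit at `index` as free again."""
        self.bits[index] = 0

    def free_units(self):
        """Count the units currently free (bit == 0)."""
        return self.bits.count(0)
```

A reclamation event would consult `free_units()` to decide whether free memory has fallen below a watermark.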
  • All file caches are managed through least recently used (LRU) lists, and the file caches that have gone unaccessed for the longest time are reclaimed first.
  • The system maintains two LRU lists, storing active and inactive file caches respectively, and the reclamation operation always starts from the head of the inactive list. Whether a file cache is active can be determined from how frequently it is accessed.
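A minimal sketch of the active/inactive split, assuming (as in the Linux page cache) that a page enters the inactive list on first access and is promoted on a second access; the class and method names are hypothetical.

```python
from collections import OrderedDict


class TwoListCache:
    """Two-list LRU: pages enter the inactive list; a second access
    promotes them to the active list. Reclamation always evicts from
    the head (oldest end) of the inactive list."""

    def __init__(self):
        self.active = OrderedDict()
        self.inactive = OrderedDict()

    def access(self, page):
        if page in self.active:
            self.active.move_to_end(page)  # refresh recency
        elif page in self.inactive:
            del self.inactive[page]        # second access: promote
            self.active[page] = True
        else:
            self.inactive[page] = True     # first access: inactive

    def reclaim_one(self):
        """Evict the least recently used inactive page, if any."""
        if self.inactive:
            page, _ = self.inactive.popitem(last=False)
            return page
        return None
```

Pages accessed twice survive reclamation longer than pages touched only once, which is the behavior the two-list design is meant to provide.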
  • The file cache recovery scheme in the related technology takes the access time of the file cache as the sole indicator of whether to reclaim. In practice, a malicious application may frequently access certain file caches, making them appear recently used; such caches occupy memory for a long time and block the addition of new file caches. In other words, the related-art scheme has the defect that file cache reclamation is not accurate enough.
  • the file cache can be recycled based on the access situation of the file cache and the priority information of the application process accessing the file cache, so as to achieve more precise file cache recovery control.
  • There are many conditions that can trigger the file cache reclamation event, which are not specifically limited in the embodiments of the present application. For example, when free memory is detected to be below a preset threshold, the current free memory is determined to be insufficient and a reclamation event is triggered. As another example, when the storage space required by a file to be added to memory is detected to be greater than a set threshold, a reclamation event is triggered. As yet another example, reclamation events can be triggered periodically.
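The trigger conditions above can be sketched as a simple predicate; the threshold value and the function and constant names are assumptions for illustration.

```python
FREE_MEMORY_THRESHOLD = 64 * 1024 * 1024  # hypothetical 64 MiB watermark


def should_trigger_reclaim(free_bytes, incoming_file_bytes=0):
    """Return True when a file-cache reclamation event should fire.

    Implements two of the example conditions from the text: free
    memory below a preset threshold, or an incoming file needing
    more space than is currently free.
    """
    if free_bytes < FREE_MEMORY_THRESHOLD:
        return True
    if incoming_file_bytes > free_bytes:
        return True
    return False
```

A periodic trigger, the third example, would simply call this on a timer regardless of the inputs.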
  • In step 120, least recently used (LRU) lists with different priorities are obtained.
  • the priority of the LRU list is determined according to the situation in which the file cache is accessed and the priority of the application accessing the file cache.
  • The access situation includes the access frequency or the access time. For example, a file cache with a high access frequency has a higher priority, or a file cache accessed at a time the user cares about has a higher priority.
  • N LRU lists are set for storing file caches of different priorities.
  • N is a positive integer. If N is too small, the reclamation accuracy of the present application is not improved; if N is too large, jumping between the LRU lists increases system overhead. Therefore, the value of N can be determined from test results.
  • Each file cache can be assigned a priority based on how often these file caches are accessed and the priority of the application process accessing the file cache, and file caches with the same priority are stored in the same LRU list.
  • The priority of a file cache can serve as the priority of its LRU list. For example, suppose N is 3. The file caches in the inactive LRU list are divided into three categories by activity (for example, access frequency, from which the priority of the file cache can be determined) and stored in three LRU lists, denoted LRU[0], LRU[1], and LRU[2]; each list represents one priority. According to the activity of the file cache, the categories are assigned the first priority (highest), second priority, and third priority (lowest), where the first priority is higher than the second and the second is higher than the third.
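The partitioning of inactive file caches into N priority lists might look like the following sketch, where each cache's priority index is assumed to have already been derived from access frequency and application priority; all names are illustrative.

```python
def build_priority_lru_lists(caches, n=3):
    """Partition inactive file caches into n LRU lists by priority.

    `caches` maps a cache id to its priority index, where 0 is the
    highest priority (LRU[0]) and n-1 the lowest. Returns a list of
    lists: result[i] is LRU[i]. Insertion order within each list is
    preserved, standing in for recency order.
    """
    lists = [[] for _ in range(n)]
    for cache_id, prio in caches.items():
        lists[prio].append(cache_id)
    return lists
```

With N = 3 this produces the LRU[0], LRU[1], LRU[2] arrangement described above.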
  • Files with the same priority may be stored discontinuously in the LRU list. For example, for the lowest-priority LRU list, the file cache is written into two consecutive allocation units, one allocation unit is left empty, two more consecutive units are filled, two units are left empty, and two more consecutive units are filled; the file cache is added to the lowest-priority LRU list by repeating this storage rule.
  • For the LRU list one level above the lowest priority, two consecutive allocation units are left empty before the file cache data is written into one allocation unit; two more consecutive units are left empty, then the data is stored into the next unit; then three consecutive units are left empty before the data is stored into the next unit, and so on.
  • The file cache is added to this higher-level LRU list according to the above storage rule.
  • For the LRU list two levels above the lowest priority, six consecutive allocation units are left empty before the file cache data is written into one allocation unit, and another six consecutive units are left empty before the data is stored into the next unit, and so on; the file cache is added to this list according to the above storage rule.
  • Table 1 List of LRUs for inactive file caches.
  • The storage rules for adding the file cache to an LRU list are not limited to the methods listed in the above examples. Any other rule that keeps the list heads aligned and ensures that, in the table formed by the LRU lists, only one LRU list has a file cache added in any given column of allocation units also belongs to the storage rules described in this application.
  • a plurality of least recently used LRU lists with different priorities are obtained from the memory.
  • step 130 the scanning speed corresponding to each LRU list is determined according to the priority of each LRU list, and the LRU list is scanned synchronously by using the scanning speed to recover the scanned file cache.
  • The scanning speeds can be sorted to obtain a scanning-speed sequence.
  • The LRU lists can also be sorted by priority to obtain an LRU-list sequence, and the mapping between the LRU-list sequence and the scanning-speed sequence is then determined. For example, the highest-priority LRU list corresponds to the lowest speed value in the sequence and the lowest-priority list corresponds to the highest speed value; that is, the mapping is established such that the higher the priority, the lower the scanning speed.
  • Synchronously scanning the LRU lists means starting from the head of each list and scanning all lists simultaneously, each at its own speed, until the end of every list is reached.
  • The LRU lists are scanned synchronously using the scanning speed corresponding to each list to reclaim the scanned file cache. For example, assuming the scanning speeds are V0, V1, and V2 with V0 > V1 > V2, the highest-priority LRU[0] list corresponds to the lowest scanning speed V2, the second-highest-priority LRU[1] list corresponds to V1, and the lowest-priority LRU[2] list corresponds to the highest scanning speed V0.
  • That is, the LRU[0] list is scanned at speed V2, the LRU[1] list at speed V1, and the LRU[2] list at speed V0.
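The synchronous scan at per-list speeds can be sketched as follows, treating a scanning speed as a number of entries consumed per round; this is a simplified model for illustration, not the kernel implementation.

```python
def synchronous_scan(lru_lists, speeds):
    """Scan several LRU lists in parallel at different speeds.

    lru_lists[i] is consumed `speeds[i]` entries per round, starting
    from each list's head, until every list is exhausted. Returns the
    caches in the order they are reclaimed; a low-priority list gets
    a high speed, so more of its caches are reclaimed early.
    """
    positions = [0] * len(lru_lists)
    reclaimed = []
    while any(p < len(l) for p, l in zip(positions, lru_lists)):
        for i, lru in enumerate(lru_lists):
            # take the next `speeds[i]` entries from list i's head
            step = lru[positions[i]:positions[i] + speeds[i]]
            reclaimed.extend(step)
            positions[i] += speeds[i]
    return reclaimed
```

In the first round a list scanned at speed 4 loses four caches while a list scanned at speed 1 loses one, matching the intent that low-priority caches are reclaimed more aggressively while high-priority caches are still reclaimed eventually.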
  • A reclamation operation is performed on the scanned file cache of each LRU list; the cache sequence numbers are counted from 0.
  • Table 2 is a relationship table between the number of scans (or scan time) and the LRU list.
  • [0], [1], [2], [3], [4], [5], [6], [7], ... represent the cache sequence number of the file cache being recovered.
  • Because the file cache is not stored contiguously in the LRU lists, reclaiming the scanned file caches does not only reclaim the file caches in the quickly scanned LRU[2] list.
  • The low-priority LRU list is always fully traversed before the high-priority LRU lists. After the traversal of the lowest-priority list is completed, the list one level higher is automatically demoted to become the lowest-priority LRU list.
  • In the technical solution of this embodiment, when a file cache reclamation event is detected to be triggered, least recently used (LRU) lists with different priorities are obtained, wherein the priority of an LRU list is determined according to the access situation of the file cache and the priority of the application accessing the file cache; the scanning speed corresponding to each LRU list is determined according to its priority, and the LRU lists are scanned synchronously at those speeds to reclaim the scanned file caches.
  • The priority of an LRU list can be determined according to the access situation of the file cache and the priority of the application accessing the file cache, and based on that priority, the scanning speed of the scan operation on each LRU list is determined. Scanning the LRU lists synchronously at those speeds avoids the situation in which the file cache in a high-priority list is never reclaimed, achieving a more precise file cache reclamation control method and improving reclamation accuracy.
  • In an embodiment, before detecting that the file cache reclamation event is triggered, the method further includes: obtaining the access frequency of the file cache; determining target file caches whose access frequency is less than a preset threshold; setting a priority for each target file cache according to its access frequency; and storing target file caches with the same priority in the same LRU list according to a preset storage rule, with the priority of the file caches in an LRU list used as the priority of the corresponding LRU list.
  • This additional technical solution can be regarded as the initialization step of the file cache reclamation function: the access frequency of file caches in the system is counted, the file caches are classified by frequency, caches with frequency above a preset threshold are treated as active file caches, and caches below the threshold are treated as inactive file caches and marked as target file caches.
  • Target file caches with the same priority are added to the same LRU list, and the priority of the file cache is used as the priority of the corresponding LRU list.
  • The advantage of this setting is that inactive file caches are added to different LRU lists in advance, so that the LRU lists can be obtained directly when performing a synchronous scan; there is no need to generate the LRU lists every time a file cache reclamation event is detected, which improves the reclamation efficiency of the file cache.
  • FIG. 2 is a flowchart of another method for recovering a file cache according to an embodiment of the present application. As shown in FIG. 2, the method includes steps 201 to 209.
  • In step 201, it is detected that the priority setting event is triggered.
  • a user is provided with a priority setting function, and an option for setting how many priorities is set and a threshold setting option are displayed in the priority setting interface.
  • The user can input and set N priorities in the priority setting interface according to his own needs, and enter N-1 thresholds used to allocate file cache priorities. For example, if the user inputs 4 priority levels Y1, Y2, Y3, and Y4, with priority order Y1 > Y2 > Y3 > Y4, and enters 3 thresholds N1, N2, and N3, then when the user finishes the input and clicks OK, the priority setting event is triggered. It can be understood that the manner in which the user inputs the priority setting information and the threshold setting information is not limited to manual input and may also be voice input.
  • the priority setting event can be triggered periodically.
  • In step 202, the access frequency of the file cache is obtained.
  • the purpose of file access tracking can generally be achieved by tracking kernel information for file access.
  • a Linux kernel-level tracing framework like ftrace is used to track file access information.
  • the preset program code is inserted before a function to be called corresponding to a file access event, and the file access information corresponding to the function to be called is obtained through the preset program code.
  • The file access information is then stored in a preset storage format.
  • the access frequency of the file cache is determined based on the file access information.
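Deriving access frequencies from traced access events can be reduced to a counting step; here a plain list of file paths stands in for real kernel trace records (such as those produced by ftrace), which is a deliberate simplification.

```python
from collections import Counter


def access_frequencies(trace_events):
    """Count per-file access frequencies from traced access events.

    `trace_events` is a sequence of file paths, one entry per
    recorded access. Real trace records carry timestamps, PIDs and
    more; only the path is needed to tally frequencies.
    """
    return Counter(trace_events)
```

The resulting counts feed directly into the threshold-based priority assignment of step 203.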
  • In step 203, the target file caches whose access frequency is less than a preset threshold are determined, and a priority is set for each target file cache according to its access frequency.
  • For example, the user sets 4 priorities Y1, Y2, Y3, and Y4, with priority order Y1 > Y2 > Y3 > Y4, and enters 3 thresholds N1, N2, and N3.
  • The priority of file caches accessed fewer than N1 times is set to Y4; of those accessed at least N1 but fewer than N2 times, to Y3; of those accessed at least N2 but fewer than N3 times, to Y2; and of those accessed N3 times or more, to Y1.
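The threshold mapping above can be sketched as a small function; the concrete threshold values used here are illustrative assumptions.

```python
def priority_for_frequency(freq, thresholds=(3, 10, 50),
                           priorities=('Y4', 'Y3', 'Y2', 'Y1')):
    """Map an access frequency to a priority label via N-1 thresholds.

    With thresholds (N1, N2, N3) and priorities Y1 > Y2 > Y3 > Y4:
    freq < N1 -> Y4; N1 <= freq < N2 -> Y3; N2 <= freq < N3 -> Y2;
    freq >= N3 -> Y1. The first threshold a frequency falls under
    determines its label.
    """
    for i, limit in enumerate(thresholds):
        if freq < limit:
            return priorities[i]
    return priorities[len(thresholds)]  # at or above the top threshold
```

The same function generalizes to any N chosen by the user, since it only depends on the lengths of the two tuples.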
  • step 204 the target file caches with the same priority are stored in the same LRU list according to a preset storage rule, and the priority of the file cache in the LRU list is used as the priority of the corresponding LRU list.
  • File caches with the same priority are added to the same LRU list. It should be noted that in the same LRU list, the file cache is stored according to the preset storage rule: after the list heads are aligned, in the table formed by the LRU lists, only one list has a file cache added in any given column of allocation units.
  • In step 205, it is detected that the file cache reclamation event is triggered.
  • In step 206, least recently used (LRU) lists with different priorities are obtained.
  • In step 207, the LRU lists are scanned synchronously using preset scanning speeds.
  • The scanning speed of a higher-priority LRU list is slower than that of a lower-priority LRU list, which ensures that file caches in both lower- and higher-priority lists are reclaimed, avoiding the problem that file caches in higher-priority lists are never reclaimed.
  • Scanning the lower-priority LRU lists (faster scanning speed) and the higher-priority LRU lists (slower scanning speed) simultaneously at different speeds ensures that more file caches are reclaimed from the lower-priority lists than from the higher-priority lists, so that the least active file caches are reclaimed most often.
  • the scanning speed of the LRU list with higher priority may be set to half the scanning speed of the LRU list with lower priority.
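Under the halving rule, per-list speeds for N priority levels could be computed as in this sketch; the base speed and its unit (entries per scan round) are assumptions for illustration.

```python
def scan_speeds(num_lists, base_speed=8):
    """Assign a scan speed to each of `num_lists` priority levels.

    Index 0 is the highest priority. Each list is scanned at half
    the speed of the next lower-priority list, so the lowest-priority
    list (the last index) gets the full `base_speed`.
    """
    return [base_speed // (2 ** (num_lists - 1 - i))
            for i in range(num_lists)]
```

For three lists this yields speeds of 2, 4, and 8, matching the pattern that each higher-priority list is scanned at half the speed of the one below it.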
  • In step 208, it is determined whether the file cache has been fully scanned; if yes, step 209 is performed, otherwise step 207 is performed.
  • In step 209, the scanned file cache is reclaimed.
  • In the technical solution of this embodiment, inactive file caches are determined based on their access frequency, and priorities are assigned based on that frequency to dynamically update the priority of the file cache and thereby update the LRU lists. This avoids generating the LRU lists every time a file cache reclamation event is detected, and the number of LRU lists and the file caches they store can be adjusted dynamically according to the threshold setting information and the priority setting information, so that the reclamation of the file cache better matches the user's usage habits.
  • FIG. 3 is a flowchart of another method for recovering a file cache according to an embodiment of the present application. As shown in FIG. 3, the method includes steps 301 to 311.
  • In step 301, it is detected that the priority setting event is triggered.
  • In step 302, the access frequency of the file cache is obtained.
  • In step 303, the target file caches whose access frequency is less than a preset threshold are determined, and a priority is set for each target file cache according to its access frequency.
  • In step 304, the target file caches with the same priority are stored in the same LRU list according to a preset storage rule, and the priority of the file caches in the LRU list is used as the priority of the corresponding LRU list.
  • In step 305, when an access event of the file cache is detected, the priority of the file cache is adjusted according to the priority of the application accessing the file cache.
  • In the Android operating system, the kernel is generally implemented based on Linux; that is, the bottom layer of the system is generally the Linux kernel.
  • the system divides the kernel space and user space. Different operating systems may have different division methods or division results.
  • User space generally refers to the memory area where the user process is located.
  • Application programs run in user space, and user process data is stored in user space.
  • Kernel space is the memory area occupied by the operating system; the operating system and drivers run in kernel space, and their data is stored there. In this way, user data and system data are isolated, which ensures system stability.
  • user space and kernel space interact through system calls.
  • System calls can be understood as the set of all calls provided by the operating system implementation, that is, a program interface or application programming interface (API): the interface between the application and the system.
  • the main function of the operating system is to manage hardware resources and provide a good environment for application developers to make applications more compatible.
  • The kernel provides a series of kernel functions with predetermined functionality, presented to the user through a group of interfaces called system calls.
  • the system call passes the application's request to the kernel, calls the corresponding kernel function to complete the required processing, and returns the processing result to the application.
  • Kernel space must be accessed through system calls, that is, by calling the corresponding system call interface. Therefore, whether a preset file cache access event is triggered can be determined by whether the system call interface corresponding to a preset file access event is called; if it is called, the preset file cache access event can be considered triggered.
  • Since priorities have been set for the target file caches whose access frequency is less than the preset threshold, the first priority, namely the priority of the file cache corresponding to the file access event, can easily be obtained.
  • The system assigns different priorities to applications based on their importance (which can be the system default or user-set).
  • The priority of an application can be set by the user or be the system default. The second priority, namely that of the application accessing the file cache, is obtained, and the first and second priorities are compared. If the second priority is higher than the first, the second priority replaces the first as the priority of the file cache; if the second priority is lower than the first, the priority of the file cache remains unchanged.
  • When at least two applications access the file cache, the target priorities of the at least two applications are obtained, where a target priority is the priority of each application accessing the file cache. The first priority is compared with the target priorities, the highest priority among them is determined, and that highest priority is used as the priority of the file cache.
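The adjustment rule, for one or several accessing applications, reduces to taking a maximum; representing priorities as integers where a larger value means higher priority is an assumption of this sketch.

```python
def adjust_cache_priority(cache_priority, app_priorities):
    """Raise a file cache's priority to the highest priority among
    the applications accessing it, never lowering it.

    With one accessing application this is the first/second-priority
    comparison from the text; with several, the maximum of the first
    priority and all target priorities wins.
    """
    return max([cache_priority] + list(app_priorities))
```

After this adjustment the cache would be moved to the LRU list matching its new priority, which is the list update performed in step 306.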
  • In step 306, the LRU list is updated based on the priority-adjusted file cache.
  • When the priority of a file cache changes, the cache needs to be moved to the LRU list corresponding to its new priority; that is, the LRU list needs to be updated based on the priority-adjusted file cache.
  • In step 307, it is detected that the file cache reclamation event is triggered.
  • In step 308, least recently used (LRU) lists with different priorities are obtained.
  • In step 309, the LRU lists are scanned synchronously using preset scanning speeds.
  • In step 310, it is determined whether the file cache has been fully scanned; if yes, step 311 is performed, otherwise step 309 is performed.
  • In step 311, the scanned file cache is reclaimed.
  • In the technical solution of this embodiment, the priority of the file cache is adjusted according to the priority of the application accessing it, establishing an association between the file cache and the application process accessing it. This avoids the inaccurate reclamation that can result from control based only on the most recently accessed files, and achieves more accurate file cache reclamation control.
  • FIG. 4 is a structural block diagram of a file cache recovery device provided by an embodiment of the present application.
  • the device may be implemented by software and / or hardware, and is generally integrated in a terminal.
  • The file cache can be reclaimed accurately by executing a file cache recovery method.
  • the device includes an event detection module 410, a list acquisition module 420, and a list scanning module 430.
  • the event detection module 410 is configured to detect that a file cache collection event is triggered.
  • The list obtaining module 420 is configured to obtain least recently used (LRU) lists with different priorities, wherein the priority of an LRU list is determined according to the situation in which a file cache is accessed and the priority of the application accessing the file cache.
  • the list scanning module 430 is configured to determine a scanning speed corresponding to each LRU list according to the priority of each LRU list, and synchronously scan the LRU list by using the scanning speed to recover the scanned file cache.
  • The file cache recovery device, upon detecting that a file cache recovery event is triggered, obtains least recently used (LRU) lists with different priorities, wherein the priority of an LRU list is determined according to the access situation of the file cache and the priority of the application accessing the file cache; the scanning speed corresponding to each LRU list is determined according to its priority, and the LRU lists are scanned synchronously at those speeds to reclaim the scanned file caches.
  • The priority of an LRU list can be determined according to the access situation of the file cache and the priority of the application accessing the file cache, and based on that priority, the scanning speed of the scan operation on each LRU list is determined. Scanning the LRU lists synchronously at those speeds avoids the situation in which the file cache in a high-priority list is never reclaimed, implementing a more precise file cache reclamation control method and improving the reclamation accuracy.
  • the method further includes: an LRU list generating module.
  • The LRU list generation module is configured to: obtain the access frequency of the file cache before detecting that the file cache reclamation event is triggered; determine the target file caches whose access frequency is less than a preset threshold; set a priority for each target file cache according to its access frequency; and store target file caches with the same priority in the same LRU list according to preset storage rules, using the priority of the file caches in an LRU list as the priority of the corresponding LRU list.
  • the method further includes a priority adjustment module and an LRU list update module.
  • a priority adjustment module configured to adjust a priority of the file cache according to a priority of an application accessing the file cache when an access event of the file cache is detected
  • the LRU list update module is configured to update the LRU list based on the priority-adjusted file cache.
  • the priority adjustment module is further configured to:
  • the second priority is used as a priority of the file cache.
  • the priority adjustment module is further configured to: obtain a first priority of the file cache corresponding to the access event of the file cache; when at least two applications access the file cache simultaneously, obtain target priorities of the at least two applications respectively; and compare the first priority with the target priorities, using the highest priority as the priority of the file cache.
  • the list scanning module 430 is configured to: match a corresponding scanning speed according to the priority of each LRU list, where, for two LRU lists with adjacent priorities, the scanning speed of the higher-priority LRU list is lower than the scanning speed of the lower-priority LRU list; scan the LRU lists synchronously at the scanning speeds; and, if a file cache is scanned, reclaim the scanned file cache.
  • the scanning speed of the higher-priority LRU list is half the scanning speed of the lower-priority LRU list.
  • how the file cache has been accessed includes: the frequency at which the file cache is accessed or the time at which the file cache is accessed.
  • the list scanning module is further configured to start from the heads of the multiple LRU lists and scan the multiple LRU lists simultaneously at different scanning speeds until the tails of the multiple LRU lists are reached.
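As an illustration of the synchronized scanning described above, the following sketch scans several lists in parallel, advancing each list by its own per-round speed until every tail is reached. The list contents and speed values are hypothetical examples, not taken from the embodiments:

```python
def synchronized_scan(lru_lists, speeds):
    """Scan several LRU lists in parallel: in each round, advance every
    list by its own speed (entries per round), collecting what is scanned,
    until every list has been scanned to its tail."""
    positions = [0] * len(lru_lists)
    reclaimed = []
    while any(pos < len(lst) for pos, lst in zip(positions, lru_lists)):
        for i, (lst, speed) in enumerate(zip(lru_lists, speeds)):
            # advance this list by `speed` entries in this round
            reclaimed.extend(lst[positions[i]:positions[i] + speed])
            positions[i] += speed
    return reclaimed

# Hypothetical contents: LRU[0] is the highest-priority (slowest-scanned) list.
lru0 = ["a0", "a1"]
lru1 = ["b0", "b1", "b2", "b3"]
lru2 = ["c0", "c1", "c2", "c3", "c4", "c5", "c6", "c7"]
order = synchronized_scan([lru0, lru1, lru2], speeds=[1, 2, 4])
# Round 1 reclaims one entry from lru0, two from lru1, and four from lru2.
```

Note that every list contributes entries in each round, so the high-priority list is reclaimed too, just more slowly.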
  • An embodiment of the present application further provides a storage medium containing computer-executable instructions.
  • the computer-executable instructions, when executed by a computer processor, perform a method for reclaiming a file cache.
  • the method includes: detecting that a file cache reclamation event is triggered; obtaining least recently used LRU lists having different priorities, where the priority of an LRU list is determined according to how the file cache has been accessed and the priority of the application accessing the file cache;
  • determining the scanning speed corresponding to each LRU list according to the priority of each LRU list, scanning the LRU lists synchronously at the scanning speeds, and reclaiming the scanned file caches.
  • Storage medium: any of various types of memory devices or storage devices.
  • the term "storage medium" is intended to include: installation media, such as Compact Disc Read-Only Memory (CD-ROM), floppy disks, or magnetic tape devices; computer system memory or random access memory, such as Dynamic Random Access Memory (DRAM), Double Data Rate Synchronous Dynamic Random Access Memory (DDR RAM), Static Random Access Memory (SRAM), Extended Data Output Random Access Memory (EDO RAM), Rambus RAM, etc.; non-volatile memory, such as flash memory or magnetic media (such as hard disks or optical storage); and registers or other similar types of memory elements.
  • the storage medium may further include other types of memory or a combination thereof.
  • the storage medium may be located in a first computer system in which the program is executed, or may be located in a different second computer system connected to the first computer system through a network such as the Internet.
  • the second computer system may provide program instructions to the first computer for execution.
  • the term "storage medium" may include two or more storage media that may reside in different locations, such as in different computer systems connected through a network.
  • the storage medium may store program instructions (for example, embodied as a computer program) executable by one or more processors.
  • the storage medium provided by the embodiments of the present application includes computer-executable instructions, and the computer-executable instructions are not limited to the reclamation operation of the file cache described above, and may also execute the file cache provided by any embodiment of the present application Relevant operations in the recycling method.
  • FIG. 5 is a schematic structural diagram of a terminal according to an embodiment of the present application.
  • the terminal includes a memory 510 and a processor 520.
  • the memory 510 is configured to store a computer program, an LRU list, and the like; the processor 520 reads and executes the computer program stored in the memory 510.
  • the processor 520 implements the following steps when executing the computer program: detecting that a file cache reclamation event is triggered; obtaining least recently used LRU lists having different priorities, where the priority of an LRU list is determined according to how the file cache has been accessed and the priority of the application accessing the file cache; and determining the scanning speed corresponding to each LRU list according to the priority of each LRU list, scanning the LRU lists synchronously at the scanning speeds, and reclaiming the scanned file caches.
  • FIG. 6 is a structural block diagram of a smart phone according to an embodiment of the present application.
  • the smart phone may include: a memory 601, a central processing unit (CPU) 602 (also referred to as a processor, hereinafter CPU), a peripheral interface 603, a radio frequency (RF) circuit 605, an audio circuit 606, a speaker 611, a touch screen 612, a power management chip 608, an input/output (I/O) subsystem 609, other input/control devices 610, and an external port 604; these components communicate through one or more communication buses or signal lines 607.
  • the illustrated smartphone 600 is only an example of a terminal, and the smartphone 600 may have more or fewer components than those shown in the figure, two or more components may be combined, or a different component configuration may be used.
  • the various components shown in the figures can be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application-specific integrated circuits.
  • Memory 601, which can be accessed by the CPU 602, the peripheral interface 603, and so on.
  • the memory 601 can include high-speed random access memory, and can also include non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other volatile solid-state storage devices.
  • a computer program is stored in the memory 601, and a preset file and a preset white list can also be stored.
  • Peripheral interface 603, which can connect the input and output peripherals of the device to the CPU 602 and the memory 601.
  • the I / O subsystem 609 which can connect input / output peripherals on the device, such as touch screen 612 and other input / control devices 610, to peripheral interface 603.
  • the I / O subsystem 609 may include a display controller 6091 and one or more input controllers 6092 for controlling other input / control devices 610.
  • one or more input controllers 6092 receive electrical signals from or send electrical signals to other input / control devices 610.
  • Other input/control devices 610 may include physical buttons (press buttons, rocker buttons, etc.), dials, slide switches, joysticks, and click wheels.
  • the input controller 6092 can be connected to any of the following: a keyboard, an infrared port, a USB interface, and a pointing device such as a mouse.
  • a touch screen 612 which is an input interface and an output interface between a user terminal and a user, and displays a visual output to the user.
  • the visual output may include graphics, text, icons, videos, and the like.
  • the display controller 6091 in the I / O subsystem 609 receives electrical signals from the touch screen 612 or sends electrical signals to the touch screen 612.
  • the touch screen 612 detects contact on the touch screen, and the display controller 6091 converts the detected contact into interaction with a user interface object displayed on the touch screen 612, that is, realizes human-computer interaction.
  • the user interface objects displayed on the touch screen 612 may be icons of running games, icons for connecting to the corresponding network, and the like.
  • the device may also include a light mouse, which is a touch-sensitive surface that does not display visual output, or an extension of the touch-sensitive surface formed by the touch screen.
  • the RF circuit 605 is mainly configured to establish communication between the mobile phone and the wireless network (that is, the network side) and to realize data reception and transmission between the mobile phone and the wireless network, for example, sending and receiving short messages and e-mail. Specifically, the RF circuit 605 receives and sends RF signals.
  • the RF signal is also referred to as an electromagnetic signal.
  • the RF circuit 605 converts an electrical signal into an electromagnetic signal or converts an electromagnetic signal into an electrical signal, and communicates with the communication network and other devices through the electromagnetic signal.
  • the RF circuit 605 may include known circuits for performing these functions, including but not limited to an antenna system, an RF transceiver, one or more amplifiers, a tuner, one or more oscillators, a digital signal processor, a COder-DECoder (CODEC) chipset, a Subscriber Identity Module (SIM), and so on.
  • the audio circuit 606 is mainly configured to receive audio data from the peripheral interface 603, convert the audio data into electrical signals, and send the electrical signals to the speaker 611.
  • the speaker 611 is configured to restore a voice signal received by the mobile phone from the wireless network through the RF circuit 605 to a sound and play the sound to a user.
  • the power management chip 608 is configured to provide power and power management for the hardware connected to the CPU 602, the I / O subsystem, and peripheral interfaces.
  • the terminal provided in the embodiments of the present application may determine the priority of an LRU list according to how the file cache has been accessed and the priority of the application accessing the file cache, and determine, based on that priority, the scanning speed at which the scan operation is performed on the LRU list.
  • scanning the LRU lists synchronously at those scanning speeds can avoid the situation in which the file caches in a high-priority list are never reclaimed, realizes a more precise file cache reclamation control method, and improves reclamation accuracy.
  • the file cache reclamation device, storage medium, and terminal provided in the foregoing embodiments can execute the method for reclaiming a file cache provided by any embodiment of the present application, and have the corresponding function modules and beneficial effects for executing the method.
  • for technical details not described in detail in the foregoing embodiments, refer to the method for reclaiming a file cache provided in any embodiment of the present application.

Abstract

Disclosed are a method and device for reclaiming a file cache, a storage medium, and a terminal. The method includes: detecting that a file cache reclamation event is triggered; obtaining least recently used (LRU) lists having different priorities; determining, according to the priority of each LRU list, a scanning speed corresponding to that LRU list; scanning the LRU lists synchronously at the scanning speeds; and reclaiming the scanned file caches.

Description

Method and device for reclaiming a file cache, storage medium, and terminal
This application claims priority to Chinese patent application No. 201811125065.3, filed with the Chinese Patent Office on September 26, 2018, the entire contents of which are incorporated herein by reference.
Technical Field
Embodiments of this application relate to the field of terminal technologies, for example, to a method and device for reclaiming a file cache, a storage medium, and a terminal.
Background
At present, much data is stored in the form of files. In a terminal's operating system, the file cache (page cache) plays a very important role: caching file contents in memory improves file access efficiency.
However, because memory capacity is far smaller than the capacity of external storage, file caches must be continually deleted and added as the terminal's usage time increases. Deleting unneeded file caches more accurately, so as to provide space for file caches to be added, is therefore an important topic in file cache optimization. However, the file cache reclamation schemes in the related art are still imperfect and cannot reclaim file caches precisely in practical applications.
Summary
Embodiments of this application provide a method and device for reclaiming a file cache, a storage medium, and a terminal, which can optimize the file cache reclamation scheme and improve reclamation accuracy.
In a first aspect, an embodiment of this application provides a method for reclaiming a file cache, including:
detecting that a file cache reclamation event is triggered;
obtaining least recently used (LRU) lists having different priorities, where the priority of an LRU list is determined according to how the file caches have been accessed and the priorities of the applications accessing the file caches;
determining, according to the priority of each LRU list, a scanning speed corresponding to that LRU list, scanning the LRU lists synchronously at the scanning speeds, and reclaiming the scanned file caches.
In a second aspect, an embodiment of this application further provides a device for reclaiming a file cache, including:
an event detection module, configured to detect that a file cache reclamation event is triggered;
a list obtaining module, configured to obtain least recently used (LRU) lists having different priorities, where the priority of an LRU list is determined according to how the file caches have been accessed and the priorities of the applications accessing the file caches;
a list scanning module, configured to determine, according to the priority of each LRU list, the scanning speed corresponding to that LRU list, scan the LRU lists synchronously at the scanning speeds, and reclaim the scanned file caches.
In a third aspect, an embodiment of this application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the method for reclaiming a file cache described in the embodiments of this application.
In a fourth aspect, an embodiment of this application provides a terminal, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the computer program, implements the method for reclaiming a file cache described in the embodiments of this application.
Brief Description of the Drawings
FIG. 1 is a flowchart of a method for reclaiming a file cache according to an embodiment of this application;
FIG. 2 is a flowchart of another method for reclaiming a file cache according to an embodiment of this application;
FIG. 3 is a flowchart of yet another method for reclaiming a file cache according to an embodiment of this application;
FIG. 4 is a structural block diagram of a device for reclaiming a file cache according to an embodiment of this application;
FIG. 5 is a schematic structural diagram of a terminal according to an embodiment of this application;
FIG. 6 is a structural block diagram of a smartphone according to an embodiment of this application.
Detailed Description
This application is further described in detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are merely intended to explain this application and do not limit it. It should also be noted that, for ease of description, the drawings show only the parts related to this application rather than the entire structure.
Before the exemplary embodiments are discussed in more detail, it should be mentioned that some of them are described as processes or methods depicted in flowcharts. Although a flowchart describes the steps as sequential, many of the steps may be performed in parallel, concurrently, or simultaneously. In addition, the order of the steps may be rearranged. A process may be terminated when its operations are completed, but may also have additional steps not included in the drawings. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, and the like.
FIG. 1 is a flowchart of a method for reclaiming a file cache according to an embodiment of this application. The method may be performed by a device for reclaiming a file cache, which may be implemented in software and/or hardware and is generally integrated in a terminal. As shown in FIG. 1, the method includes step 110, step 120, and step 130.
In step 110, it is detected that a file cache reclamation event is triggered.
In an embodiment, the terminal in the embodiments of this application may include devices installed with an operating system, such as mobile phones, tablet computers, laptops, computers, and smart home appliances.
The embodiments of this application do not limit the type of the operating system, which may include, for example, the Android operating system, the Symbian operating system, the Windows operating system, and the iOS operating system. For ease of description, the Android operating system is used as an example in the following description. In the Android operating system of a terminal, various kinds of files are accessed very frequently, and in Android file access the file cache plays a very important role: caching file contents in memory improves file access efficiency. However, memory capacity is limited and far smaller than the capacity of external storage; when memory is insufficient, caching a file in memory may first require deleting other cached files. In other words, the file caching process is a process of continually deleting and adding data. How to delete unneeded file caches more accurately, so as to provide space for data to be added to the file cache, is therefore an important topic in file cache optimization. At present, an operating system typically uses the file cache as follows:
1. When a file access event is detected and there is free memory, the free memory is used as a file cache, and the state of this part of the file cache (that is, memory space) is changed from the free state to the available state. If a file cache is in the available state, it can be reclaimed when reclamation is needed. The memory is divided into allocation units of several bytes each, and whether each allocation unit is free is described by a bitmap: if a unit has been allocated, the corresponding bit is set to 1; if not, it is set to 0. Allocation units whose bits are 0 are called free memory.
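The bitmap bookkeeping described in point 1 can be sketched as follows; the bitmap contents and the threshold are hypothetical values used only for illustration:

```python
def count_free_units(bitmap):
    """In the allocation bitmap, 1 marks an allocated unit and 0 a free one."""
    return sum(1 for bit in bitmap if bit == 0)

def reclamation_needed(bitmap, threshold):
    """A reclamation event is triggered when free units drop below the threshold."""
    return count_free_units(bitmap) < threshold

# Hypothetical 8-unit bitmap: units 2 and 4 are free, the rest are allocated.
bitmap = [1, 1, 0, 1, 0, 1, 1, 1]
```

With this bitmap, a threshold of 3 free units would trigger reclamation, while a threshold of 2 would not.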
2. When free memory is insufficient, part of the file cache is reclaimed. Fast reclamation is performed first (for example, file caches that have not been modified and do not need to be written back to external storage are reclaimed first), followed by slow reclamation (for example, file caches that have been modified and need to be written back to external storage are reclaimed).
3. All file caches are managed through least recently used (LRU) lists, and the file caches that have gone the longest without being accessed again are reclaimed first. The system maintains two LRU lists, storing active file caches and inactive file caches respectively, and the reclamation operation always starts from the head of the inactive list. Whether a file cache is active can be judged according to the frequency with which it is accessed.
It follows that the file cache reclamation scheme in the related art uses the access time of a file cache as the sole indicator of whether to reclaim it. In practical applications, malicious software that frequently accesses certain file caches may turn them into recently accessed file caches that occupy the cache for a long time and hinder the addition of new file caches; that is, the reclamation scheme in the related art suffers from imprecise file cache reclamation.
In the embodiments of this application, file caches can be reclaimed based on how they have been accessed and the priority information of the application processes accessing them, thereby achieving more precise file cache reclamation control.
It should be noted that there are many conditions that can trigger a file cache reclamation event, and the embodiments of this application do not specifically limit them. For example, when free memory is detected to be smaller than a preset threshold, it is determined that the current free memory is insufficient and a file cache reclamation event is triggered. As another example, a file cache reclamation event is triggered when the storage space required by a file to be added to the memory space is detected to be larger than a set threshold. As yet another example, file cache reclamation events may be triggered periodically, and so on.
In step 120, least recently used (LRU) lists having different priorities are obtained.
It should be noted that the priority of an LRU list is determined according to how the file caches have been accessed and the priorities of the applications accessing the file caches, where how a file cache has been accessed includes its access frequency or access time. For example, a frequently accessed file cache has a higher priority, or a file cache whose access time falls within a period the user cares about has a higher priority.
In an embodiment, for the file caches in the inactive LRU list, N LRU lists are set to store file caches of different priorities, where N is a positive integer. If N is too small, the improvement in file cache reclamation accuracy achieved by this application is limited; if N is too large, jumping between the LRU lists increases system overhead. The value of N can therefore be determined based on test results. Priorities can be assigned to the file caches based on their access frequencies and the priorities of the application processes accessing them, and file caches with the same priority are stored in the same LRU list. Because the file caches stored in one LRU list have the same priority, that priority can be used as the priority of the LRU list. For example, assume N is 3; the file caches in the inactive LRU list are then divided into three classes according to their activity (for example, their access frequency, from which the priority of a file cache can be determined) and stored in three LRU lists, denoted LRU[0], LRU[1], and LRU[2], each representing file caches of one priority. According to the activity of the file caches, the priorities of the classes are denoted the first priority (the highest), the second priority, and the third priority (the lowest), where the first priority is higher than the second priority and the second priority is higher than the third priority.
In an embodiment, file caches with the same priority may be stored non-contiguously in an LRU list. For example, in the lowest-priority LRU list, the file cache fills two consecutive allocation units, skips one unit, fills two consecutive units again, skips two units, then fills two consecutive units, skips one unit again, fills two consecutive units, skips two units, fills two consecutive units, and so on; following this storage pattern, file caches are added to the lowest-priority LRU list. In the LRU list one level above the lowest priority, the file cache data is written into one allocation unit after two consecutive units are skipped, then two consecutive units are skipped again and the data is written into the next adjacent unit, then three consecutive units are skipped and the data is written into the next adjacent unit, and so on. In the LRU list two levels above the lowest priority, the data is written into one allocation unit after six consecutive units are skipped, then six consecutive units are skipped again and the data is written into the next adjacent unit, and so on. File caches of different priorities are added to different LRU lists according to these patterns, ensuring that in each column only the allocation unit of a single LRU list is filled with a file cache. Taking three priorities and the corresponding three LRU lists as an example, Table 1 shows the table formed by the LRU lists of inactive file caches.
Table 1. LRU lists of inactive file caches.
LRU[0]             zZ             Zz
LRU[1]     bB     nN       kK     Oo  
LRU[2] aA Cc   dD Cc     xX eE   Mm Yy    
It should be noted that the storage rules for adding file caches to the LRU lists are not limited to the manner given in the above example; any other rule that ensures that, after the list heads are aligned, only one LRU list's allocation unit in each column of the table formed by the LRU lists is filled with a file cache also belongs to the storage rules described in this application.
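The constraint that any valid storage rule must satisfy — after the list heads are aligned, at most one LRU list has a filled allocation unit in each column — can be checked with a small sketch like the following. The layout shown is a hypothetical miniature in the spirit of Table 1, not the exact pattern from the embodiment:

```python
def one_entry_per_column(lists):
    """Check that, with the list heads aligned, at most one LRU list has an
    occupied allocation unit (non-None) in any column of the table."""
    width = max(len(lst) for lst in lists)
    for col in range(width):
        occupied = sum(1 for lst in lists if col < len(lst) and lst[col] is not None)
        if occupied > 1:
            return False
    return True

# Hypothetical miniature layout: None marks an empty allocation unit.
layout = [
    [None, "zZ", None],   # higher-priority list
    ["bB", None, None],   # middle list
    [None, None, "aA"],   # lower-priority list
]
```

A layout in which two lists fill the same column, such as `[["x"], ["y"]]`, would violate the rule.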
In an embodiment, when it is detected that a file cache reclamation event is triggered, multiple least recently used (LRU) lists having different priorities are obtained from memory.
In step 130, the scanning speed corresponding to each LRU list is determined according to the priority of each LRU list, the LRU lists are scanned synchronously at the scanning speeds, and the scanned file caches are reclaimed.
It should be noted that multiple different scanning speeds are preset; for example, the speeds are related by multiples. The scanning speeds can be sorted according to this multiple relationship to obtain a scanning speed sequence, and the LRU lists can be sorted by priority to obtain an LRU list sequence; a mapping between the LRU list sequence and the scanning speed sequence is then determined. For example, the highest-priority LRU list corresponds to the lowest speed value in the scanning speed sequence, and the lowest-priority LRU list corresponds to the highest speed value; that is, the mapping between the LRU list sequence and the scanning speed sequence is established such that the higher the priority, the lower the scanning speed.
It should be noted that scanning the LRU lists synchronously means starting from the head of each LRU list and scanning the lists at the same time, each at its own scanning speed, until the tail of each LRU list is reached, at which point scanning ends.
In an embodiment, the LRU lists are scanned synchronously at their corresponding scanning speeds, and the scanned file caches are reclaimed. For example, assume the scanning speeds are V0, V1, and V2 with V0 > V1 > V2; then the highest-priority list LRU[0] corresponds to the lowest scanning speed V2, the next-priority list LRU[1] corresponds to scanning speed V1, and the lowest-priority list LRU[2] corresponds to the highest scanning speed V0. Starting from the head of each LRU list, the LRU[0] list is scanned at speed V2, the LRU[1] list at speed V1, and the LRU[2] list at speed V0, and a reclamation operation is performed on each scanned file cache. For each LRU list, the reclaimed file caches are numbered starting from 0. Table 2 shows the relationship between the number of scans (also called the scanning time) and the LRU lists.
Table 2. Relationship between the number of scans and the LRU lists.
[Table 2 image not reproduced: for each scan count, it lists the cache numbers of the file caches reclaimed from each LRU list.]
Here [0], [1], [2], [3], [4], [5], [6], [7], ... denote the cache numbers of the reclaimed file caches.
As shown in Table 2, as the number of scans (or, viewed as such, the scanning time) increases, file caches are reclaimed from every LRU list.
Synchronized scanning ensures that after a certain number of file caches in the lower-priority LRU lists have been reclaimed, the file caches in the higher-priority LRU lists are also reclaimed, preventing the situation in which the file caches in a high-priority LRU list are never reclaimed.
It should be noted that because file caches are not stored contiguously in the LRU lists, reclaiming the scanned file caches does not result in reclaiming only the file caches in the quickly scanned LRU[2] list.
In an embodiment, because a lower-priority LRU list is scanned faster than a higher-priority one, the lower-priority list is always fully traversed before the higher-priority list. Therefore, after the lowest-priority LRU list has been fully traversed, the LRU list one level above it can automatically be demoted to become the lowest-priority LRU list.
According to the technical solution of the embodiments of this application, if it is detected that a file cache reclamation event is triggered, least recently used (LRU) lists having different priorities are obtained, where the priority of an LRU list is determined according to how the file caches have been accessed and the priorities of the applications accessing the file caches; the scanning speed corresponding to each LRU list is determined according to the priority of each list, the LRU lists are scanned synchronously at those speeds, and the scanned file caches are reclaimed. With this technical solution, the priority of an LRU list can be determined from how the file cache has been accessed and the priority of the accessing application, the scanning speed for the scan operation on the list is determined from that priority, and the lists are scanned synchronously at those speeds; this prevents the file caches in high-priority lists from never being reclaimed, implements a more precise file cache reclamation control method, and improves reclamation accuracy.
In some embodiments, before it is detected that a file cache reclamation event is triggered, the method further includes: obtaining the access frequency of the file caches; determining target file caches whose access frequency is below a preset threshold, and setting a priority for each target file cache according to its access frequency; and storing target file caches with the same priority in the same LRU list according to a preset storage rule, using the priority of the file caches in an LRU list as the priority of the corresponding LRU list. This additional technical solution can be regarded as an initialization step of the file cache reclamation function: the frequency with which file caches in the system are accessed is counted, the file caches are classified based on that frequency, file caches whose frequency is above the preset threshold are treated as active file caches, and file caches whose frequency is below the preset threshold are treated as inactive file caches and marked as target file caches. Target file caches with higher access frequency can be regarded as more active file caches and assigned higher priorities, and those with lower access frequency as less active file caches and assigned lower priorities. Target file caches with the same priority are added to the same LRU list, and the priority of the file caches is used as the priority of the corresponding LRU list. The benefit of this arrangement is that inactive file caches are added to the different LRU lists in advance, so that the LRU lists can be obtained directly when a synchronized scan is performed, without regenerating them every time a file cache reclamation event is detected, which improves the efficiency of file cache reclamation.
FIG. 2 is a flowchart of another method for reclaiming a file cache according to an embodiment of this application. As shown in FIG. 2, the method includes steps 201 to 209.
In step 201, it is detected that a priority setting event is triggered.
In an embodiment, a priority setting function is provided for the user, and the priority setting interface displays options for how many priorities to set as well as threshold setting options. The user may enter N priorities and N-1 thresholds as needed, so that file cache priorities are assigned based on the thresholds. For example, the user enters four priorities Y1, Y2, Y3, and Y4, ordered Y1 > Y2 > Y3 > Y4, and three thresholds N1, N2, and N3; the priority setting event is triggered when the user confirms the input. It can be understood that the manner of entering the priority setting information and threshold setting information is not limited to manual input and may also be, for example, voice input.
It should be noted that this application does not specifically limit how the priority setting event is triggered; for example, it may be triggered periodically.
In step 202, the access frequency of the file caches is obtained.
In an embodiment, file access tracing is usually achieved by tracing the kernel information of file accesses. For example, a Linux kernel-level tracing framework similar to ftrace is used to trace file access information. As another example, preset program code written based on a preset virtual machine is inserted before the to-be-called function corresponding to a file access event; the preset program code obtains the file access information corresponding to the to-be-called function and stores it in the storage format of the preset virtual machine. The access frequency of the file caches is then determined based on the file access information.
In step 203, target file caches whose access frequency is below a preset threshold are determined, and a priority is set for each target file cache according to its access frequency.
In an embodiment, assume the user enters four priorities Y1, Y2, Y3, and Y4 ordered Y1 > Y2 > Y3 > Y4 and three thresholds N1, N2, and N3. File caches whose access frequency is below N1 are assigned priority Y4; file caches whose access frequency is greater than or equal to N1 but below N2 are assigned Y3; file caches whose access frequency is greater than or equal to N2 but below N3 are assigned Y2; and file caches whose access frequency is above N3 are assigned Y1.
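The threshold-based priority assignment in this step can be sketched as follows. The threshold values in `N` are hypothetical examples of the Y1 > Y2 > Y3 > Y4, N1 < N2 < N3 scheme described above; the boundary case where the frequency equals N3 is treated as Y1 here, one possible reading of the text:

```python
def assign_priority(freq, thresholds, priorities):
    """Map an access frequency to a priority label using ascending
    thresholds [N1, N2, N3] and labels [Y4, Y3, Y2, Y1] ordered from
    lowest to highest priority: a lower frequency gets a lower priority."""
    for i, threshold in enumerate(thresholds):
        if freq < threshold:
            return priorities[i]
    return priorities[-1]

# Hypothetical threshold values and priority labels:
N = [10, 50, 200]              # N1 < N2 < N3
Y = ["Y4", "Y3", "Y2", "Y1"]   # Y4 lowest priority, Y1 highest
```

For example, a frequency of 5 maps to Y4, while a frequency of 500 maps to Y1.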
In step 204, target file caches with the same priority are stored in the same LRU list according to a preset storage rule, and the priority of the file caches in an LRU list is used as the priority of the corresponding LRU list.
In an embodiment, file caches with the same priority are added to the same LRU list. It should be noted that within one LRU list the file caches are stored according to the preset storage rule, so that after the list heads are aligned, only one LRU list's allocation unit is filled in each column of the table formed by the LRU lists.
In step 205, it is detected that a file cache reclamation event is triggered.
In step 206, least recently used (LRU) lists having different priorities are obtained.
In step 207, the LRU lists are scanned synchronously at preset scanning speeds.
It should be noted that, for two LRU lists with adjacent priorities, the scanning speed of the higher-priority LRU list is lower than that of the lower-priority LRU list, which ensures that file caches in both the lower-priority and higher-priority lists are reclaimed and prevents the file caches in the higher-priority list from going unreclaimed. In addition, because the lower-priority list (scanned faster) and the higher-priority list (scanned slower) are scanned synchronously at different speeds, more file caches are reclaimed from the lower-priority list than from the higher-priority one, so that the least active file caches are reclaimed in greater numbers.
In an embodiment, the scanning speed of the higher-priority LRU list may be set to half the scanning speed of the lower-priority LRU list.
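Under the half-speed rule, the scan speeds across priority levels form a doubling sequence. A minimal sketch, assuming level 0 is the highest priority and a hypothetical base speed of one entry per round:

```python
def scan_speed(priority_level, base_speed=1):
    """Scan speed for a list at the given priority level, where level 0 is
    the highest priority. Each step down in priority doubles the speed, so
    every higher-priority list is scanned at half the speed of the list
    one level below it."""
    return base_speed * (2 ** priority_level)

# Levels 0 (highest priority) through 2 (lowest priority):
speeds = [scan_speed(level) for level in range(3)]
```

With three levels this yields speeds of 1, 2, and 4 entries per round, matching the "half the speed of the adjacent lower-priority list" relationship.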
In step 208, it is judged whether a file cache has been scanned; if so, step 209 is performed; otherwise, the process returns to step 207.
In step 209, the scanned file cache is reclaimed.
According to the technical solution of this embodiment of the application, before a file cache reclamation event is detected, if a priority setting event is detected to be triggered, inactive file caches are determined based on the access frequency of the file caches and are assigned priorities according to that frequency, so that the priorities of the file caches, and hence the LRU lists, are updated dynamically. This avoids regenerating the LRU lists every time a file cache reclamation event is detected, and also allows the number of LRU lists and the file caches they store to be adjusted dynamically according to the threshold setting information and priority setting information, making file cache reclamation better match the user's usage habits.
FIG. 3 is a flowchart of yet another method for reclaiming a file cache according to an embodiment of this application. As shown in FIG. 3, the method includes steps 301 to 311.
In step 301, it is detected that a priority setting event is triggered.
In step 302, the access frequency of the file caches is obtained.
In step 303, target file caches whose access frequency is below a preset threshold are determined, and a priority is set for each target file cache according to its access frequency.
In step 304, target file caches with the same priority are stored in the same LRU list according to a preset storage rule, and the priority of the file caches in an LRU list is used as the priority of the corresponding LRU list.
In step 305, when an access event of a file cache is detected, the priority of the file cache is adjusted according to the priority of the application accessing the file cache.
For many operating systems, the kernel is implemented based on Linux, and the bottom layer of the system is generally the Linux kernel. The system divides memory into kernel space and user space; the manner or result of this division may differ between operating systems. User space generally refers to the memory region where user processes reside: applications run in user space and the data of user processes is stored there. Kernel space is the memory region occupied by the operating system: the operating system and drivers run in kernel space and the operating system's data is stored there. In this way, user data and system data are isolated, ensuring system stability. In general, user space and kernel space interact through system calls. The system calls can be understood as the set of all system calls provided by the operating system implementation, that is, the program interface or application programming interface (API) between applications and the system. The main functions of an operating system are to manage hardware resources and to provide application developers with a good environment so that applications have better compatibility. To this end, the kernel provides a series of kernel functions with predetermined capabilities, presented to the user through a set of interfaces called system calls. A system call passes an application's request to the kernel, calls the corresponding kernel functions to complete the required processing, and returns the processing result to the application.
In the embodiments of this application, when an application accesses a file, it needs to access kernel space through a system call, that is, it needs to call the corresponding system call interface. Therefore, whether the preset file cache access event is triggered can be judged according to whether the system call interface corresponding to the preset file access event has been called; if it has been called, the preset file cache access event can be considered triggered.
In an embodiment, based on the access frequency of the file caches, the priority of the target file caches whose frequency is below the preset threshold is preset as a first priority, so the first priority of the file cache corresponding to the file access event can easily be obtained. In addition, applications are assigned different priorities based on their importance (which may be a system default or set by the user); that is, the priority of an application may be set by the user or set by the system by default. The second priority of the application accessing the file cache is obtained, and the first priority is compared with the second priority. If the second priority is higher than the first priority, the second priority replaces the first priority as the priority of the file cache; if the second priority is lower than the first priority, the priority of the file cache remains unchanged.
In an embodiment, if at least two applications access a file cache at the same time, the target priorities of the at least two applications are obtained respectively, where the target priorities are the priorities of the respective applications accessing the file cache. The first priority is compared with the target priorities, the highest priority among them is determined, and that highest priority is used as the priority of the file cache.
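The comparison described here — the file cache takes the highest priority among its own first priority and the accessing applications' target priorities — can be sketched as follows, assuming a hypothetical convention in which a larger number means a higher priority:

```python
def adjusted_priority(first_priority, app_priorities):
    """When one or more applications access a file cache, the cache takes
    the highest priority among its own first priority and the accessing
    applications' target priorities (larger number = higher priority)."""
    return max([first_priority] + list(app_priorities))
```

This also covers the single-application case: with one accessing application, the cache priority is raised only if the application's priority is higher, and stays unchanged otherwise.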
In step 306, the LRU lists are updated based on the priority-adjusted file cache.
In an embodiment, if the priority of a file cache changes, the file cache needs to be moved to the LRU list of the corresponding priority, that is, the LRU lists need to be updated based on the priority-adjusted file cache.
In step 307, it is detected that a file cache reclamation event is triggered.
In step 308, least recently used (LRU) lists having different priorities are obtained.
In step 309, the LRU lists are scanned synchronously at preset scanning speeds.
In step 310, it is judged whether a file cache has been scanned; if so, step 311 is performed; otherwise, the process returns to step 309.
In step 311, the scanned file cache is reclaimed.
According to the technical solution of this embodiment of the application, when an access event of a file cache is detected, the priority of the file cache is adjusted according to the priority of the application accessing the file cache, which establishes an association between the priority of the file cache and the application process using it. This avoids the imprecise reclamation that may result from controlling reclamation solely according to how recently a file has been accessed, achieving more precise file cache reclamation control.
FIG. 4 is a structural block diagram of a device for reclaiming a file cache according to an embodiment of this application. The device may be implemented in software and/or hardware, is generally integrated in a terminal, and can reclaim the file cache precisely by performing the method for reclaiming a file cache. As shown in FIG. 4, the device includes an event detection module 410, a list obtaining module 420, and a list scanning module 430.
The event detection module 410 is configured to detect that a file cache reclamation event is triggered.
The list obtaining module 420 is configured to obtain least recently used (LRU) lists having different priorities, where the priority of an LRU list is determined according to how the file caches have been accessed and the priorities of the applications accessing the file caches.
The list scanning module 430 is configured to determine, according to the priority of each LRU list, the scanning speed corresponding to that LRU list, scan the LRU lists synchronously at the scanning speeds, and reclaim the scanned file caches.
With the device for reclaiming a file cache provided by the embodiments of this application, if it is detected that a file cache reclamation event is triggered, least recently used (LRU) lists having different priorities are obtained, where the priority of an LRU list is determined according to how the file caches have been accessed and the priorities of the applications accessing the file caches; the scanning speed corresponding to each LRU list is determined according to the priority of each list, the LRU lists are scanned synchronously at those speeds, and the scanned file caches are reclaimed. With this technical solution, the priority of an LRU list can be determined from how the file cache has been accessed and the priority of the accessing application, the scanning speed for the scan operation on the list is determined from that priority, and the lists are scanned synchronously at those speeds; this prevents the file caches in high-priority lists from never being reclaimed, implements a more precise file cache reclamation control method, and improves reclamation accuracy.
In an embodiment, the device further includes an LRU list generation module.
The LRU list generation module is configured to: before it is detected that a file cache reclamation event is triggered, obtain the access frequency of the file caches; determine target file caches whose access frequency is below a preset threshold, and set a priority for each target file cache according to its access frequency; and store target file caches with the same priority in the same LRU list according to a preset storage rule, using the priority of the file caches in an LRU list as the priority of the corresponding LRU list.
In an embodiment, the device further includes a priority adjustment module and an LRU list update module.
The priority adjustment module is configured to, when an access event of a file cache is detected, adjust the priority of the file cache according to the priority of the application accessing the file cache;
the LRU list update module is configured to update the LRU lists based on the priority-adjusted file cache.
In an embodiment, the priority adjustment module is further configured to:
obtain a first priority of the file cache corresponding to the access event of the file cache;
obtain a second priority of the application accessing the file cache;
when the second priority is higher than the first priority, use the second priority as the priority of the file cache.
In an embodiment, the priority adjustment module is further configured to:
obtain a first priority of the file cache corresponding to the access event of the file cache;
when at least two applications access the file cache simultaneously, obtain target priorities of the at least two applications respectively;
compare the first priority with the target priorities, and use the highest priority as the priority of the file cache.
In an embodiment, the list scanning module 430 is configured to:
match a corresponding scanning speed according to the priority of each LRU list, where, for two LRU lists with adjacent priorities, the scanning speed of the higher-priority LRU list is lower than the scanning speed of the lower-priority LRU list;
scan the LRU lists synchronously at the scanning speeds;
if a file cache is scanned, reclaim the scanned file cache.
In an embodiment, the scanning speed of the higher-priority LRU list is half the scanning speed of the lower-priority LRU list.
In an embodiment, how the file cache has been accessed includes: the frequency at which the file cache is accessed or the time at which the file cache is accessed.
In an embodiment, the list scanning module is further configured to start from the heads of the multiple LRU lists and scan the multiple LRU lists simultaneously at different scanning speeds until the tails of the multiple LRU lists are reached.
An embodiment of this application further provides a storage medium containing computer-executable instructions which, when executed by a computer processor, perform a method for reclaiming a file cache, the method including:
detecting that a file cache reclamation event is triggered;
obtaining least recently used (LRU) lists having different priorities, where the priority of an LRU list is determined according to how the file caches have been accessed and the priorities of the applications accessing the file caches;
determining, according to the priority of each LRU list, the scanning speed corresponding to that LRU list, scanning the LRU lists synchronously at the scanning speeds, and reclaiming the scanned file caches.
Storage medium: any of various types of memory devices or storage devices. The term "storage medium" is intended to include: installation media, such as Compact Disc Read-Only Memory (CD-ROM), floppy disks, or magnetic tape devices; computer system memory or random access memory, such as Dynamic Random Access Memory (DRAM), Double Data Rate Synchronous Dynamic Random Access Memory (DDR RAM), Static Random Access Memory (SRAM), Extended Data Output Random Access Memory (EDO RAM), Rambus RAM, etc.; non-volatile memory, such as flash memory or magnetic media (such as hard disks or optical storage); and registers or other similar types of memory elements. The storage medium may further include other types of memory or combinations thereof. In addition, the storage medium may be located in the first computer system in which the program is executed, or in a different, second computer system connected to the first computer system through a network such as the Internet; the second computer system may provide program instructions to the first computer for execution. The term "storage medium" may include two or more storage media that may reside in different locations, such as in different computer systems connected through a network. The storage medium may store program instructions (for example, embodied as a computer program) executable by one or more processors.
Of course, in the storage medium containing computer-executable instructions provided by the embodiments of this application, the computer-executable instructions are not limited to the file cache reclamation operations described above, and may also perform related operations in the method for reclaiming a file cache provided by any embodiment of this application.
An embodiment of this application provides a terminal having an operating system, in which the device for reclaiming a file cache provided by the embodiments of this application can be integrated. FIG. 5 is a schematic structural diagram of a terminal according to an embodiment of this application. As shown in FIG. 5, the terminal includes a memory 510 and a processor 520. The memory 510 is configured to store a computer program, the LRU lists, and the like; the processor 520 reads and executes the computer program stored in the memory 510. When executing the computer program, the processor 520 implements the following steps: detecting that a file cache reclamation event is triggered; obtaining least recently used (LRU) lists having different priorities, where the priority of an LRU list is determined according to how the file caches have been accessed and the priorities of the applications accessing the file caches; determining, according to the priority of each LRU list, the scanning speed corresponding to that LRU list, scanning the LRU lists synchronously at the scanning speeds, and reclaiming the scanned file caches.
The memory and processor listed in the above example are only some of the terminal's components, and the terminal may also include other components. Taking a smartphone as an example, a possible structure of the terminal is described. FIG. 6 is a structural block diagram of a smartphone according to an embodiment of this application. As shown in FIG. 6, the smartphone may include: a memory 601, a central processing unit (CPU) 602 (also called a processor, hereinafter CPU), a peripheral interface 603, a radio frequency (RF) circuit 605, an audio circuit 606, a speaker 611, a touch screen 612, a power management chip 608, an input/output (I/O) subsystem 609, other input/control devices 610, and an external port 604; these components communicate through one or more communication buses or signal lines 607.
It should be understood that the illustrated smartphone 600 is only one example of a terminal; the smartphone 600 may have more or fewer components than shown in the figure, two or more components may be combined, or a different component configuration may be used. The components shown in the figure may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application-specific integrated circuits.
The smartphone integrated with the device for reclaiming a file cache provided in this embodiment is described in detail below.
Memory 601: the memory 601 can be accessed by the CPU 602, the peripheral interface 603, and so on, and may include high-speed random access memory as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other volatile solid-state storage devices. The memory 601 stores a computer program, and may also store a preset file, a preset white list, and the like.
Peripheral interface 603: the peripheral interface 603 can connect the input and output peripherals of the device to the CPU 602 and the memory 601.
I/O subsystem 609: the I/O subsystem 609 can connect the input and output peripherals on the device, such as the touch screen 612 and other input/control devices 610, to the peripheral interface 603. The I/O subsystem 609 may include a display controller 6091 and one or more input controllers 6092 for controlling the other input/control devices 610. The one or more input controllers 6092 receive electrical signals from, or send electrical signals to, the other input/control devices 610, which may include physical buttons (press buttons, rocker buttons, etc.), dials, slide switches, joysticks, and click wheels. It is worth noting that an input controller 6092 can be connected to any of the following: a keyboard, an infrared port, a USB interface, or a pointing device such as a mouse.
Touch screen 612: the touch screen 612 is the input and output interface between the user terminal and the user, and displays visual output to the user; the visual output may include graphics, text, icons, video, and the like.
The display controller 6091 in the I/O subsystem 609 receives electrical signals from, or sends electrical signals to, the touch screen 612. The touch screen 612 detects contact on the touch screen, and the display controller 6091 converts the detected contact into interaction with user interface objects displayed on the touch screen 612, that is, realizes human-computer interaction; the user interface objects displayed on the touch screen 612 may be icons of running games, icons for connecting to the corresponding network, and the like. It is worth noting that the device may also include a light mouse, which is a touch-sensitive surface that does not display visual output, or an extension of the touch-sensitive surface formed by the touch screen.
RF circuit 605: mainly configured to establish communication between the mobile phone and the wireless network (that is, the network side) and to realize data reception and transmission between the mobile phone and the wireless network, for example, sending and receiving short messages and e-mail. Specifically, the RF circuit 605 receives and sends RF signals, also called electromagnetic signals: the RF circuit 605 converts electrical signals into electromagnetic signals or electromagnetic signals into electrical signals, and communicates with the communication network and other devices through the electromagnetic signals. The RF circuit 605 may include known circuits for performing these functions, including but not limited to an antenna system, an RF transceiver, one or more amplifiers, a tuner, one or more oscillators, a digital signal processor, a COder-DECoder (CODEC) chipset, a Subscriber Identity Module (SIM), and so on.
Audio circuit 606: mainly configured to receive audio data from the peripheral interface 603, convert the audio data into an electrical signal, and send the electrical signal to the speaker 611.
Speaker 611: configured to restore the voice signal received by the mobile phone from the wireless network through the RF circuit 605 to sound and play the sound to the user.
Power management chip 608: configured to supply power and perform power management for the hardware connected to the CPU 602, the I/O subsystem, and the peripheral interface.
The terminal provided by the embodiments of this application can determine the priority of an LRU list according to how the file cache has been accessed and the priority of the application accessing the file cache, determine from that priority the scanning speed at which the scan operation is performed on the LRU list, and scan the LRU lists synchronously at those speeds. This prevents the file caches in high-priority lists from never being reclaimed, realizes a more precise file cache reclamation control method, and improves reclamation accuracy.
The file cache reclamation device, storage medium, and terminal provided in the above embodiments can execute the method for reclaiming a file cache provided by any embodiment of this application, and have the corresponding function modules and beneficial effects for executing the method. For technical details not described in detail in the above embodiments, refer to the method for reclaiming a file cache provided by any embodiment of this application.

Claims (20)

  1. A method for reclaiming a file cache, comprising:
    detecting that a file cache reclamation event is triggered;
    obtaining a plurality of least recently used (LRU) lists having different priorities, wherein the priority of an LRU list is determined according to how file caches have been accessed and the priorities of the applications accessing the file caches;
    determining, according to the priorities of the plurality of LRU lists, a scanning speed corresponding to each LRU list, scanning the LRU lists synchronously at the scanning speeds, and reclaiming the scanned file caches.
  2. The method according to claim 1, before detecting that a file cache reclamation event is triggered, further comprising:
    obtaining an access frequency of file caches;
    determining target file caches whose access frequency is below a preset threshold, and setting a priority for each target file cache according to the access frequency of the target file cache;
    storing target file caches having the same priority in the same LRU list according to a preset storage rule, and using the priority of the file caches in an LRU list as the priority of the corresponding LRU list.
  3. The method according to claim 2, further comprising:
    in a case where an access event of a file cache is detected, adjusting the priority of the file cache according to the priority of the application accessing the file cache;
    updating the LRU lists based on the priority-adjusted file cache.
  4. The method according to claim 3, wherein adjusting the priority of the file cache according to the priority of the application accessing the file cache comprises:
    obtaining a first priority of the file cache corresponding to the access event of the file cache;
    obtaining a second priority of the application accessing the file cache;
    in a case where the second priority is higher than the first priority, using the second priority as the priority of the file cache.
  5. The method according to claim 3, wherein adjusting the priority of the file cache according to the priority of the application accessing the file cache comprises:
    obtaining a first priority of the file cache corresponding to the access event of the file cache;
    in a case where at least two applications access the file cache simultaneously, obtaining target priorities of the at least two applications respectively;
    comparing the first priority with the target priorities, and using the highest priority as the priority of the file cache.
  6. The method according to any one of claims 1 to 5, wherein determining, according to the priorities of the plurality of LRU lists, the scanning speed corresponding to each LRU list, scanning the LRU lists synchronously at the scanning speeds, and reclaiming the scanned file caches comprises:
    matching a corresponding scanning speed according to the priority of each LRU list, wherein, for two LRU lists with adjacent priorities, the scanning speed of the higher-priority LRU list is lower than the scanning speed of the lower-priority LRU list;
    scanning the LRU lists synchronously at the scanning speeds;
    in a case where a file cache is scanned, reclaiming the scanned file cache.
  7. The method according to claim 6, wherein the scanning speed of the higher-priority LRU list being lower than the scanning speed of the lower-priority LRU list comprises:
    the scanning speed of the higher-priority LRU list being half the scanning speed of the lower-priority LRU list.
  8. The method according to any one of claims 1 to 7, wherein how the file cache has been accessed comprises: a frequency at which the file cache is accessed or a time at which the file cache is accessed.
  9. The method according to claim 1, wherein scanning the LRU lists synchronously at the scanning speeds comprises:
    starting from the heads of the plurality of LRU lists, scanning the plurality of LRU lists simultaneously at different scanning speeds until the tails of the plurality of LRU lists are reached.
  10. A device for reclaiming a file cache, comprising:
    an event detection module, configured to detect that a file cache reclamation event is triggered;
    a list obtaining module, configured to obtain a plurality of least recently used (LRU) lists having different priorities, wherein the priority of an LRU list is determined according to how file caches have been accessed and the priorities of the applications accessing the file caches;
    a list scanning module, configured to determine, according to the priorities of the plurality of LRU lists, a scanning speed corresponding to each LRU list, scan the LRU lists synchronously at the scanning speeds, and reclaim the scanned file caches.
  11. The device according to claim 10, further comprising:
    an LRU list generation module, configured to obtain an access frequency of file caches; determine target file caches whose access frequency is below a preset threshold, and set a priority for each target file cache according to the access frequency of the target file cache; and store target file caches having the same priority in the same LRU list according to a preset storage rule, using the priority of the file caches in an LRU list as the priority of the corresponding LRU list.
  12. The device according to claim 11, further comprising:
    a priority adjustment module, configured to, in a case where an access event of a file cache is detected, adjust the priority of the file cache according to the priority of the application accessing the file cache;
    an LRU list update module, configured to update the LRU lists based on the priority-adjusted file cache.
  13. The device according to claim 12, wherein the priority adjustment module is further configured to:
    obtain a first priority of the file cache corresponding to the access event of the file cache;
    obtain a second priority of the application accessing the file cache;
    in a case where the second priority is higher than the first priority, use the second priority as the priority of the file cache.
  14. The device according to claim 12, wherein the priority adjustment module is further configured to:
    obtain a first priority of the file cache corresponding to the access event of the file cache;
    in a case where at least two applications access the file cache simultaneously, obtain target priorities of the at least two applications respectively;
    compare the first priority with the target priorities, and use the highest priority as the priority of the file cache.
  15. The device according to any one of claims 10 to 14, wherein the list scanning module is configured to:
    match a corresponding scanning speed according to the priority of each LRU list, wherein, for two LRU lists with adjacent priorities, the scanning speed of the higher-priority LRU list is lower than the scanning speed of the lower-priority LRU list;
    scan the LRU lists synchronously at the scanning speeds;
    in a case where a file cache is scanned, reclaim the scanned file cache.
  16. The device according to claim 15, wherein the list scanning module is further configured such that the scanning speed of the higher-priority LRU list is half the scanning speed of the lower-priority LRU list.
  17. The device according to any one of claims 10 to 16, wherein how the file cache has been accessed comprises: a frequency at which the file cache is accessed or a time at which the file cache is accessed.
  18. The device according to claim 10, wherein the list scanning module is further configured to:
    start from the heads of the plurality of LRU lists and scan the plurality of LRU lists simultaneously at different scanning speeds until the tails of the plurality of LRU lists are reached.
  19. A computer-readable storage medium storing a computer program which, when executed by a processor, implements the method for reclaiming a file cache according to any one of claims 1 to 7.
  20. A terminal, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the method for reclaiming a file cache according to any one of claims 1 to 9.
PCT/CN2019/093720 2018-09-26 2019-06-28 Method and device for reclaiming file cache, storage medium, and terminal WO2020062986A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811125065.3A CN110955614B (zh) 2018-09-26 2018-09-26 Method and device for reclaiming file cache, storage medium, and terminal
CN201811125065.3 2018-09-26

Publications (1)

Publication Number Publication Date
WO2020062986A1 true WO2020062986A1 (zh) 2020-04-02

Family

ID=69950249

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/093720 WO2020062986A1 (zh) 2018-09-26 2019-06-28 文件缓存的回收方法以及装置、存储介质及终端

Country Status (2)

Country Link
CN (1) CN110955614B (zh)
WO (1) WO2020062986A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111444024A (zh) * 2020-04-13 2020-07-24 Vivo Mobile Communication Co., Ltd. Request response method, electronic device, and storage medium
CN111488316A (zh) * 2020-04-12 2020-08-04 Hangzhou DPtech Technologies Co., Ltd. File cache reclamation method and device

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112947859A (zh) * 2021-02-26 2021-06-11 Lakala Payment Co., Ltd. Temporary file processing method and device, electronic device, medium, and program product

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110320720A1 (en) * 2010-06-23 2011-12-29 International Business Machines Corporation Cache Line Replacement In A Symmetric Multiprocessing Computer
CN102819586A (zh) * 2012-07-31 2012-12-12 Beijing Netentsec, Inc. Cache-based URL classification method and device
CN103034586A (zh) * 2012-11-30 2013-04-10 Ramaxel Technology (Shenzhen) Co., Ltd. Method and system for identifying upper-layer applications through a flash translation layer
CN106126434A (zh) * 2016-06-22 2016-11-16 Institute of Computing Technology, Chinese Academy of Sciences Method and device for replacing cache lines in the cache area of a central processing unit
CN106843756A (zh) * 2017-01-13 2017-06-13 Institute of Information Engineering, Chinese Academy of Sciences Memory page reclamation method and system based on page classification

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS62192834A (ja) * 1986-02-20 1987-08-24 Nec Corp LRU control method
US5625824A (en) * 1995-03-03 1997-04-29 Compaq Computer Corporation Circuit for selectively preventing a microprocessor from posting write cycles
CN103797470B (zh) * 2011-09-16 2017-02-15 NEC Corporation Storage system
CN103019962B (zh) * 2012-12-21 2016-03-30 Huawei Technologies Co., Ltd. Data cache processing method, device, and system
CN107179878B (zh) * 2016-03-11 2021-03-19 EMC IP Holding Company LLC Method and device for data storage based on application optimization
CN107885666B (zh) * 2016-09-28 2021-07-20 Huawei Technologies Co., Ltd. Memory management method and device

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110320720A1 (en) * 2010-06-23 2011-12-29 International Business Machines Corporation Cache Line Replacement In A Symmetric Multiprocessing Computer
CN102819586A (zh) * 2012-07-31 2012-12-12 Beijing Netentsec, Inc. Cache-based URL classification method and device
CN103034586A (zh) * 2012-11-30 2013-04-10 Ramaxel Technology (Shenzhen) Co., Ltd. Method and system for identifying upper-layer applications through a flash translation layer
CN106126434A (zh) * 2016-06-22 2016-11-16 Institute of Computing Technology, Chinese Academy of Sciences Method and device for replacing cache lines in the cache area of a central processing unit
CN106843756A (zh) * 2017-01-13 2017-06-13 Institute of Information Engineering, Chinese Academy of Sciences Memory page reclamation method and system based on page classification

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111488316A (zh) * 2020-04-12 2020-08-04 Hangzhou DPtech Technologies Co., Ltd. File cache reclamation method and device
CN111488316B (zh) * 2020-04-12 2023-09-22 Hangzhou DPtech Technologies Co., Ltd. File cache reclamation method and device
CN111444024A (zh) * 2020-04-13 2020-07-24 Vivo Mobile Communication Co., Ltd. Request response method, electronic device, and storage medium
CN111444024B (zh) * 2020-04-13 2024-04-12 Vivo Mobile Communication Co., Ltd. Request response method, electronic device, and storage medium

Also Published As

Publication number Publication date
CN110955614B (zh) 2022-05-03
CN110955614A (zh) 2020-04-03

Similar Documents

Publication Publication Date Title
US11397590B2 (en) Method for preloading application, storage medium, and terminal
US11442747B2 (en) Method for establishing applications-to-be preloaded prediction model based on preorder usage sequence of foreground application, storage medium, and terminal
KR102206364B1 (ko) 메모리 리클레임 방법 및 장치
WO2020062986A1 (zh) 文件缓存的回收方法以及装置、存储介质及终端
KR101999132B1 (ko) 가상 머신 환경에서 메모리 관리 방법 및 장치
EP2989536B1 (en) Management of access to a hybrid drive in power saving mode
US10416932B2 (en) Dirty data management for hybrid drives
US20200201536A1 (en) Black screen gesture detection method and device, storage medium, and mobile terminal
WO2019119984A1 (en) Method for preloading application, storage medium, and terminal device
WO2020062985A1 (zh) 块设备访问追踪方法以及装置、存储介质及终端
US11704240B2 (en) Garbage data scrubbing method, and device
US20190370009A1 (en) Intelligent swap for fatigable storage mediums
CN111274160A (zh) 数据存储方法、电子设备及介质
CN107003940B (zh) 用于在非统一存储器架构中提供改进的延迟的系统和方法
CN107111560B (zh) 用于在非统一存储器架构中提供改进的延迟的系统和方法
CN116880746A (zh) 数据处理方法、装置、电子设备及计算机可读存储介质
CN114402280A (zh) 一种屏幕参数调整方法、装置及终端设备
CN108228340B (zh) 终端控制方法及装置、终端设备及计算机可读存储介质
US11237741B2 (en) Electronic device and control method for controlling memory
CN110955486B (zh) 文件缓存效率的追踪方法、装置、存储介质及终端
CN113032290A (zh) 闪存配置方法、装置、电子设备和存储介质
CN111580739B (zh) 按键的触控区域的动态调整方法、装置及虚拟键盘
CN117648266A (zh) 一种数据缓存方法、系统、设备及计算机可读存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19866270

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19866270

Country of ref document: EP

Kind code of ref document: A1