WO2017156683A1 - Linked list-based application cache management method and device - Google Patents

Linked list-based application cache management method and device

Info

Publication number
WO2017156683A1
Authority
WO
WIPO (PCT)
Prior art keywords
linked list
node
memory
cache management
based application
Prior art date
Application number
PCT/CN2016/076296
Other languages
French (fr)
Chinese (zh)
Inventor
何锐 (He Rui)
Original Assignee
深圳创维-RGB电子有限公司 (Shenzhen Skyworth-RGB Electronic Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳创维-RGB电子有限公司 (Shenzhen Skyworth-RGB Electronic Co., Ltd.)
Priority to PCT/CN2016/076296 priority Critical patent/WO2017156683A1/en
Priority to AU2016277745A priority patent/AU2016277745B2/en
Priority to US15/414,628 priority patent/US10241927B2/en
Publication of WO2017156683A1 publication Critical patent/WO2017156683A1/en


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0866Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches for peripheral storage systems, e.g. disk cache
    • G06F12/0871Allocation or management of cache space

Definitions

  • the present invention relates to the field of storage technologies, and in particular, to a linked list based application cache management method and apparatus.
  • a cache is used to realize storage of various data to achieve high-speed data access.
  • an iOS platform that adopts the NSCache memory caching mechanism (NSCache is a convenience class introduced by iOS for caching objects) does not strictly limit the memory size occupied by cached data.
  • the iOS platform can only store new cached data first and then, if the total memory of the currently stored cached data exceeds the memory limit, delete the last-stored cached data. With this storage approach, the total capacity of the cached data may exceed the system's memory limit during the storage of new cached data, resulting in low storage speed and efficiency for the cached data.
  • the invention provides a linked list-based application cache management method and device, aiming to solve the technical problem that, on an iOS platform adopting NSCache, the total capacity of the cached data may exceed the system memory limit during the storage of new cached data, resulting in low cache data storage speed and efficiency.
  • the present invention provides a linked list based application cache management method, and the linked list based application cache management method includes the following steps:
  • the linked list-based application cache management method includes:
  • the linked list node is added to the linked list when the second memory is less than or equal to the maximum memory.
  • the linked list based application cache management method further includes:
  • the linked list-based application cache management method further includes:
  • the node corresponding to the cached data traversed is set as the head node of the linked list.
  • the data field of the linked list node includes a survival time of the linked list node, and after the step of adding the linked list node to the linked list when the first memory is less than or equal to the maximum memory
  • the linked list based application cache management method further includes:
  • the linked list node is deleted when the linked list node currently reaches the survival time.
  • the present invention further provides a linked list based application cache management apparatus, where the linked list based application cache management apparatus includes:
  • a creating module configured to: when receiving the cached data of the application, create a linked list node based on the received cached data, and acquire the memory size of the received cached data;
  • a first obtaining module configured to acquire a maximum memory of the linked list and a current occupied memory size of the linked list
  • a first calculating module configured to add the received memory size of the cached data to a currently occupied memory size of the linked list to obtain a first memory
  • a first adding module configured to add the linked list node to the linked list when the first memory is less than or equal to the maximum memory.
  • the linked list based application cache management device further comprises:
  • a second obtaining module configured to acquire a time interval between a last access time and a current time of each node in the linked list when the first memory is greater than the maximum memory
  • a first deleting module configured to delete a node with the largest time interval in the linked list
  • a second calculating module configured to add the received memory size of the cached data to a current memory size of the linked list after the node is deleted to obtain a second memory
  • a second adding module configured to add the linked list node to the linked list when the second memory is less than or equal to the maximum memory.
  • the linked list based application cache management device further comprises:
  • a third obtaining module configured to acquire, after receiving an access request for the cached data, identifier information of the cached data corresponding to the access request;
  • a traversing module configured to traverse the linked list based on the identification information to access cache data corresponding to the access request.
  • the linked list based application cache management device further comprises:
  • a setting module configured to set a node corresponding to the cached data traversed to a head node of the linked list.
  • the data field of the linked list node includes a survival time of the linked list node
  • the linked list based application cache management device further includes:
  • a determining module configured to determine, according to a creation time of the linked list node, whether the linked list node currently reaches the survival time
  • a second deleting module configured to delete the linked list node when the linked list node currently reaches the surviving time.
  • when receiving the cached data of the application, the invention creates a linked list node based on the received cached data and acquires the memory size of the received cached data; it then acquires the maximum memory of the linked list and the currently occupied memory size of the linked list;
  • the received memory size of the cached data is then added to the currently occupied memory size of the linked list to obtain a first memory; finally, when the first memory is less than or equal to the maximum memory, the linked list node is added to the linked list. Adding the received cache data to the linked list only when the first memory is less than or equal to the maximum memory prevents the cached data in the linked list from exceeding the maximum memory of the linked list during the storage of new cached data, which improves the storage speed and efficiency of the cached data.
  • FIG. 1 is a schematic flowchart diagram of a first embodiment of a method for managing application cache based on a linked list according to the present invention
  • FIG. 2 is a schematic flowchart of a second embodiment of a method for managing application cache based on a linked list according to the present invention
  • FIG. 3 is a schematic flowchart of a third embodiment of a method for managing application cache based on a linked list according to the present invention.
  • FIG. 4 is a schematic flowchart of a fourth embodiment of a method for managing application cache based on a linked list according to the present invention.
  • FIG. 5 is a schematic diagram of functional modules of a first embodiment of a linked list based application cache management apparatus according to the present invention
  • FIG. 6 is a schematic diagram of functional modules of a second embodiment of a linked list based application cache management apparatus according to the present invention.
  • FIG. 7 is a schematic diagram of functional modules of a third embodiment of a linked list based application cache management apparatus according to the present invention.
  • FIG. 8 is a schematic diagram of functional modules of a fourth embodiment of a linked list based application cache management apparatus according to the present invention.
  • FIG. 1 is a schematic flowchart diagram of a first embodiment of a method for managing application cache based on a linked list.
  • the linked list based application cache management method includes:
  • Step S110 when receiving the cached data of the application, creating a linked list node based on the received cached data, and acquiring a memory size of the received cached data;
  • the linked list may be a doubly linked list or a single linked list.
  • Creating a linked list node refers to storing the cached data in a data field of the newly created linked list node, wherein the direct successor of the newly created linked list node is the head node of the current linked list.
  • the memory size of the received cached data is obtained; for example, the memory occupied by the received cached data is 50 MB.
  • the linked list-based application cache management method in this embodiment can be applied to an iOS platform, such as an iOS platform used for developing various mobile terminal applications.
  • Step S120 Obtain a maximum memory of the linked list and a current occupied memory size of the linked list.
  • the maximum memory of the linked list refers to the maximum capacity of the linked list set when the linked list is established, and the currently occupied memory size of the linked list refers to the memory size of all cached data currently stored in the linked list.
  • Step S130 adding the received memory size of the cached data to the currently occupied memory size of the linked list to obtain a first memory
  • Step S140 adding the linked list node to the linked list when the first memory is less than or equal to the maximum memory.
  • after being added, the linked list node serves as the head node of the linked list.
  • in this embodiment, when the cached data of the application is received, a linked list node is created based on the received cached data, and the memory size of the received cached data is acquired; the maximum memory of the linked list and the currently occupied memory size of the linked list are then obtained;
  • the received memory size of the cached data is then added to the currently occupied memory size of the linked list to obtain a first memory; finally, when the first memory is less than or equal to the maximum memory, the linked list node is added to the linked list. The received cache data is thus added to the linked list only when the first memory is less than or equal to the maximum memory, which prevents the cached data in the linked list from exceeding the maximum memory of the linked list during the storage of new cached data and improves the storage speed and efficiency of the cached data.
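The insertion flow of steps S110–S140 can be sketched as follows. This is an illustrative Python sketch, not the patented iOS implementation; every name in it (`LinkedListCache`, `put`, the 100-unit limit) is hypothetical, and an `OrderedDict` stands in for the doubly linked list, with the most recently used entry at the head:

```python
from collections import OrderedDict

class LinkedListCache:
    """Illustrative stand-in for the patent's linked list: insertion order
    models node order, and the first entry plays the role of the head node."""

    def __init__(self, max_memory):
        self.max_memory = max_memory   # maximum memory of the linked list
        self.used_memory = 0           # currently occupied memory size
        self.nodes = OrderedDict()     # key -> (data, size); first item = head

    def put(self, key, data, size):
        # Step S130: first memory = received data size + currently occupied size
        first_memory = self.used_memory + size
        # Step S140: add the node only when the limit would not be exceeded
        if first_memory <= self.max_memory:
            self.nodes[key] = (data, size)
            self.nodes.move_to_end(key, last=False)  # new node becomes the head
            self.used_memory = first_memory
            return True
        return False

cache = LinkedListCache(max_memory=100)
print(cache.put("a", b"payload", 50))   # True: 50 <= 100
print(cache.put("b", b"payload", 60))   # False: 50 + 60 = 110 > 100, rejected up front
```

The key contrast with the NSCache behavior criticized above is that the size check happens before insertion, so the cache never transiently exceeds its limit.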
  • the linked list-based application cache management method further includes:
  • Step S150 When the first memory is greater than the maximum memory, obtain a time interval between a last access time and a current time of each node in the linked list.
  • the last access time of each node in the linked list refers to the access time of the last access of each node in the linked list.
  • Step S160 deleting a node with the largest time interval in the linked list
  • Deleting the node with the largest time interval in the linked list means deleting the node whose last access time is furthest from the current time, that is, the node in the linked list that has not been accessed for the longest time.
  • Step S170 adding the received memory size of the cached data to the currently occupied memory size of the linked list after the node is deleted to obtain a second memory;
  • Step S180 adding the linked list node to the linked list when the second memory is less than or equal to the maximum memory.
  • the linked list node is added to the linked list.
  • the linked list node is used as the head node of the linked list.
  • in this embodiment, when the first memory is greater than the maximum memory, the time interval between the last access time and the current time of each node in the linked list is obtained, and the node with the largest time interval in the linked list is deleted;
  • the received memory size of the cached data is then added to the currently occupied memory size of the linked list after the node is deleted to obtain a second memory; finally, when the second memory is less than or equal to the maximum memory, the linked list node is added to the linked list. When the first memory is greater than the maximum memory, the node with the largest time interval is deleted first to ensure that the second memory is less than or equal to the maximum memory before the linked list node is added, which further prevents the cached data in the linked list from exceeding the maximum memory of the linked list during the storage of new cached data and improves the storage speed and efficiency of the cached data.
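The eviction path of steps S150–S180 can be sketched in Python; this is a hedged illustration (all names such as `evict_and_insert` are hypothetical, and a plain dict stands in for the linked list), and it performs a single eviction per call as the steps describe:

```python
def evict_and_insert(nodes, used, max_memory, key, size, now):
    """nodes: key -> {"size": ..., "last_access": ...}. Returns the new used
    memory, or None when the data still does not fit after one eviction."""
    if used + size <= max_memory:          # first memory within the limit
        nodes[key] = {"size": size, "last_access": now}
        return used + size
    # Steps S150/S160: delete the node with the largest interval between
    # its last access time and the current time.
    victim = max(nodes, key=lambda k: now - nodes[k]["last_access"])
    used -= nodes.pop(victim)["size"]
    # Steps S170/S180: recompute the "second memory" and insert if it fits.
    if used + size <= max_memory:
        nodes[key] = {"size": size, "last_access": now}
        return used + size
    return None

nodes = {"old": {"size": 60, "last_access": 10.0},
         "new": {"size": 30, "last_access": 95.0}}
print(evict_and_insert(nodes, 90, 100, "c", 40, now=100.0))  # 70: "old" evicted
print(sorted(nodes))   # ['c', 'new'] — the least recently used node is gone
```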
  • the linked list based application cache management method further includes:
  • Step S190 Acquire, after receiving an access request for the cached data, identifier information of the cached data corresponding to the access request;
  • when the cached data in the linked list needs to be called, an access request for the cached data is generated, and the access request carries the identification information of the cached data;
  • upon receiving the access request, the access request is parsed to obtain the identification information of the cached data corresponding to the access request.
  • Step S200 traversing the linked list based on the identifier information to access cache data corresponding to the access request.
  • the linked list is traversed based on the identifier information to access the cached data corresponding to the access request.
  • when an iOS platform adopting NSCache accesses cached data, it needs to match the identification information against the key of each piece of cached data;
  • when the keys of the cached data are similar to one another, that is, when there are a large number of similar keys, the system consumes a large amount of time on key matching;
  • this key matching results in lower read performance for the cached data, that is, the read speed and efficiency of the cached data are lower.
  • the linked list-based application cache management method further includes: setting a node corresponding to the traversed cache data as a head node of the linked list.
  • in this embodiment, when an access request for cached data is received, the identification information of the cached data corresponding to the access request is obtained; the linked list is then traversed based on the identification information to access the cached data corresponding to the access request. Accessing cached data by traversing the linked list according to its identification information improves the speed and efficiency of reading the cached data.
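Steps S190–S200, together with the head-node promotion of the third embodiment, can be sketched with an explicit singly linked list. The class and method names (`ListNode`, `SinglyLinkedCache`, `access`) are hypothetical, chosen only to illustrate the traversal-and-promote pattern:

```python
class ListNode:
    def __init__(self, ident, data):
        self.ident, self.data, self.next = ident, data, None

class SinglyLinkedCache:
    def __init__(self):
        self.head = None

    def push_front(self, ident, data):
        node = ListNode(ident, data)
        node.next, self.head = self.head, node

    def access(self, ident):
        prev, cur = None, self.head
        while cur is not None:             # traverse based on the identifier
            if cur.ident == ident:
                if prev is not None:       # unlink the hit and promote it
                    prev.next = cur.next   # to the head node of the list
                    cur.next, self.head = self.head, cur
                return cur.data
            prev, cur = cur, cur.next
        return None                        # identifier not present

cache = SinglyLinkedCache()
cache.push_front("a", 1)
cache.push_front("b", 2)
print(cache.access("a"))    # 1
print(cache.head.ident)     # "a" — the accessed node is now the head
```

Promoting hits to the head keeps recently used nodes near the front, so both subsequent lookups and the largest-interval eviction of the second embodiment become cheap: the least recently used node drifts toward the tail.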
  • a fourth embodiment of the linked list based application cache management method of the present invention is proposed based on the first embodiment.
  • the data field of the linked list node includes the survival time of the linked list node.
  • the linked list-based application cache management method further includes:
  • Step S210 determining, according to the creation time of the linked list node, whether the linked list node currently reaches the survival time
  • the data field of the created linked list node includes the survival time of the linked list node; after the linked list node is added to the linked list, timing is started to determine whether the linked list node has currently reached its survival time.
  • Step S220 deleting the linked list node when the linked list node currently reaches the survival time.
  • the linked list node is deleted according to its survival time, that is, timed storage of the cached data is realized, which further improves the access efficiency of the cached data.
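The survival-time check of steps S210–S220 can be sketched as a purge pass over the nodes; this is an assumed illustration (the helper `purge_expired` and the dict layout are hypothetical), comparing each node's creation time plus its TTL against the current time:

```python
def purge_expired(nodes, now):
    """nodes: ident -> {"created": ..., "ttl": ...}. Step S210: a node has
    reached its survival time once now - created >= ttl. Step S220: delete it.
    Returns the identifiers that were removed."""
    expired = [k for k, n in nodes.items() if now - n["created"] >= n["ttl"]]
    for k in expired:
        del nodes[k]
    return expired

nodes = {"fresh":   {"created": 95.0, "ttl": 30.0},
         "expired": {"created": 10.0, "ttl": 30.0}}
print(purge_expired(nodes, now=100.0))   # ['expired']
print(list(nodes))                       # ['fresh']
```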
  • FIG. 5 is a schematic diagram of functional modules of a first embodiment of a linked list based application cache management apparatus according to the present invention.
  • the linked list based application cache management device includes:
  • the creating module 110 is configured to: when receiving the cached data of the application, create a linked list node based on the received cached data, and obtain a memory size of the received cached data,
  • the linked list may be a doubly linked list or a single linked list.
  • Creating a linked list node refers to storing the cached data in a data field of the newly created linked list node, wherein the direct successor of the newly created linked list node is the head node of the current linked list.
  • the memory size of the received cached data is obtained; for example, the memory occupied by the received cached data is 50 MB.
  • the linked list-based application cache management apparatus in this embodiment can be applied to an iOS platform, such as an iOS platform used for developing various mobile terminal applications.
  • the first obtaining module 120 is configured to acquire a maximum memory of the linked list and a current occupied memory size of the linked list.
  • the maximum memory of the linked list refers to the maximum capacity of the linked list set when the linked list is established, and the currently occupied memory size of the linked list refers to the memory size of all cached data currently stored in the linked list.
  • the first calculating module 130 is configured to add the received memory size of the cached data to a currently occupied memory size of the linked list to obtain a first memory;
  • the first adding module 140 is configured to add the linked list node to the linked list when the first memory is less than or equal to the maximum memory.
  • the first adding module 140 adds the linked list node to the linked list.
  • the linked list node is used as the head node of the linked list.
  • in this embodiment, when the cached data of the application is received, the creating module 110 creates a linked list node based on the received cached data and acquires the memory size of the received cached data; the first obtaining module 120 then acquires the maximum memory of the linked list and the currently occupied memory size of the linked list; the first calculating module 130 then adds the received memory size of the cached data to the currently occupied memory size of the linked list to obtain a first memory;
  • finally, when the first memory is less than or equal to the maximum memory, the first adding module 140 adds the linked list node to the linked list. The received cached data is thus added to the linked list only when the first memory is less than or equal to the maximum memory, which prevents the cached data in the linked list from exceeding the maximum memory of the linked list during the storage of new cached data and improves the storage speed and efficiency of the cached data.
  • the linked list based application cache management apparatus further includes:
  • the second obtaining module 150 is configured to acquire, when the first memory is greater than the maximum memory, a time interval between a last access time and a current time of each node in the linked list;
  • the last access time of each node in the linked list refers to the access time of the last access of each node in the linked list.
  • a first deleting module 160 configured to delete a node with the largest time interval in the linked list
  • the first deleting module 160 deletes the node with the largest time interval in the linked list, that is, the node whose last access time is furthest from the current time, namely the node in the linked list that has not been accessed for the longest time.
  • the second calculating module 170 is configured to add the received memory size of the cached data to the currently occupied memory size of the linked list after the node is deleted to obtain a second memory;
  • the second adding module 180 is configured to add the linked list node to the linked list when the second memory is less than or equal to the maximum memory.
  • the second adding module 180 adds the linked list node to the linked list.
  • the linked list node is used as the head node of the linked list.
  • in this embodiment, when the first memory is greater than the maximum memory, the second obtaining module 150 acquires the time interval between the last access time and the current time of each node in the linked list; the first deleting module 160 then deletes the node with the largest time interval in the linked list; the second calculating module 170 then adds the received memory size of the cached data to the currently occupied memory size of the linked list after the node is deleted to obtain a second memory;
  • finally, when the second memory is less than or equal to the maximum memory, the second adding module 180 adds the linked list node to the linked list. When the first memory is greater than the maximum memory, the node with the largest time interval is deleted first to ensure that the second memory is less than or equal to the maximum memory before the linked list node is added, which further prevents the cached data in the linked list from exceeding the maximum memory of the linked list during the storage of new cached data and improves the storage speed and efficiency of the cached data.
  • the linked list based application cache management apparatus further includes:
  • the third obtaining module 190 is configured to acquire, after receiving the access request of the cached data, the identifier information of the cached data corresponding to the access request;
  • when the cached data in the linked list needs to be called, an access request for the cached data is generated, and the access request carries the identification information of the cached data;
  • upon receiving the access request, the third obtaining module 190 parses the access request to obtain the identification information of the cached data corresponding to the access request.
  • the traversing module 200 is configured to traverse the linked list based on the identification information to access cache data corresponding to the access request.
  • the linked list is traversed based on the identifier information to access the cached data corresponding to the access request.
  • when an iOS platform adopting NSCache accesses cached data, it needs to match the identification information against the key of each piece of cached data;
  • when the keys of the cached data are similar to one another, that is, when there are a large number of similar keys, the system consumes a large amount of time on key matching;
  • this key matching results in lower read performance for the cached data, that is, the read speed and efficiency of the cached data are lower.
  • the linked list-based application cache management apparatus further includes: a setting module, configured to set a node corresponding to the cached data traversed to a head node of the linked list.
  • the setting module sets the node corresponding to the traversed cached data as the head node of the linked list, so that when new cached data is subsequently received and the first memory is greater than the maximum memory, the tail node of the linked list is deleted; the received memory size of the cached data is added to the currently occupied memory size of the linked list after the node is deleted to obtain a second memory; and when the second memory is less than or equal to the maximum memory, the linked list node is added to the linked list.
  • this shortens the process of storing cached data when the first memory is greater than the maximum memory and improves the storage speed and efficiency of the cached data.
  • in this embodiment, when an access request for cached data is received, the third obtaining module 190 acquires the identification information of the cached data corresponding to the access request; the traversing module 200 then traverses the linked list based on the identification information to access the cached data corresponding to the access request. Accessing cached data by traversing the linked list according to its identification information improves the speed and efficiency of reading the cached data.
  • a fourth embodiment of the linked list-based application cache management apparatus of the present invention is proposed based on the first embodiment.
  • the data field of the linked list node includes the survival time of the linked list node.
  • the linked list based application cache management device further includes:
  • the determining module 210 is configured to determine, according to the creation time of the linked list node, whether the linked list node currently reaches the survival time;
  • the data field of the created linked list node includes the survival time of the linked list node; after the linked list node is added to the linked list, timing is started to determine whether the linked list node has currently reached its survival time.
  • the second deleting module 220 is configured to delete the linked list node when the linked list node currently reaches the survival time.
  • in this embodiment, the determining module 210 determines, according to the creation time of the linked list node, whether the linked list node has currently reached its survival time; when it has, the second deleting module 220 deletes the linked list node. Deleting linked list nodes according to their survival time realizes timed storage of the cached data and further improves the access efficiency of the cached data.
  • the methods of the foregoing embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present invention, or the part of it that contributes over the prior art, may be embodied in the form of a software product stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc) that includes a number of instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the methods described in the various embodiments of the present invention.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The present invention discloses a linked list-based application cache management method, comprising: when the cache data of an application are received, a linked list node is created on the basis of the received cache data, and the memory size of the received cache data is obtained; the maximum memory of the linked list and the size of the currently occupied memory of the linked list are obtained; the memory size of the received cache data and the size of the currently occupied memory of the linked list are added to obtain a first memory; and when the first memory is smaller than or equal to the maximum memory, the linked list node is added into the linked list. The present invention further discloses a linked list-based application cache management device. The present invention adds the received cache data into the linked list only when the first memory is smaller than or equal to the maximum memory, thereby preventing the cache data in the linked list from exceeding the maximum memory of the linked list during a new cache data storage process, and improving the storage speed and efficiency of the cache data.

Description

Linked list-based application cache management method and device
Technical field
The present invention relates to the field of storage technologies, and in particular, to a linked list-based application cache management method and apparatus.
Background art
With the rapid development of mobile communication technology, the applications on mobile terminals are becoming more and more abundant. In the process of developing an application with iOS, a cache is used to store various data so as to achieve high-speed data access.
At present, an iOS platform that adopts the NSCache memory caching mechanism (NSCache is a convenience class introduced by iOS for caching objects) does not strictly limit the memory size occupied by cached data. The iOS platform can only store new cached data first and then, if the total memory of the currently stored cached data exceeds the memory limit, delete the last-stored cached data. With this storage approach, the total capacity of the cached data may exceed the system's memory limit during the storage of new cached data, resulting in low storage speed and efficiency for the cached data.
Summary of the invention
The present invention provides a linked list-based application cache management method and device, aiming to solve the technical problem that, on an iOS platform adopting NSCache, the total capacity of the cached data may exceed the system memory limit during the storage of new cached data, resulting in low cache data storage speed and efficiency.
To achieve the above objective, the present invention provides a linked list-based application cache management method, which includes the following steps:
when receiving the cached data of an application, creating a linked list node based on the received cached data, and acquiring the memory size of the received cached data;
acquiring the maximum memory of the linked list and the currently occupied memory size of the linked list;
adding the received memory size of the cached data to the currently occupied memory size of the linked list to obtain a first memory; and
adding the linked list node to the linked list when the first memory is less than or equal to the maximum memory.
Preferably, after the step of adding the memory size of the received cached data to the memory size currently occupied by the linked list to obtain the first memory, the linked-list-based application cache management method comprises:
when the first memory is greater than the maximum memory, obtaining, for each node in the linked list, the time interval between the node's last access time and the current time;
deleting the node with the largest time interval from the linked list;
adding the memory size of the received cached data to the memory size occupied by the linked list after the node is deleted, to obtain a second memory;
when the second memory is less than or equal to the maximum memory, adding the linked-list node to the linked list.
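The eviction branch above can be sketched as a scan for the node whose last access lies furthest in the past. A hedged Python illustration (the `last_access` field and the function name are assumptions; the claim recites a single deletion, and a caller would repeat it if one deletion does not free enough memory):

```python
import time

class Node:
    def __init__(self, key, size):
        self.key, self.size = key, size
        self.last_access = time.time()   # updated whenever the node is read
        self.prev = self.next = None

def evict_least_recently_used(head, current_memory):
    """Remove the node whose last access lies furthest in the past.

    Returns the (possibly new) head and the updated occupied memory."""
    now = time.time()
    victim, largest_gap, node = None, -1.0, head
    while node is not None:              # examine every node's time interval
        gap = now - node.last_access
        if gap > largest_gap:
            victim, largest_gap = node, gap
        node = node.next
    if victim is None:
        return head, current_memory      # empty list: nothing to delete
    if victim.prev:
        victim.prev.next = victim.next
    else:
        head = victim.next               # the victim was the head node
    if victim.next:
        victim.next.prev = victim.prev
    return head, current_memory - victim.size
```

This is the classic least-recently-used policy: the entry that has gone unaccessed the longest is sacrificed to make room for the new one.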
Preferably, after the step of adding the linked-list node to the linked list when the first memory is less than or equal to the maximum memory, the linked-list-based application cache management method further comprises:
when an access request for cached data is received, obtaining the identification information of the cached data corresponding to the access request;
traversing the linked list based on the identification information to access the cached data corresponding to the access request.
Preferably, after the step of traversing the linked list based on the identification information to access the cached data corresponding to the access request, the linked-list-based application cache management method further comprises:
setting the node corresponding to the cached data found by the traversal as the head node of the linked list.
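The lookup and the move-to-head refinement above can be sketched together (illustrative Python; the function name and its return convention are ours):

```python
class Node:
    def __init__(self, key):
        self.key = key               # identification information
        self.prev = self.next = None

def access(head, key):
    """Traverse by identification info; on a hit, make the node the head.

    Returns the (possibly new) head and the matching node, or None on a miss."""
    node = head
    while node is not None and node.key != key:
        node = node.next             # traverse the linked list
    if node is None or node is head:
        return head, node            # miss, or already the head node
    node.prev.next = node.next       # unlink from the current position
    if node.next is not None:
        node.next.prev = node.prev
    node.prev, node.next = None, head
    head.prev = node                 # re-insert as the head node
    return node, node
```

Moving each hit to the head keeps recently used entries near the front, which both shortens typical traversals and leaves the least recently used entry at the tail.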
Preferably, the data field of the linked-list node contains the survival time of the linked-list node, and after the step of adding the linked-list node to the linked list when the first memory is less than or equal to the maximum memory, the linked-list-based application cache management method further comprises:
determining, based on the creation time of the linked-list node, whether the linked-list node has currently reached its survival time;
when the linked-list node has currently reached its survival time, deleting the linked-list node.
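The survival-time check might be sketched as follows (Python illustration; the field names `created` and `ttl` and the sweep function are assumptions, not prescribed by the disclosure):

```python
import time

class Node:
    def __init__(self, key, ttl):
        self.key = key
        self.created = time.time()   # creation time of the node
        self.ttl = ttl               # survival time stored in the data field
        self.prev = self.next = None

def expired(node, now=None):
    """True once the node has reached its survival time."""
    now = time.time() if now is None else now
    return now - node.created >= node.ttl

def purge_expired(head):
    """Delete every node that has reached its survival time; return new head."""
    node = head
    while node is not None:
        nxt = node.next              # remember the successor before unlinking
        if expired(node):
            if node.prev:
                node.prev.next = node.next
            else:
                head = node.next     # the expired node was the head
            if node.next:
                node.next.prev = node.prev
        node = nxt
    return head
```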
In addition, to achieve the above objective, the present invention further provides a linked-list-based application cache management device, the device comprising:
a creation module, configured to, when cached data of an application is received, create a linked-list node based on the received cached data, and obtain the memory size of the received cached data;
a first obtaining module, configured to obtain the maximum memory of the linked list and the memory size currently occupied by the linked list;
a first calculation module, configured to add the memory size of the received cached data to the memory size currently occupied by the linked list to obtain a first memory;
a first adding module, configured to add the linked-list node to the linked list when the first memory is less than or equal to the maximum memory.
Preferably, the linked-list-based application cache management device further comprises:
a second obtaining module, configured to, when the first memory is greater than the maximum memory, obtain, for each node in the linked list, the time interval between the node's last access time and the current time;
a first deletion module, configured to delete the node with the largest time interval from the linked list;
a second calculation module, configured to add the memory size of the received cached data to the memory size occupied by the linked list after the node is deleted, to obtain a second memory;
a second adding module, configured to add the linked-list node to the linked list when the second memory is less than or equal to the maximum memory.
Preferably, the linked-list-based application cache management device further comprises:
a third obtaining module, configured to, when an access request for cached data is received, obtain the identification information of the cached data corresponding to the access request;
a traversal module, configured to traverse the linked list based on the identification information to access the cached data corresponding to the access request.
Preferably, the linked-list-based application cache management device further comprises:
a setting module, configured to set the node corresponding to the cached data found by the traversal as the head node of the linked list.
Preferably, the data field of the linked-list node contains the survival time of the linked-list node, and the linked-list-based application cache management device further comprises:
a determination module, configured to determine, based on the creation time of the linked-list node, whether the linked-list node has currently reached its survival time;
a second deletion module, configured to delete the linked-list node when the linked-list node has currently reached its survival time.
In the present invention, when cached data of an application is received, a linked-list node is created based on the received cached data and the memory size of the received cached data is obtained; the maximum memory of the linked list and the memory size currently occupied by the linked list are then obtained; the memory size of the received cached data is then added to the memory size currently occupied by the linked list to obtain a first memory; finally, when the first memory is less than or equal to the maximum memory, the linked-list node is added to the linked list. The received cached data is thus added to the linked list only when the first memory is less than or equal to the maximum memory, which prevents the cached data in the linked list from exceeding the linked list's maximum memory while new cached data is being stored, and improves the storage speed and efficiency of the cached data.
Brief Description of the Drawings
FIG. 1 is a schematic flowchart of a first embodiment of the linked-list-based application cache management method according to the present invention;
FIG. 2 is a schematic flowchart of a second embodiment of the linked-list-based application cache management method according to the present invention;
FIG. 3 is a schematic flowchart of a third embodiment of the linked-list-based application cache management method according to the present invention;
FIG. 4 is a schematic flowchart of a fourth embodiment of the linked-list-based application cache management method according to the present invention;
FIG. 5 is a schematic diagram of the functional modules of a first embodiment of the linked-list-based application cache management device according to the present invention;
FIG. 6 is a schematic diagram of the functional modules of a second embodiment of the linked-list-based application cache management device according to the present invention;
FIG. 7 is a schematic diagram of the functional modules of a third embodiment of the linked-list-based application cache management device according to the present invention;
FIG. 8 is a schematic diagram of the functional modules of a fourth embodiment of the linked-list-based application cache management device according to the present invention.
The realization of the objectives, functional features, and advantages of the present invention will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit it.
The present invention provides a linked-list-based application cache management method. Referring to FIG. 1, FIG. 1 is a schematic flowchart of a first embodiment of the linked-list-based application cache management method according to the present invention.
In this embodiment, the linked-list-based application cache management method comprises:
Step S110: when cached data of an application is received, create a linked-list node based on the received cached data, and obtain the memory size of the received cached data.
In this embodiment, the linked list may be a doubly linked list or a singly linked list. Creating a linked-list node means storing the cached data in the data field of a newly created node, where the direct successor of the newly created node is the current head node of the linked list. At the same time, the memory size of the received cached data is obtained, for example, 50 MB. The linked-list-based application cache management method of this embodiment may be applied to an iOS platform, for example an iOS platform used to develop various mobile-terminal applications.
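As a concrete illustration of such a node, a doubly linked node whose data field carries the cached data and the bookkeeping used by the later embodiments might look as follows (all field names are illustrative; the disclosure does not prescribe them):

```python
import time

class CacheNode:
    """Doubly linked node; the data field stores one piece of cached data."""
    def __init__(self, key, value, size, ttl=None):
        self.key = key                   # identification information
        self.value = value               # the cached data itself
        self.size = size                 # memory size of the cached data
        self.ttl = ttl                   # optional survival time
        self.created = time.time()       # creation time of the node
        self.last_access = self.created  # last access time
        self.prev = None                 # predecessor in the doubly linked list
        self.next = None                 # successor in the doubly linked list

def push_front(head, node):
    """Insert `node` so that its direct successor is the current head."""
    node.next = head
    if head is not None:
        head.prev = node
    return node   # the new node becomes the head
```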
Step S120: obtain the maximum memory of the linked list and the memory size currently occupied by the linked list.
The maximum memory of the linked list refers to the maximum capacity set when the linked list is created; the memory size currently occupied by the linked list refers to the total memory size of all cached data currently stored in the linked list.
Step S130: add the memory size of the received cached data to the memory size currently occupied by the linked list to obtain a first memory.
Step S140: when the first memory is less than or equal to the maximum memory, add the linked-list node to the linked list.
When the first memory is less than or equal to the maximum memory, the cached data currently stored in the linked list has not exceeded its maximum capacity, and the linked-list node is added to the linked list; in this embodiment, this specifically means making the new node the head node of the linked list.
In this embodiment, when cached data of an application is received, a linked-list node is created based on the received cached data and the memory size of the received cached data is obtained; the maximum memory of the linked list and the memory size currently occupied by the linked list are then obtained; the memory size of the received cached data is then added to the memory size currently occupied by the linked list to obtain a first memory; finally, when the first memory is less than or equal to the maximum memory, the linked-list node is added to the linked list. The received cached data is thus added to the linked list only when the first memory is less than or equal to the maximum memory, which prevents the cached data in the linked list from exceeding the linked list's maximum memory while new cached data is being stored, and improves the storage speed and efficiency of the cached data.
A second embodiment of the linked-list-based application cache management method of the present invention is proposed based on the first embodiment. Referring to FIG. 2, in this embodiment, after step S130, the method further comprises:
Step S150: when the first memory is greater than the maximum memory, obtain, for each node in the linked list, the time interval between the node's last access time and the current time.
The last access time of a node in the linked list refers to the time at which that node was most recently accessed.
Step S160: delete the node with the largest time interval from the linked list.
Deleting the node with the largest time interval means deleting the node whose last access lies furthest from the current time, that is, the node in the linked list that has gone unaccessed the longest.
Step S170: add the memory size of the received cached data to the memory size occupied by the linked list after the node is deleted, to obtain a second memory.
Step S180: when the second memory is less than or equal to the maximum memory, add the linked-list node to the linked list.
After the node with the largest time interval has been deleted, when the second memory is less than or equal to the maximum memory, that is, when the sum of the memory size currently occupied by the linked list and the memory size of the received cached data does not exceed the maximum memory, the linked-list node is added to the linked list; in this embodiment, this specifically means making the new node the head node of the linked list.
In this embodiment, when the first memory is greater than the maximum memory, the time interval between each node's last access time and the current time is obtained; the node with the largest time interval is then deleted from the linked list; the memory size of the received cached data is then added to the memory size occupied by the linked list after the deletion to obtain a second memory; finally, when the second memory is less than or equal to the maximum memory, the linked-list node is added to the linked list. Thus, when the first memory is greater than the maximum memory, the node with the largest time interval is deleted first so that the second memory does not exceed the maximum memory, and the linked-list node is then added. This further prevents the cached data in the linked list from exceeding the linked list's maximum memory while new cached data is being stored, and improves the storage speed and efficiency of the cached data.
A third embodiment of the linked-list-based application cache management method of the present invention is proposed based on the first embodiment. Referring to FIG. 3, in this embodiment, after step S140, the method further comprises:
Step S190: when an access request for cached data is received, obtain the identification information of the cached data corresponding to the access request.
When cached data in the linked list needs to be used while developing various mobile-terminal applications on the iOS platform, an access request for the cached data is generated, the access request carrying the identification information of the cached data. When the access request is received, it is parsed to obtain the identification information of the corresponding cached data.
Step S200: traverse the linked list based on the identification information to access the cached data corresponding to the access request.
After the identification information of the cached data is obtained, the linked list is traversed based on it to access the cached data corresponding to the access request. By contrast, when an existing iOS platform using NSCache accesses cached data, it must match the identification information against the keys of the cached entries one by one; when the keys of the cached entries are similar, i.e. when there are many similar keys, the system spends a large amount of time on key matching, so the read performance of the cached data is poor, that is, cached data is read slowly and inefficiently.
In other embodiments, after step S200, the linked-list-based application cache management method further comprises: setting the node corresponding to the cached data found by the traversal as the head node of the linked list.
By setting the node corresponding to the accessed cached data as the head node of the linked list, when new cached data is subsequently received and the first memory is greater than the maximum memory, the tail node of the linked list can simply be deleted; the memory size of the received cached data is then added to the memory size occupied by the linked list after the node is deleted, to obtain a second memory; and when the second memory is less than or equal to the maximum memory, the linked-list node is added to the linked list. This shortens the procedure for storing cached data when the first memory is greater than the maximum memory, and improves the storage speed and efficiency of the cached data.
In this embodiment, when an access request for cached data is received, the identification information of the cached data corresponding to the access request is obtained, and the linked list is then traversed based on that identification information to access the corresponding cached data. Accessing cached data by traversing the linked list according to the data's identification information improves the speed and efficiency of reading the cached data.
A fourth embodiment of the linked-list-based application cache management method of the present invention is proposed based on the first embodiment. Referring to FIG. 4, in this embodiment, the data field of the linked-list node contains the survival time of the linked-list node, and after step S140, the method further comprises:
Step S210: determine, based on the creation time of the linked-list node, whether the linked-list node has currently reached its survival time.
In this embodiment, when the cached data is stored, the data field of the created linked-list node contains the node's survival time; timing starts once the node has been added to the linked list, so as to determine whether the node has currently reached its survival time.
Step S220: when the linked-list node has currently reached its survival time, delete the linked-list node.
In this embodiment, whether the linked-list node has currently reached its survival time is determined based on the node's creation time, and the node is deleted once it has. Nodes are thus deleted according to their survival time, i.e. timed storage of cached data is achieved, which further improves the access efficiency of the cached data.
The present invention further provides a linked-list-based application cache management device. Referring to FIG. 5, FIG. 5 is a schematic diagram of the functional modules of a first embodiment of the linked-list-based application cache management device according to the present invention.
In this embodiment, the linked-list-based application cache management device comprises:
The creation module 110 is configured to, when cached data of an application is received, create a linked-list node based on the received cached data, and obtain the memory size of the received cached data.
In this embodiment, the linked list may be a doubly linked list or a singly linked list. Creating a linked-list node means storing the cached data in the data field of a newly created node, where the direct successor of the newly created node is the current head node of the linked list. At the same time, the memory size of the received cached data is obtained, for example, 50 MB. The linked-list-based application cache management of this embodiment may be applied to an iOS platform, for example an iOS platform used to develop various mobile-terminal applications.
The first obtaining module 120 is configured to obtain the maximum memory of the linked list and the memory size currently occupied by the linked list.
The maximum memory of the linked list refers to the maximum capacity set when the linked list is created; the memory size currently occupied by the linked list refers to the total memory size of all cached data currently stored in the linked list.
The first calculation module 130 is configured to add the memory size of the received cached data to the memory size currently occupied by the linked list to obtain a first memory.
The first adding module 140 is configured to add the linked-list node to the linked list when the first memory is less than or equal to the maximum memory.
When the first memory is less than or equal to the maximum memory, the cached data currently stored in the linked list has not exceeded its maximum capacity, and the first adding module 140 adds the linked-list node to the linked list; in this embodiment, this specifically means making the new node the head node of the linked list.
In this embodiment, when cached data of an application is received, the creation module 110 creates a linked-list node based on the received cached data and obtains the memory size of the received cached data; the first obtaining module 120 then obtains the maximum memory of the linked list and the memory size currently occupied by the linked list; the first calculation module 130 then adds the memory size of the received cached data to the memory size currently occupied by the linked list to obtain a first memory; finally, when the first memory is less than or equal to the maximum memory, the first adding module 140 adds the linked-list node to the linked list. The received cached data is thus added to the linked list only when the first memory is less than or equal to the maximum memory, which prevents the cached data in the linked list from exceeding the linked list's maximum memory while new cached data is being stored, and improves the storage speed and efficiency of the cached data.
A second embodiment of the linked-list-based application cache management device of the present invention is proposed based on the first embodiment. Referring to FIG. 6, in this embodiment, the device further comprises:
The second obtaining module 150 is configured to, when the first memory is greater than the maximum memory, obtain, for each node in the linked list, the time interval between the node's last access time and the current time.
The last access time of a node in the linked list refers to the time at which that node was most recently accessed.
The first deletion module 160 is configured to delete the node with the largest time interval from the linked list.
The first deletion module 160 deleting the node with the largest time interval means deleting the node whose last access lies furthest from the current time, that is, the node in the linked list that has gone unaccessed the longest.
The second calculation module 170 is configured to add the memory size of the received cached data to the memory size occupied by the linked list after the node is deleted, to obtain a second memory.
The second adding module 180 is configured to add the linked-list node to the linked list when the second memory is less than or equal to the maximum memory.
After the node with the largest time interval has been deleted, when the second memory is less than or equal to the maximum memory, that is, when the sum of the memory size currently occupied by the linked list and the memory size of the received cached data does not exceed the maximum memory, the second adding module 180 adds the linked-list node to the linked list; in this embodiment, this specifically means making the new node the head node of the linked list.
In this embodiment, when the first memory is greater than the maximum memory, the second obtaining module 150 obtains the time interval between each node's last access time and the current time; the first deletion module 160 then deletes the node with the largest time interval from the linked list; the second calculation module 170 then adds the memory size of the received cached data to the memory size occupied by the linked list after the deletion to obtain a second memory; finally, when the second memory is less than or equal to the maximum memory, the second adding module 180 adds the linked-list node to the linked list. Thus, when the first memory is greater than the maximum memory, the node with the largest time interval is deleted first so that the second memory does not exceed the maximum memory, and the linked-list node is then added. This further prevents the cached data in the linked list from exceeding the linked list's maximum memory while new cached data is being stored, and improves the storage speed and efficiency of the cached data.
基于第一实施例提出本发明基于链表的应用缓存管理装置的第三实施例,参照图7,在本实施例中,该基于链表的应用缓存管理装置还包括:The third embodiment of the linked list based application cache management apparatus of the present invention is proposed based on the first embodiment. Referring to FIG. 7, in the embodiment, the linked list based application cache management apparatus further includes:
第三获取模块190,用于在接收到缓存数据的访问请求时,获取所述访问请求对应的缓存数据的标识信息;The third obtaining module 190 is configured to acquire, after receiving the access request of the cached data, the identifier information of the cached data corresponding to the access request;
其中,在采用iOS平台开发移动终端各种应用的过程中,需要调用链表内的缓存数据时,生成缓存数据的访问请求,该访问请求携带有缓存数据的标识信息,在接收到缓存数据的访问请求时,第三获取模块190解析所述访问请求以获取所述访问请求对应的缓存数据的标识信息。In the process of developing various applications of the mobile terminal by using the iOS platform, when the cached data in the linked list needs to be called, an access request for the cached data is generated, and the access request carries the identification information of the cached data, and the access to the cached data is received. When requested, the third obtaining module 190 parses the access request to obtain the identification information of the cached data corresponding to the access request.
遍历模块200,用于基于所述标识信息遍历所述链表以访问所述访问请求对应的缓存数据。The traversing module 200 is configured to traverse the linked list based on the identification information to access cache data corresponding to the access request.
在获得缓存数据的标识信息之后,基于所述标识信息遍历所述链表,以访问所述访问请求对应的缓存数据。现有采用NSCache的iOS平台在缓存数据访问时,需要将标识信息与缓存数据的key进行一一匹配,在各个缓存数据key相似时,即具有大量相似的key时,系统会消耗大量的时间在key的匹配对比上,导致缓存数据的读取性能较低即缓存数据的读取速度及效率较低。After obtaining the identification information of the cached data, the linked list is traversed based on the identifier information to access the cached data corresponding to the access request. When the iOS platform adopting NSCache caches data access, it needs to match the identification information with the key of the cached data. When each cached data key is similar, that is, when there are a large number of similar keys, the system consumes a large amount of time. The matching of the key results in a lower read performance of the cached data, that is, the read speed and efficiency of the cached data are lower.
In other embodiments, the linked-list-based application cache management apparatus further includes: a setting module, configured to set the node corresponding to the traversed cached data as the head node of the linked list.
By having the setting module set the node corresponding to the traversed cached data as the head node, when new cached data is subsequently received and the first memory is greater than the maximum memory, the tail node of the linked list is deleted; the memory size of the received cached data is added to the memory size currently occupied by the linked list after the node deletion to obtain a second memory; and when the second memory is less than or equal to the maximum memory, the linked-list node is added to the linked list. This shortens the procedure for storing cached data when the first memory exceeds the maximum memory, improving the storage speed and efficiency of the cached data.
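The move-to-head and tail-eviction behavior described above can be sketched as follows, again as a hypothetical Python illustration (names such as `move_to_head` and `evict_tail` are invented, not taken from the patent):

```python
class Node:
    def __init__(self, ident, payload):
        self.ident = ident
        self.payload = payload
        self.prev = None
        self.next = None

class LinkedCache:
    def __init__(self):
        self.head = None
        self.tail = None

    def _unlink(self, node):
        # Detach a node, fixing head/tail pointers as needed.
        if node.prev is not None:
            node.prev.next = node.next
        else:
            self.head = node.next
        if node.next is not None:
            node.next.prev = node.prev
        else:
            self.tail = node.prev
        node.prev = node.next = None

    def move_to_head(self, node):
        """After an access, promote the node so that the tail always holds
        the least recently used entry, the one evicted on memory pressure."""
        if node is self.head:
            return
        self._unlink(node)
        node.next = self.head
        if self.head is not None:
            self.head.prev = node
        self.head = node
        if self.tail is None:
            self.tail = node

    def evict_tail(self):
        """Delete the tail node (the least recently used entry)."""
        if self.tail is not None:
            self._unlink(self.tail)
```

Because accessed nodes are promoted to the head, evicting on memory pressure reduces to removing the tail, with no search over the list.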
In this embodiment, upon receiving an access request for cached data, the third acquiring module 190 acquires the identification information of the cached data corresponding to the access request; the traversing module 200 then traverses the linked list based on that identification information to access the corresponding cached data. Accessing cached data by traversing the linked list according to its identification information improves the speed and efficiency of cached-data reads.
A fourth embodiment of the linked-list-based application cache management apparatus of the present invention is proposed based on the first embodiment. Referring to FIG. 8, in this embodiment, the data field of the linked-list node contains the time-to-live of the node, and the apparatus further includes:
a determining module 210, configured to determine, based on the creation time of the linked-list node, whether the node has currently reached its time-to-live.
In this embodiment, when cached data is stored, the data field of the created linked-list node contains the node's time-to-live; timing starts once the node is added to the linked list, so as to determine whether the node has currently reached its time-to-live.
a second deleting module 220, configured to delete the linked-list node when the node has currently reached its time-to-live.
In this embodiment, the determining module 210 determines, based on the creation time of the linked-list node, whether the node has currently reached its time-to-live; when it has, the second deleting module 220 deletes the node. Deleting linked-list nodes according to their time-to-live implements timed storage of cached data and further improves cache access efficiency.
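A minimal sketch of the time-to-live check, assuming an invented `is_expired` helper and Python's `time.monotonic` as the clock (the patent does not prescribe a clock source):

```python
import time

class Node:
    """The node's data field carries a time-to-live alongside the payload."""
    def __init__(self, ident, payload, ttl_seconds):
        self.ident = ident
        self.payload = payload
        self.ttl = ttl_seconds
        self.created_at = time.monotonic()  # creation time, recorded on insertion

def is_expired(node, now=None):
    """Determine, from the node's creation time, whether it has
    reached its time-to-live and should be deleted."""
    if now is None:
        now = time.monotonic()
    return (now - node.created_at) >= node.ttl
```

A periodic sweep (or a check on each access) would delete every node for which `is_expired` returns true.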
It should be noted that, as used herein, the terms "comprise", "include", or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or device comprising a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. In the absence of further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or device that comprises that element.
The serial numbers of the above embodiments of the present invention are for description only and do not represent the relative merits of the embodiments.
Through the description of the above embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, or by hardware, although in many cases the former is the better implementation. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (such as ROM/RAM, a magnetic disk, or an optical disc) and comprising a number of instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the methods described in the various embodiments of the present invention.
The above are only preferred embodiments of the present invention and do not thereby limit the patent scope of the invention. Any equivalent structural or process transformation made using the contents of the specification and drawings of the present invention, applied directly or indirectly in other related technical fields, is likewise included within the patent protection scope of the present invention.

Claims (16)

  1. A linked-list-based application cache management method, characterized in that the method comprises the following steps:
    upon receiving cached data of an application, creating a linked-list node based on the received cached data, and acquiring the memory size of the received cached data;
    acquiring the maximum memory of the linked list and the memory size currently occupied by the linked list;
    adding the memory size of the received cached data to the memory size currently occupied by the linked list to obtain a first memory; and
    when the first memory is less than or equal to the maximum memory, adding the linked-list node to the linked list.
  2. The linked-list-based application cache management method according to claim 1, characterized in that, after the step of adding the memory size of the received cached data to the memory size currently occupied by the linked list to obtain a first memory, the method comprises:
    when the first memory is greater than the maximum memory, acquiring the time interval between the last access time of each node in the linked list and the current time;
    deleting the node with the largest time interval in the linked list;
    adding the memory size of the received cached data to the memory size currently occupied by the linked list after the node deletion to obtain a second memory; and
    when the second memory is less than or equal to the maximum memory, adding the linked-list node to the linked list.
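Claims 1 and 2 together describe a memory-bounded insertion: compute the first memory, add the node if it fits, otherwise delete the least recently accessed node and retry with the second memory. A minimal Python sketch under invented names (`LinkedCache`, `add`), using a plain list in place of the linked list for brevity; the memory-accounting steps are the same:

```python
import time

class Node:
    def __init__(self, ident, payload, size):
        self.ident = ident
        self.payload = payload
        self.size = size
        self.last_access = time.monotonic()  # last access time of this node

class LinkedCache:
    def __init__(self, max_memory):
        self.max_memory = max_memory  # maximum memory of the list
        self.used = 0                 # memory currently occupied
        self.nodes = []

    def add(self, node):
        # First memory: received size plus currently occupied size.
        first_memory = self.used + node.size
        if first_memory <= self.max_memory:
            self.nodes.append(node)
            self.used = first_memory
            return True
        # First memory exceeds the maximum: delete the node whose last
        # access lies furthest in the past, then retry with the second memory.
        lru = min(self.nodes, key=lambda n: n.last_access, default=None)
        if lru is not None:
            self.nodes.remove(lru)
            self.used -= lru.size
        second_memory = self.used + node.size
        if second_memory <= self.max_memory:
            self.nodes.append(node)
            self.used = second_memory
            return True
        return False
```

Note that this sketch evicts at most one node per insertion, mirroring the claimed steps; a practical cache might loop the eviction until the new item fits.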
  3. The linked-list-based application cache management method according to claim 1, characterized in that, after the step of adding the linked-list node to the linked list when the first memory is less than or equal to the maximum memory, the method further comprises:
    upon receiving an access request for cached data, acquiring identification information of the cached data corresponding to the access request; and
    traversing the linked list based on the identification information to access the cached data corresponding to the access request.
  4. The linked-list-based application cache management method according to claim 3, characterized in that, after the step of traversing the linked list based on the identification information to access the cached data corresponding to the access request, the method further comprises:
    setting the node corresponding to the traversed cached data as the head node of the linked list.
  5. The linked-list-based application cache management method according to claim 1, characterized in that the data field of the linked-list node contains the time-to-live of the node, and after the step of adding the linked-list node to the linked list when the first memory is less than or equal to the maximum memory, the method further comprises:
    determining, based on the creation time of the linked-list node, whether the node has currently reached its time-to-live; and
    deleting the linked-list node when the node has currently reached its time-to-live.
  6. The linked-list-based application cache management method according to claim 2, characterized in that the data field of the linked-list node contains the time-to-live of the node, and after the step of adding the linked-list node to the linked list when the first memory is less than or equal to the maximum memory, the method further comprises:
    determining, based on the creation time of the linked-list node, whether the node has currently reached its time-to-live; and
    deleting the linked-list node when the node has currently reached its time-to-live.
  7. The linked-list-based application cache management method according to claim 3, characterized in that the data field of the linked-list node contains the time-to-live of the node, and after the step of adding the linked-list node to the linked list when the first memory is less than or equal to the maximum memory, the method further comprises:
    determining, based on the creation time of the linked-list node, whether the node has currently reached its time-to-live; and
    deleting the linked-list node when the node has currently reached its time-to-live.
  8. The linked-list-based application cache management method according to claim 4, characterized in that the data field of the linked-list node contains the time-to-live of the node, and after the step of adding the linked-list node to the linked list when the first memory is less than or equal to the maximum memory, the method further comprises:
    determining, based on the creation time of the linked-list node, whether the node has currently reached its time-to-live; and
    deleting the linked-list node when the node has currently reached its time-to-live.
  9. A linked-list-based application cache management apparatus, characterized in that the apparatus comprises:
    a creating module, configured to, upon receiving cached data of an application, create a linked-list node based on the received cached data and acquire the memory size of the received cached data;
    a first acquiring module, configured to acquire the maximum memory of the linked list and the memory size currently occupied by the linked list;
    a first calculating module, configured to add the memory size of the received cached data to the memory size currently occupied by the linked list to obtain a first memory; and
    a first adding module, configured to add the linked-list node to the linked list when the first memory is less than or equal to the maximum memory.
  10. The linked-list-based application cache management apparatus according to claim 9, characterized in that the apparatus further comprises:
    a second acquiring module, configured to acquire, when the first memory is greater than the maximum memory, the time interval between the last access time of each node in the linked list and the current time;
    a first deleting module, configured to delete the node with the largest time interval in the linked list;
    a second calculating module, configured to add the memory size of the received cached data to the memory size currently occupied by the linked list after the node deletion to obtain a second memory; and
    a second adding module, configured to add the linked-list node to the linked list when the second memory is less than or equal to the maximum memory.
  11. The linked-list-based application cache management apparatus according to claim 9, characterized in that the apparatus further comprises:
    a third acquiring module, configured to acquire, upon receiving an access request for cached data, identification information of the cached data corresponding to the access request; and
    a traversing module, configured to traverse the linked list based on the identification information to access the cached data corresponding to the access request.
  12. The linked-list-based application cache management apparatus according to claim 11, characterized in that the apparatus further comprises:
    a setting module, configured to set the node corresponding to the traversed cached data as the head node of the linked list.
  13. The linked-list-based application cache management apparatus according to claim 9, characterized in that the data field of the linked-list node contains the time-to-live of the node, and the apparatus further comprises:
    a determining module, configured to determine, based on the creation time of the linked-list node, whether the node has currently reached its time-to-live; and
    a second deleting module, configured to delete the linked-list node when the node has currently reached its time-to-live.
  14. The linked-list-based application cache management apparatus according to claim 10, characterized in that the data field of the linked-list node contains the time-to-live of the node, and the apparatus further comprises:
    a determining module, configured to determine, based on the creation time of the linked-list node, whether the node has currently reached its time-to-live; and
    a second deleting module, configured to delete the linked-list node when the node has currently reached its time-to-live.
  15. The linked-list-based application cache management apparatus according to claim 11, characterized in that the data field of the linked-list node contains the time-to-live of the node, and the apparatus further comprises:
    a determining module, configured to determine, based on the creation time of the linked-list node, whether the node has currently reached its time-to-live; and
    a second deleting module, configured to delete the linked-list node when the node has currently reached its time-to-live.
  16. The linked-list-based application cache management apparatus according to claim 12, characterized in that the data field of the linked-list node contains the time-to-live of the node, and the apparatus further comprises:
    a determining module, configured to determine, based on the creation time of the linked-list node, whether the node has currently reached its time-to-live; and
    a second deleting module, configured to delete the linked-list node when the node has currently reached its time-to-live.
PCT/CN2016/076296 2016-03-14 2016-03-14 Linked list-based application cache management method and device WO2017156683A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
PCT/CN2016/076296 WO2017156683A1 (en) 2016-03-14 2016-03-14 Linked list-based application cache management method and device
AU2016277745A AU2016277745B2 (en) 2016-03-14 2016-03-14 Linked-list-based method and device for application caching management
US15/414,628 US10241927B2 (en) 2016-03-14 2017-01-25 Linked-list-based method and device for application caching management

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2016/076296 WO2017156683A1 (en) 2016-03-14 2016-03-14 Linked list-based application cache management method and device

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US15/414,628 Continuation US10241927B2 (en) 2016-03-14 2017-01-25 Linked-list-based method and device for application caching management

Publications (1)

Publication Number Publication Date
WO2017156683A1 2017-09-21

Family

ID=59851471

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/076296 WO2017156683A1 (en) 2016-03-14 2016-03-14 Linked list-based application cache management method and device

Country Status (2)

Country Link
AU (1) AU2016277745B2 (en)
WO (1) WO2017156683A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109144892A (en) * 2018-08-27 2019-01-04 南京国电南自轨道交通工程有限公司 A kind of buffering linked list data structure design method of managing internal memory medium-high frequency delta data
US10241927B2 (en) * 2016-03-14 2019-03-26 Shenzhen Skyworth-Rgb Electronic Co., Ltd. Linked-list-based method and device for application caching management

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7536529B1 (en) * 2005-06-10 2009-05-19 American Megatrends, Inc. Method, system, apparatus, and computer-readable medium for provisioning space in a data storage system
CN103984639A (en) * 2014-04-29 2014-08-13 宁波三星电气股份有限公司 Dynamic memory distributing method
CN104850507A (en) * 2014-02-18 2015-08-19 腾讯科技(深圳)有限公司 Data caching method and data caching device
CN105786723A (en) * 2016-03-14 2016-07-20 深圳创维-Rgb电子有限公司 Application cache management method and device based on linked list

Also Published As

Publication number Publication date
AU2016277745B2 (en) 2021-02-11
AU2016277745A1 (en) 2017-09-28


Legal Events

Date Code Title Description

ENP  Entry into the national phase
     Ref document number: 2016277745; Country of ref document: AU; Date of ref document: 20160314; Kind code of ref document: A

NENP Non-entry into the national phase
     Ref country code: DE

121  Ep: the epo has been informed by wipo that ep was designated in this application
     Ref document number: 16893848; Country of ref document: EP; Kind code of ref document: A1

122  Ep: pct application non-entry in european phase
     Ref document number: 16893848; Country of ref document: EP; Kind code of ref document: A1