AU2016277745B2 - Linked-list-based method and device for application caching management - Google Patents
Linked-list-based method and device for application caching management
- Publication number
- AU2016277745B2
- Authority
- AU
- Australia
- Prior art keywords
- node
- memory size
- linked list
- cached data
- survival duration
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0866—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches for peripheral storage systems, e.g. disk cache
- G06F12/0871—Allocation or management of cache space
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Memory System Of A Hierarchy Structure (AREA)
Abstract
A linked-list-based method for application caching management is disclosed, the method including: when receiving application cached data, creating a node in a linked list for the cached data and obtaining a memory size of the cached data; obtaining a maximum memory size and a currently occupied memory size of the linked list; adding the memory size of the received cached data to the currently occupied memory size of the linked list to obtain a first memory size; and adding the node to the linked list if the first memory size is smaller than or equal to the maximum memory size. A linked-list-based device for application caching management is also provided. The received cached data is thus added to the linked list only when the first memory size is not greater than the maximum memory size, which prevents the amount of cached data from overrunning the maximum memory size of the linked list during storage of new cached data and so improves the cached data storage speed and efficiency.
Description
[0001] Embodiments of the present disclosure relate generally to storage
technology, and more particularly relate to a linked-list-based method and device for
application caching management.
[0002] With the rapid development of mobile communication technology, mobile terminals now have access to increasingly abundant applications. In iOS application development, a cache is used to store a variety of application data for the purpose of high-speed data access.
[0003] Currently, the iOS platform employs the NSCache memory caching mechanism (NSCache being a class that iOS provides for conveniently caching objects), which imposes no strict limit on the memory size of the cached data. That is, only after new cached data has been stored and the total amount of currently stored cached data exceeds the memory limit will the iOS platform delete the last stored cached data that overflows the limit. With such a storage strategy, the total memory size of the cached data is likely to overrun the system memory limit after new cached data is stored, resulting in low storage speed and efficiency.
[0004] A linked-list-based method and device for application caching management are provided, aiming to solve the prior-art issue that, on an iOS platform employing NSCache, the total amount of cached data may overrun the system memory limit while new cached data is being stored, resulting in low storage speed and efficiency.
[0005] There is provided a linked-list-based method for application caching management, the method including:
[0006] creating a node in a linked list for received application cached data and
obtaining a memory size of the received application cached data;
[0007] obtaining a maximum memory size of the linked list and a currently
occupied memory size of the linked list;
[0008] adding the memory size of the received cached data to the currently
occupied memory size of the linked list to derive a first memory size; and
[0009] adding the node to the linked list if the first memory size is smaller than or
equal to the maximum memory size.
[0010] The method may further include, after adding the memory size of the
received cached data to the currently occupied memory size of the linked list:
[0011] if the first memory size is larger than the maximum memory size, obtaining
a time interval between last access time of each node in the linked list and present
time;
[0012] deleting the node having the largest time interval from the linked list;
[0013] adding the memory size of the received cached data to the currently
occupied memory size of the linked list after the node is deleted to derive a second
memory size; and
[0014] adding the node to the linked list if the second memory size is smaller than
or equal to the maximum memory size.
[0015] The method may further include, after adding the node to the linked list if
the first memory size is smaller than or equal to the maximum memory size:
[0016] when receiving a cached-data access request, obtaining identification
information of the cached data associated with the access request; and
[0017] traversing the linked list based on the identification information to access
the cached data associated with the access request.
[0018] The method may further include, after traversing the linked list based on the
identification information:
[0019] setting the corresponding node of the accessed cached data as the head node
of the linked list.
[0020] The data field of the node may contain a survival duration of the node, and
the method may further include, after adding the node to the linked list if the first
memory size is smaller than or equal to the maximum memory size:
[0021] determining based on the time of creation of the node whether the node
currently reaches the survival duration; and
[0022] deleting the node if the node has reached the survival duration.
[0023] There is provided a linked-list-based device for application caching
management, the device including:
[0024] a creation module configured to create a node in a linked list for received application cached data and obtain a memory size of the received application cached data;
[0025] a first acquisition module configured to obtain a maximum memory size of
the linked list and a currently occupied memory size of the linked list;
[0026] a first computation module configured to add the memory size of the
received cached data to the currently occupied memory size of the linked list to
derive a first memory size; and
[0027] a first addition module configured to add the node to the linked list if the
first memory size is smaller than or equal to the maximum memory size.
[0028] The device may further include:
[0029] a second acquisition module configured to obtain a time interval between
last access time of each node in the linked list and present time, if the first memory
size is larger than the maximum memory size;
[0030] a first deletion module configured to delete the node having the largest time
interval from the linked list;
[0031] a second computation module configured to add the memory size of the
received cached data to the currently occupied memory size of the linked list after
the node is deleted to derive a second memory size; and
[0032] a second addition module configured to add the node to the linked list if the
second memory size is smaller than or equal to the maximum memory size.
[0033] The device may further include:
[0034] a third acquisition module configured to obtain, when receiving a cached
data access request, identification information of the cached data associated with the
access request; and
[0035] a traversal module configured to traverse the linked list based on the
identification information to access the cached data associated with the access
request.
[0036] The device may further include a setting module configured to set the
corresponding node of the accessed cached data as the head node of the linked list.
[0037] The data field of the node may contain a survival duration of the node, and
the device may further include:
[0038] a determination module configured to determine whether the node currently
reaches the survival duration based on the time of creation of the node; and
[0039] a second deletion module configured to delete the node if the node has
reached the survival duration.
[0040] To summarize, according to the present disclosure, upon receipt of application cached data, a node is created in a linked list for the cached data and a memory size of the cached data is obtained. Then the maximum memory size and the currently occupied memory size of the linked list may be obtained, and the memory size of the received cached data may be added to the currently occupied memory size of the linked list to obtain a first memory size. Finally, if the first memory size is smaller than or equal to the maximum memory size, the node is added to the linked list. The received cached data therefore can be added to the linked list only when the first memory size is not greater than the maximum memory size, which prevents the cached data from overrunning the maximum memory size of the linked list during storage of new cached data and so improves the cached data storage speed and efficiency.
[0041] FIG. 1 depicts a flowchart illustrating a first embodiment of a
linked-list-based method for application caching management according to the disclosure.
[0042] FIG. 2 depicts a flowchart illustrating a second embodiment of the
linked-list-based method for application caching management according to the
disclosure.
[0043] FIG. 3 depicts a flowchart illustrating a third embodiment of the
linked-list-based method for application caching management according to the
disclosure.
[0044] FIG. 4 depicts a flowchart illustrating a fourth embodiment of the
linked-list-based method for application caching management according to the
disclosure.
[0045] FIG. 5 is a block diagram illustrating a first embodiment of a
linked-list-based device for application caching management according to the
disclosure.
[0046] FIG. 6 is a block diagram illustrating a second embodiment of a
linked-list-based device for application caching management according to the
disclosure.
[0047] FIG. 7 is a block diagram illustrating a third embodiment of a
linked-list-based device for application caching management according to the
disclosure.
[0048] FIG. 8 is a block diagram illustrating a fourth embodiment of a
linked-list-based device for application caching management according to the
disclosure.
[0049] The objects, features and advantages of the present disclosure will be
obvious from the description rendered in further detail with reference to the
accompanying drawings.
[0050] It is to be appreciated that the specific embodiments described herein are for illustration purposes only and are not intended to limit the scope of the present disclosure.
[0051] The present invention provides a linked-list-based method for application
caching management. Referring to FIG. 1, a flowchart is depicted illustrating a
first embodiment of a linked-list-based method for application caching management
according to the disclosure. The method according to this embodiment may
include the following blocks.
[0052] In S110, when cached data is received from an application, a node may be
created in a linked list accordingly, and a memory size of the received cached data
may be obtained.
[0053] The linked list can be a doubly linked list or a singly linked list. The linked list node can be created by storing the cached data to the data field of the newly created node, where the immediate successor of the newly created node is the head node of the current linked list. At the same time, the memory size of the received cached data can be obtained, and can be, e.g., 50 MB. The method according to the present embodiment can be applied, for example, to an iOS platform such as one that is used to develop a variety of applications for mobile terminals. The method may then proceed to block S120.
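By way of illustration only, such a node could be laid out as follows in Swift; the type and member names used here (CacheNode, survivalDuration, and so on) are assumptions for this sketch rather than anything prescribed by the disclosure.

```swift
import Foundation

// Hypothetical layout of a linked-list node as described in S110: the data
// field holds the cached data, its memory size, the relevant timestamps and
// an optional survival duration, while prev/next link the node into the list.
final class CacheNode {
    let key: String                      // identification information of the cached data
    var value: Data                      // the cached data itself
    let size: Int                        // memory size of the cached data, in bytes
    let createdAt: Date                  // time of creation, used for the survival-duration check
    var lastAccessedAt: Date             // last access time, used for eviction
    let survivalDuration: TimeInterval?  // optional survival duration of the node

    var prev: CacheNode?
    var next: CacheNode?

    init(key: String, value: Data, size: Int, survivalDuration: TimeInterval? = nil) {
        self.key = key
        self.value = value
        self.size = size
        self.createdAt = Date()
        self.lastAccessedAt = Date()
        self.survivalDuration = survivalDuration
    }
}
```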
[0054] In S120, a maximum memory size of the linked list and a currently
occupied memory size of the linked list may be obtained.
[0055] The maximum memory size of the linked list is the maximum capacity of
the linked list set at the time of creation, while the currently occupied memory size
of the linked list refers to the memory size of the cached data currently stored
altogether in the linked list. The method may then continue to block S130.
[0056] In S130, the memory size of the received cached data may be added to the
currently occupied memory size of the linked list to derive a first memory size.
The method then may proceed to block S140.
[0057] In S140, the linked list node may be added to the linked list if the first
memory size is smaller than or equal to the maximum memory size.
[0058] If the first memory size is smaller than or equal to the maximum memory
size, it may indicate the currently stored cached data has not yet exceeded the
maximum capacity of the linked list, so that the node can be added to the linked list, or more specifically, the linked list node can be added to the linked list as the head node of the linked list.
[0059] According to the present embodiment, when receiving application cached data, a node may be created in a linked list for the cached data and a memory size of the cached data may be obtained. Then the maximum memory size and the currently occupied memory size of the linked list may be obtained, and the memory size of the received cached data may be added to the currently occupied memory size of the linked list to obtain a first memory size. Finally, if the first memory size is smaller than or equal to the maximum memory size, the node is added to the linked list. The received cached data therefore can be added to the linked list only when the first memory size is not greater than the maximum memory size, which prevents the amount of cached data from overrunning the maximum memory size of the linked list during storage of new cached data and so improves the cached data storage speed and efficiency.
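For illustration, blocks S110 to S140 might be sketched in Swift roughly as below, building on the CacheNode sketch given earlier; LinkedListCache, maxMemorySize and add(key:value:) are hypothetical names introduced for the sketch, not terms of the disclosure.

```swift
import Foundation

// Minimal sketch of the first embodiment (S110–S140), assuming the CacheNode
// class sketched above. The first memory size is the sum of the currently
// occupied memory size and the size of the newly received cached data.
final class LinkedListCache {
    let maxMemorySize: Int          // maximum memory size set when the list is created
    var currentMemorySize = 0       // currently occupied memory size of the linked list
    var head: CacheNode?
    var tail: CacheNode?

    init(maxMemorySize: Int) {
        self.maxMemorySize = maxMemorySize
    }

    // S110–S140: create a node for the received cached data and add it as the
    // new head node if the first memory size does not exceed the maximum.
    @discardableResult
    func add(key: String, value: Data, survivalDuration: TimeInterval? = nil) -> Bool {
        let node = CacheNode(key: key, value: value, size: value.count,
                             survivalDuration: survivalDuration)      // S110
        let firstMemorySize = currentMemorySize + node.size            // S130
        guard firstMemorySize <= maxMemorySize else { return false }   // S140 condition not met
        insertAtHead(node)
        currentMemorySize = firstMemorySize
        return true
    }

    // The immediate successor of the newly created node is the current head node.
    private func insertAtHead(_ node: CacheNode) {
        node.next = head
        head?.prev = node
        head = node
        if tail == nil { tail = node }
    }
}
```

Under these assumptions, each item of received application cached data corresponds to one call such as cache.add(key:value:), with the value's byte count standing in for the obtained memory size.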
[0060] Referring now to FIG. 2, a flowchart is depicted illustrating a second
embodiment of the linked-list-based method for application caching management.
The second embodiment will be described below on the basis of the first
embodiment above and may further include the following blocks after S130 of the
first embodiment.
[0061] In S150, if the first memory size is larger than the maximum memory size, a
time interval between last access time of each node in the linked list and the present
time may be obtained.
[0062] The last access time of each node refers to the time when the node is being
accessed the last time. The method then may proceed to block S160.
[0063] In S160, the node having the largest time interval may be deleted from the
linked list.
[0064] The node having the largest time interval, i.e., the node with the longest period of time between the time it was last accessed and the present time, or in other words the node that has not been accessed for the longest duration, would be deleted from the linked list. The method may then continue to blocks S170 and S180.
[0065] In S170, the memory size of the received cached data may be added to the
currently occupied memory size of the linked list after the node is deleted to derive a
second memory size.
[0066] In S180, the node may be added to the linked list if the second memory size
is smaller than or equal to the maximum memory size.
[0067] After the node with the largest time interval is deleted from the linked list, if the second memory size is smaller than or equal to the maximum memory size, i.e., the sum of the currently occupied memory size of the linked list and the memory size of the received cached data does not exceed the maximum memory size of the linked list, then the node may be added to the linked list, and specifically, the node may be added to the linked list as the head node of the linked list.
[0068] According to the present embodiment, if the first memory size is larger than
the maximum memory size, the time interval between the last access time of each
node in the linked list and the present time may be obtained, and the node with the
largest time interval may be deleted from the linked list. Then the memory size of
the received cached data may be added to the currently occupied memory size of the
linked list after the node is deleted to derive a second memory size. So if the
second memory size is smaller than or equal to the maximum memory size, the node
would be added to the linked list. Therefore, if the first memory size is larger than the maximum memory size, the node having the largest time interval, in other words the node that has not been accessed for the longest period of time, is deleted from the linked list so that the second memory size becomes smaller than or equal to the maximum memory size and the node can be added to the linked list. This prevents the cached data from overrunning the maximum memory size of the linked list during storage of new cached data, thus improving the cached data storage speed and efficiency.
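Continuing the sketch above, the eviction path of blocks S150 to S180 might look roughly as follows; the scan simply locates the node whose last access time lies furthest from the present before retrying the insertion, and the helper names are again illustrative.

```swift
import Foundation

// Sketch of the second embodiment (S150–S180), extending the LinkedListCache
// sketched earlier. If the first memory size exceeds the maximum, the node
// with the largest interval since its last access is deleted and the insertion
// is retried against the second memory size.
extension LinkedListCache {
    func addEvictingIfNeeded(key: String, value: Data) {
        if add(key: key, value: value) { return }       // S140: the new data already fits

        // S150–S160: find and delete the node that has not been accessed for
        // the longest duration (smallest lastAccessedAt, i.e. largest interval).
        var current = head
        var leastRecentlyUsed = head
        while let node = current {
            if let lru = leastRecentlyUsed, node.lastAccessedAt < lru.lastAccessedAt {
                leastRecentlyUsed = node
            }
            current = node.next
        }
        if let victim = leastRecentlyUsed { remove(victim) }

        // S170–S180: the second memory size is recomputed inside add(key:value:)
        // against the reduced occupied size; the node is added if it now fits.
        _ = add(key: key, value: value)
    }

    // Unlink a node and release its share of the occupied memory size.
    func remove(_ node: CacheNode) {
        node.prev?.next = node.next
        node.next?.prev = node.prev
        if node === head { head = node.next }
        if node === tail { tail = node.prev }
        node.prev = nil
        node.next = nil
        currentMemorySize -= node.size
    }
}
```

When accessed nodes are promoted to the head, as in the third embodiment below, the least recently accessed node is simply the tail, so the scan above can be replaced by deleting the tail node directly.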
[0069] Referring now to FIG. 3, a flowchart is depicted illustrating a third
embodiment of the linked-list-based method for application caching management.
The third embodiment will be described below on the basis of the first embodiment
above and may further include the following blocks after S140 of the first embodiment.
[0070] In S190, when a cached-data access request is received, the identification
information of the cached data associated with the access request may be obtained.
[0071] In the process of developing various applications for mobile terminals on
the iOS platform, an access request to the cached data can be generated when the
cached data in the linked list needs to be called, where the access request may carry
the identification information of the requested cached data, so when receiving the
access request, the access request can be parsed to derive the identification
information of the cached data corresponding to the access request. The method
may then proceed to block S200.
[0072] In S200, the linked list may be traversed based on the identification
information to access the cached data corresponding to the access request.
[0073] After obtaining the identification information of the cached data, the linked
list may be traversed based on the identification information to access the cached
data corresponding to the access request. However, the current iOS platform employing NSCache needs to compare the identification information with the cached data keys one by one for a match, so if a large number of the cached data keys are similar to each other, the system may spend a relatively large amount of time on comparison and matching, leading to low cached data read performance, i.e., low cached data read speed and efficiency.
[0074] In other embodiments, the method may further include, after block S200:
setting the corresponding node of the accessed cached data as the head node of the
linked list.
[0075] Therefore, the corresponding node of the retrieved cached data may be set as the head node of the linked list, so that if the first memory size is greater than the maximum memory size when new cached data is subsequently received, the tail node, being the least recently accessed node, would be deleted. Then the memory size of the received cached data may be added to the currently occupied memory size of the linked list to derive the second memory size. If, finally, the second memory size is smaller than or equal to the maximum memory size, the node would be added to the linked list. Hence the cached data storage process can be shortened when the first memory size is greater than the maximum memory size, resulting in enhanced cached data storage speed and efficiency.
[0076] According to the present embodiment, when a cached-data access request is
received, the identification information of the cached data corresponding to the
access request may be obtained. The linked list can then be traversed based on the
identification information to retrieve the cached data corresponding to the access
request. Hence the traversal of the linked list based on the identification
information of the cached data can allow the retrieval of the cached data, improving
the cached data read speed and efficiency.
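As a sketch of blocks S190 and S200 together with the head-promotion step of paragraph [0074], the lookup might be written as follows on top of the earlier LinkedListCache sketch; value(forKey:) and moveToHead(_:) are illustrative names.

```swift
import Foundation

// Sketch of the third embodiment (S190–S200): traverse the list by the
// identification information (the key) and promote the matching node to the
// head, so the tail always holds the least recently accessed cached data.
extension LinkedListCache {
    func value(forKey key: String) -> Data? {
        var current = head
        while let node = current {
            if node.key == key {               // match on the identification information
                node.lastAccessedAt = Date()   // record the access time
                moveToHead(node)               // set the node as the head node
                return node.value
            }
            current = node.next
        }
        return nil                              // no cached data matches the request
    }

    private func moveToHead(_ node: CacheNode) {
        guard node !== head else { return }
        // Unlink without changing the occupied memory size.
        node.prev?.next = node.next
        node.next?.prev = node.prev
        if node === tail { tail = node.prev }
        // Re-insert at the head of the list.
        node.prev = nil
        node.next = head
        head?.prev = node
        head = node
    }
}
```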
[0077] Referring now to FIG. 4, a flowchart is depicted illustrating a fourth
embodiment of the linked-list-based method for application caching management,
which will be described below on the basis of the first embodiment above and which
may further include the following blocks after S140 of the first embodiment.
[0078] In S210, the method may include determining whether the linked list node
currently reaches a survival duration based on the time of creation of the node.
[0079] When storing the cached data, the data field of the node created may contain
the survival duration of the node, and a timer may be started right after the linked list
node is added to the linked list in order to determine subsequently whether the node
reaches the survival duration. The method may then proceed to block S220.
[0080] In S220, if the node has reached the survival duration, then the node may be
deleted from the linked list.
[0081] According to the present embodiment, the time of creation of the linked list
node may be taken as a reference time point to determine whether the node has
reached the survival duration, and the node would be deleted when it has reached the
survival duration. Hence the nodes in the linked list can be deleted on a survival-duration basis, and therefore the cached data is stored for only a certain period of time, which further improves the cached data access efficiency.
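A corresponding sketch of the survival-duration check in blocks S210 and S220 is given below, again on top of the earlier sketches; the helper names are assumptions, and a timer such as the one mentioned in paragraph [0079] could invoke the purge periodically.

```swift
import Foundation

// Sketch of the fourth embodiment (S210–S220): a node is treated as expired
// once the interval since its time of creation reaches the survival duration
// stored in its data field, and expired nodes are deleted from the list.
extension LinkedListCache {
    func isExpired(_ node: CacheNode, now: Date = Date()) -> Bool {
        guard let ttl = node.survivalDuration else { return false }   // no survival duration set
        return now.timeIntervalSince(node.createdAt) >= ttl
    }

    func purgeExpiredNodes(now: Date = Date()) {
        var current = head
        while let node = current {
            let next = node.next          // capture before a possible removal
            if isExpired(node, now: now) {
                remove(node)              // remove(_:) as sketched for the second embodiment
            }
            current = next
        }
    }
}
```

In practice, a repeating Foundation timer could drive purgeExpiredNodes(), mirroring the timer started when the node is added to the linked list.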
[0082] The present invention further provides a linked-list-based device for application caching management. Referring now to FIG. 5, a block diagram is depicted illustrating a first embodiment of a linked-list-based device for application caching management according to the disclosure. The device according to this embodiment may include a creation module 110, a first acquisition module 120, a first computation module 130, and a first addition module 140.
[0083] Creation module 110 may be configured to create a node in a linked list for
received application cached data and obtain a memory size of the received
application cached data.
[0084] The linked list can be a doubly linked list or a singly linked list. The
linked list node can be created by storing the cached data to the data field of the
newly created node, where the immediate successor of the newly created node is the
head node of the current linked list. At the same time, the memory size of the
received cached data can be obtained, and can be, e.g., 50 MB. The device
according to the present embodiment can be applied, for example, to an iOS platform
such as one that is used to develop various applications for mobile terminals.
[0085] The first acquisition module 120 may be configured to obtain a maximum
memory size of the linked list and a currently occupied memory size of the linked
list.
[0086] The maximum memory size of the linked list is the maximum capacity of
the linked list set at the time of creation, while the currently occupied memory size
of the linked list refers to the memory size of the cached data currently stored
altogether in the linked list.
[0087] The first computation module 130 may be configured to add the memory
size of the received cached data to the currently occupied memory size of the linked
list to derive a first memory size.
[0088] The first addition module 140 may be configured to add the node to the
linked list if the first memory size is smaller than or equal to the maximum memory
size.
[0089] If the first memory size is smaller than or equal to the maximum memory
size, it may indicate the currently stored cached data has not yet exceeded the maximum capacity of the linked list, so that the first addition module 140 may add the node to the linked list, or more specifically, set the linked list node as the head node of the linked list.
[0090] According to the present embodiment, when receiving application cached
data the creation module 110 may create a node in a linked list for the cached data
and obtain a memory size of the cached data. Then the first acquisition module 120
may obtain the maximum memory size and the currently occupied memory size of
the linked list, and the first computation module 130 may add the memory size of the
received cached data to the currently occupied memory size of the linked list to
obtain the first memory size. Finally, if the first memory size is smaller than or equal to the maximum memory size, the first addition module 140 may add the node to the linked list. The received cached data therefore can be added to the linked list only when the first memory size is not greater than the maximum memory size, which prevents the amount of cached data from overrunning the maximum memory size of the linked list during storage of new cached data and so improves the cached data storage speed and efficiency.
[0091] Referring now to FIG. 6, a block diagram is depicted illustrating a second
embodiment of the linked-list-based device for application caching management
according to the disclosure. The second embodiment device will be described
below on the basis of the first embodiment device illustrated above and may further
include a second acquisition module 150, a first deletion module 160, a second
computation module 170, and a second addition module 180.
[0092] The second acquisition module 150 may be configured to obtain a time
interval between last access time of each node in the linked list and present time, if
the first memory size is larger than the maximum memory size.
[0093] The last access time of each node in the linked list refers to the time when
the node is being accessed the last time.
[0094] The first deletion module 160 may be configured to delete the node having
the largest time interval from the linked list.
[0095] The node having the largest time interval, i.e., the node with the longest period of time between the time it was last accessed and the present time, or in other words the node that has not been accessed for the longest duration, would be deleted from the linked list.
[0096] The second computation module 170 may be configured to add the memory
size of the received cached data to the currently occupied memory size of the linked
list after the node is deleted to derive a second memory size.
[0097] The second addition module 180 may be configured to add the node to the
linked list if the second memory size is smaller than or equal to the maximum
memory size.
[0098] After the node with the largest time interval is deleted from the linked list, if the second memory size is smaller than or equal to the maximum memory size, i.e., the sum of the currently occupied memory size of the linked list and the memory size of the received cached data does not exceed the maximum memory size of the linked list, then the second addition module 180 may add the node to the linked list, and specifically, add the node to the linked list and set it as the head node of the linked list.
[0099] According to the present embodiment, if the first memory size is larger than
the maximum memory size, the second acquisition module 150 may obtain the time
interval between the last access time of each node in the linked list and the present
time, and the first deletion module 160 may delete the node with the largest time
interval from the linked list. Then the second computation module 170 may add
the memory size of the received cached data to the currently occupied memory size
of the linked list after the node is deleted to derive the second memory size. So if
the second memory size is smaller than or equal to the maximum memory size, the
second addition module 180 may add the linked list node to the linked list.
Therefore, if the first memory size is larger than the maximum memory size, the node having the largest time interval, in other words the node that has not been accessed for the longest period of time, is deleted from the linked list so that the second memory size becomes smaller than or equal to the maximum memory size and the linked list node can be added to the linked list. This prevents the cached data from overrunning the maximum memory size of the linked list during storage of new cached data, thus improving the cached data storage speed and efficiency.
[0100] Referring now to FIG. 7, a block diagram is depicted illustrating a third
embodiment of the linked-list-based device for application caching management.
The third embodiment device will be described below on the basis of the first
embodiment device above and may further include a third acquisition module 190,
and a traversal module 200.
[0101] The third acquisition module 190 may be configured to obtain, when
receiving a cached-data access request, identification information of the cached data
corresponding to the access request.
[0102] In the process of developing various applications for a mobile terminal on
the iOS platform, an access request to the cached data can be generated when the
cached data in the linked list needs to be called, where the access request may carry
the identification information of the requested cached data, so when the cached-data
access request is received, the third acquisition module 190 may parse the access
request to derive the identification information of the cached data corresponding to
the access request.
[0103] The traversal module 200 may be configured to traverse the linked list
based on the identification information to retrieve the cached data corresponding to
the access request.
[0104] After obtaining the identification information of the cached data, the
linked list may be traversed based on the identification information to access the
cached data associated with the access request. However, the current iOS platform employing NSCache needs to compare the identification information with the cached data keys on a one-by-one basis for a match, so if a large number of the cached data keys are similar to each other, the system may spend a relatively large amount of time on comparison and matching, leading to low cached data read performance, i.e., low cached data read speed and efficiency.
[0105] In other embodiments, the device may further include a setting module
configured to set the corresponding node of the accessed cached data as the head node of the linked list.
[0106] Therefore, the setting module can set the corresponding node of the retrieved cached data as the head node of the linked list, so that if the first memory size is greater than the maximum memory size when new cached data is subsequently received, the tail node, being the least recently accessed node, would be deleted. Then the memory size of the received cached data may be added to the currently occupied memory size of the linked list to derive the second memory size. If, finally, the second memory size is smaller than or equal to the maximum memory size, the linked list node will be added to the linked list. Hence the cached data storage process can be shortened when the first memory size is greater than the maximum memory size, resulting in enhanced cached data storage speed and efficiency.
[0107] According to the present embodiment, when a cached-data access request
is received, the third acquisition module 190 may obtain the identification
information of the cached data corresponding to the access request. Then the
traversal module 200 may traverse the linked list based on the identification
information to retrieve the cached data associated with the access request. Hence
the traversal of the linked list based on the identification information of the cached
data can allow the retrieval of the cached data, thus improving the cached data read
speed and efficiency.
[0108] Referring now to FIG. 8, a block diagram is depicted illustrating a fourth
embodiment of the linked-list-based device for application caching management,
which will be described below on the basis of the first embodiment device
previously illustrated and which may further include a determination module 210
and a second deletion module 220.
[0109] The determination module 210 may be configured to determine whether
the linked list node currently reaches the survival duration based on the time of
creation of the node.
[0110] When storing the cached data, the data field of the linked list node
created may contain the survival duration of the node, and a timer may be started
right after the linked list node is added to the linked list for subsequent determination as to whether the node reaches the survival duration.
[0111] The second deletion module 220 may be configured to delete the node if
the node has reached its survival duration.
[0112] According to the present embodiment, the determination module 210 may use the time of creation of the linked list node to determine whether the node currently has reached the survival duration, and the second deletion module 220 may delete the node when it has reached its survival duration. Hence the nodes in the
linked list can be deleted on a survival-duration basis, and therefore the cached data
can be stored for only a certain period of time which further improves the cached
data access efficiency.
[0113] It is to be noted that the term "including", "comprising", or any other
variation thereof is intended to encompass a non-exclusive inclusion herein so that a
process, method, article, or device including/comprising a set of elements includes
not only the stated elements, but also other elements not expressly listed, or elements
inherent to such processes, methods, articles, or devices. In the absence of further
limitations, the element defined by the phrase "including/comprising one..." does not
preclude the presence of additional identical elements in the process, method, article,
or device that includes the element.
[0114] The embodiments of the present disclosure have been described for purposes of illustration only and are not intended to limit the scope of the disclosure.
[0115] It will be apparent to those skilled in the art from the foregoing description that the above-described embodiments may be implemented by means of software plus a necessary general-purpose hardware platform; although they could also be implemented by hardware alone, the former would be advantageous in many cases. Based on such an understanding, the technical solution of the disclosure in substance, or the part that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product can be stored in a storage medium, e.g., a ROM/RAM, a magnetic disk, or an optical disk, and can include multiple instructions causing a computing device, e.g., a mobile phone, a computer, a server, an air conditioner, a network device, etc., to execute all or part of the methods as described in various embodiments herein.
[0116] The foregoing specification merely depicts some exemplary embodiments of the present disclosure and therefore is not intended to limit the scope of the disclosure. Any equivalent structural or flow transformations made in light of the disclosure, or any direct or indirect applications of the disclosure in other related fields, shall all fall within the protection scope of the disclosure.
[0117] It will be understood that the term "comprise" and any of its derivatives (e.g., comprises, comprising) as used in this specification is to be taken to be inclusive of the features to which it refers, and is not meant to exclude the presence of any additional features unless otherwise stated or implied.
[0118] The reference to any prior art in this specification is not, and should not be taken as, an acknowledgement or any form of suggestion that such prior art forms part of the common general knowledge.
Claims (12)
1. A linked-list-based method for application caching management, the method comprising: creating a node in a linked list according to received application cached data and obtaining a memory size of the received application cached data; obtaining a maximum memory size and a currently occupied memory size of the linked list; adding the memory size of the received cached data and the currently occupied memory size of the linked list to obtain a first memory size; and adding the node to the linked list when the first memory size is smaller than or equal to the maximum memory size; obtaining identification information of cached data corresponding to a received cached data access request; and traversing the linked list to access the cached data corresponding to the access request based on the identification information.
2. The method according to claim 1, further comprising, after adding the memory size of the received cached data to the currently occupied memory size of the linked list: obtaining a time interval between last access time of each node in the linked list and present time, if the first memory size is larger than the maximum memory size; deleting the node having the largest time interval from the linked list; adding the memory size of the received cached data to the currently occupied memory size of the linked list after the node is deleted to derive a second memory size; and adding the created node to the linked list if the second memory size is smaller than or equal to the maximum memory size.
3. The method according to claim 1, further comprising, after traversing the linked list based on the identification information: setting the corresponding node of the accessed cached data as head node of the linked list.
4. The method according to claim 1, wherein data field of the node contains a survival duration of the node, and the method further comprises, after adding the node to the linked list: determining whether the node currently reaches the survival duration based on the time of creation of the node; and deleting the node if the node has reached the survival duration.
5. The method according to claim 2, wherein data field of the node contains a survival duration of the node, and the method further comprises, after adding the node to the linked list: determining whether the node currently reaches the survival duration based on the time of creation of the node; and deleting the node if the node has reached the survival duration.
6. The method according to claim 3, wherein data field of the node contains a survival duration of the node, and the method further comprises, after adding the node to the linked list: determining whether the node currently reaches the survival duration based on the time of creation of the node; and deleting the node if the node has reached the survival duration.
7. A linked-list-based device for application caching management, the device comprising: a creation module, configured to create a node in a linked list according to received application cached data and obtaining a memory size of the received application cached data; a first acquisition module, configured to obtain a maximum memory size and a currently occupied memory size of the linked list; a first computation module, configured to add the memory size of the received cached data and the currently occupied memory size of the linked list to obtain a first memory size; and a first addition module, configured to add the node to the linked list when the first memory size is smaller than or equal to the maximum memory size; a third acquisition module configured to obtain identification information of cached data corresponding to a received cached-data access request; and a traversal module configured to traverse the linked list to access the cached data corresponding to the access request based on the identification information.
8. The device according to claim 7, further comprising: a second acquisition module configured to obtain a time interval between last access time of each node in the linked list and present time, if the first memory size is larger than the maximum memory size; a first deletion module configured to delete the node having the largest time interval from the linked list; a second computation module configured to add the memory size of the received cached data to the currently occupied memory size of the linked list after the node is deleted to derive a second memory size; and a second addition module configured to add the created node to the linked list if the second memory size is smaller than or equal to the maximum memory size.
9. The device according to claim 7, further comprising: a setting module configured to set the corresponding node of the accessed cached data as head node of the linked list.
10. The device according to claim 7, wherein data field of the node contains a survival duration of the node, and the device further comprises: a determination module configured to determine whether the node currently reaches the survival duration based on the time of creation of the node; and a second deletion module configured to delete the node if the node has reached the survival duration.
11. The device according to claim 8, wherein data field of the node contains a survival duration of the node, and the device further comprises: a determination module configured to determine whether the node currently reaches the survival duration based on the time of creation of the node; and a second deletion module configured to delete the node if the node has reached the survival duration.
12. The device according to claim 9, wherein data field of the node contains a survival duration of the node, and the device further comprises: a determination module configured to determine whether the node currently reaches the survival duration based on the time of creation of the node; and a second deletion module configured to delete the node if the node has reached the survival duration.
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2016/076296 WO2017156683A1 (en) | 2016-03-14 | 2016-03-14 | Linked list-based application cache management method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
AU2016277745A1 (en) | 2017-09-28 |
AU2016277745B2 (en) | 2021-02-11 |
Family
ID=59851471
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
AU2016277745A Ceased AU2016277745B2 (en) | 2016-03-14 | 2016-03-14 | Linked-list-based method and device for application caching management |
Country Status (2)
Country | Link |
---|---|
AU (1) | AU2016277745B2 (en) |
WO (1) | WO2017156683A1 (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10241927B2 (en) * | 2016-03-14 | 2019-03-26 | Shenzhen Skyworth-Rgb Electronic Co., Ltd. | Linked-list-based method and device for application caching management |
CN109144892A (en) * | 2018-08-27 | 2019-01-04 | 南京国电南自轨道交通工程有限公司 | A kind of buffering linked list data structure design method of managing internal memory medium-high frequency delta data |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7536529B1 (en) * | 2005-06-10 | 2009-05-19 | American Megatrends, Inc. | Method, system, apparatus, and computer-readable medium for provisioning space in a data storage system |
CN103984639B (en) * | 2014-04-29 | 2016-11-16 | 宁波三星医疗电气股份有限公司 | A kind of dynamic memory distribution method |
CN105786723A (en) * | 2016-03-14 | 2016-07-20 | 深圳创维-Rgb电子有限公司 | Application cache management method and device based on linked list |
- 2016-03-14 WO PCT/CN2016/076296 patent/WO2017156683A1/en active Application Filing
- 2016-03-14 AU AU2016277745A patent/AU2016277745B2/en not_active Ceased
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104850507A (en) * | 2014-02-18 | 2015-08-19 | 腾讯科技(深圳)有限公司 | Data caching method and data caching device |
Also Published As
Publication number | Publication date |
---|---|
WO2017156683A1 (en) | 2017-09-21 |
AU2016277745A1 (en) | 2017-09-28 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
FGA | Letters patent sealed or granted (standard patent) | ||
MK14 | Patent ceased section 143(a) (annual fees not paid) or expired |