CN117130792A - Processing method, device, equipment and storage medium for cache object - Google Patents


Info

Publication number
CN117130792A
Authority
CN
China
Prior art keywords
memory
cached
reference data
cache
occupied memory
Prior art date
Legal status
Granted
Application number
CN202311397972.4A
Other languages
Chinese (zh)
Other versions
CN117130792B (en
Inventor
苏志林
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202311397972.4A priority Critical patent/CN117130792B/en
Publication of CN117130792A publication Critical patent/CN117130792A/en
Application granted granted Critical
Publication of CN117130792B publication Critical patent/CN117130792B/en
Legal status: Active


Classifications

    • G06F9/5016 — Allocation of resources to service a request, the resource being the memory
    • G06F16/24552 — Database cache management
    • G06F9/5022 — Mechanisms to release resources


Abstract

The application discloses a processing method, apparatus, device and storage medium for cache objects, relating to the field of computer technology. The method comprises: obtaining the occupied memory of an object to be cached, the occupied memory indicating the memory the object to be cached needs to occupy; when the occupied memory is less than or equal to the memory threshold corresponding to a cache container, caching the object to be cached into the cache container and obtaining the total occupied memory of the cache container; when the total occupied memory is greater than the memory threshold, releasing the reference data at the tail of the access order list corresponding to the cache container and obtaining the updated total occupied memory of the cache container, the access order list recording the access order of cached objects based on their reference data; and stopping releasing reference data when the updated total occupied memory is less than or equal to the memory threshold. In the embodiments of the application, cached objects in the cache are eliminated according to their occupied memory, so that the total occupied memory of the cache can be controlled.

Description

Processing method, device, equipment and storage medium for cache object
Technical Field
The embodiments of the application relate to the field of computer technology, and in particular to a processing method, apparatus, device and storage medium for cache objects.
Background
A cache elimination policy is a policy that, when the number of cached objects (such as data, resources, etc.) in a cache reaches a certain threshold, deletes some of the cached objects according to a rule in order to maintain the cache's capacity and performance; it plays an important role in cache management.
Taking a CDN (Content Delivery Network) map cache (images stored in a CDN) as an example, the related art performs cache management with a first-loaded, first-eliminated policy. For example, the related art uses a list that can cache at most 10 maps as the cache container: if a newly loaded CDN map is not already in the cache (i.e., the cache container), the new CDN map is loaded into the cache, and if the cache then holds more than 10 maps, the map that was loaded earliest is eliminated.
However, the memory occupied by maps of different resolutions differs enormously, so limiting the capacity of the cache by the number of maps causes large fluctuations in the cache's total occupied memory.
Disclosure of Invention
The embodiments of the application provide a processing method, apparatus, device and storage medium for cache objects, which make the total occupied memory of the cache controllable and thereby improve the stability of the cache's total occupied memory. The technical solution comprises the following.
According to one aspect of the embodiments of the application, a processing method for cache objects is provided, the method comprising:
obtaining the occupied memory of an object to be cached, the occupied memory indicating the memory the object to be cached needs to occupy;
when the occupied memory is less than or equal to the memory threshold corresponding to a cache container, caching the object to be cached into the cache container and obtaining the total occupied memory of the cache container, the total occupied memory being the sum of the occupied memory of the cached objects in the cache container whose reference data has not been released, where reference data is used to reference a cached object;
when the total occupied memory is greater than the memory threshold, releasing the reference data at the tail of the access order list corresponding to the cache container and obtaining the updated total occupied memory of the cache container, the access order list recording the access order of cached objects based on their reference data;
and stopping releasing reference data when the updated total occupied memory is less than or equal to the memory threshold.
According to one aspect of the embodiments of the application, a processing apparatus for cache objects is provided, the apparatus comprising:
an occupied memory acquisition module, configured to obtain the occupied memory of an object to be cached, the occupied memory indicating the memory the object to be cached needs to occupy;
a total memory acquisition module, configured to cache the object to be cached into a cache container when the occupied memory is less than or equal to the memory threshold corresponding to the cache container, and to obtain the total occupied memory of the cache container, the total occupied memory being the sum of the occupied memory of the cached objects in the cache container whose reference data has not been released, where reference data is used to reference a cached object;
a reference data release module, configured to release the reference data at the tail of the access order list corresponding to the cache container when the total occupied memory is greater than the memory threshold, and to obtain the updated total occupied memory of the cache container, the access order list recording the access order of cached objects based on their reference data;
the reference data release module being further configured to stop releasing reference data when the updated total occupied memory is less than or equal to the memory threshold.
According to one aspect of the embodiments of the application, a computer device is provided, comprising a processor and a memory in which a computer program is stored, the computer program being loaded and executed by the processor to implement the above processing method for cache objects.
According to one aspect of the embodiments of the application, a computer-readable storage medium is provided, in which a computer program is stored, the computer program being loaded and executed by a processor to implement the above processing method for cache objects.
According to one aspect of the embodiments of the application, a computer program product is provided, comprising a computer program stored in a computer-readable storage medium. A processor of a computer device reads the computer program from the computer-readable storage medium and executes it, so that the computer device performs the above processing method for cache objects.
The technical scheme provided by the embodiment of the application can have the following beneficial effects.
When the total occupied memory of the cache container exceeds the memory limit, reference data of cached objects is released, reducing the total occupied memory of the cache container. Limiting the cache container in units of occupied memory solves the problem that the occupied memory is uncontrollable when the cache container is limited by the number of cached objects; with occupied memory as the unit, the total occupied memory of the cache container can be controlled precisely, improving the stability of the cache container's total occupied memory.
Drawings
To describe the technical solutions of the embodiments of the application more clearly, the drawings required for the description of the embodiments are briefly introduced below. Evidently, the drawings described below show only some embodiments of the application, and a person skilled in the art can derive other drawings from them without inventive effort.
FIG. 1 is a schematic illustration of an implementation environment for an embodiment of the present application;
FIG. 2 is a flowchart of a method for processing a cache object according to an embodiment of the present application;
FIG. 3 is a schematic illustration of a game play interface provided by one embodiment of the present application;
FIG. 4 is a schematic diagram of an access order list provided by one embodiment of the application;
FIGS. 5-9 are schematic diagrams of access order list changes provided by one embodiment of the present application;
FIG. 10 is a schematic diagram of released reference data being added to a weak reference pool provided by one embodiment of the application;
FIG. 11 is a schematic diagram of reference data being restored to an access order list from a weak reference pool provided by one embodiment of the application;
FIG. 12 is a flowchart of a method for processing a cache object according to another embodiment of the present application;
FIG. 13 is a flowchart of a method for processing a cache object according to still another embodiment of the present application;
FIG. 14 is a diagram illustrating a change in total occupied memory of a cache container according to an embodiment of the present application;
FIG. 15 is a schematic diagram illustrating a change in total occupied memory of a cache container according to an embodiment of the present application;
FIG. 16 is a block diagram of a processing device for caching objects according to one embodiment of the application;
FIG. 17 is a block diagram of a processing device for caching objects according to another embodiment of the application;
FIG. 18 is a block diagram of a computer device provided in one embodiment of the application.
Detailed Description
Before the embodiments of the application are described, the related terms used in the application are first explained.
1. CDN (Content Delivery Network): a service technology for accelerating the delivery of network content. In game development scenarios, CDNs are often used to store resource content that users need to download in real time.
2. In-engine resources: resources imported into the engine project during development (e.g., game development); they can be built by the engine and loaded through the engine's resource management framework at run time.
3. CDN map: an image stored in a CDN, as opposed to the in-engine resources above, i.e., an out-of-engine resource. A CDN map is downloaded and loaded in real time and is not imported into the engine project during development; it differs substantially from in-engine resources and cannot be managed by the engine's resource management framework. For example, some in-game pictures may need to support frequent real-time updates; a CDN map scheme (storing the images in the CDN) supports this without delivering a hot patch to the game client.
4. GC (Garbage Collection) mechanism: a managed-memory strategy. Instead of manually managing and releasing memory that is no longer needed, a developer can let the garbage collection mechanism find unreachable objects, collect them, and release their memory.
5. Reachability: in the garbage collection mechanism, all objects are traversed starting from the "roots" of memory. An object that can be reached from a root still holds references in the code, i.e., it is a reachable object; such objects are not collected during garbage collection. The remaining objects have no direct references in the code (weak references excepted) and are unreachable; their memory is released in the next garbage collection. An object may refer to data, a data heap, a data set, a resource set, and so on, which the embodiments of the application do not limit.
6. LRU (Least Recently Used) elimination policy: its core idea is that content accessed recently is more likely to be accessed again in the future; when the memory occupied by the cache exceeds a threshold, the least recently used content in the cache is eliminated first.
7. Weak reference: a reference that does not guarantee that the object it refers to will not be reclaimed by the garbage collector. If an object is referenced only by weak references, it may be reclaimed at any time. The nature of a weak reference is that it preserves access to an object without preventing that object from being reclaimed.
8. Weak reference pool: a list container of weak references, storing objects that are held by weak reference.
9. Reference: a pointer to an object in the cache that can be used to operate on the object. For example, when an object is created and assigned to a variable, the variable becomes a reference to the object. Two kinds of variables behave differently under assignment: 1) basic-type variables: an assignment directly overwrites the original data with the new value; 2) reference-type variables: an assignment only changes the object address stored in the variable, and the original object is reclaimed by the GC mechanism.
10. Cache: a hardware or software component embedded in an application or device memory that automatically and temporarily stores data used by the user. The cache may, for example, be a high-speed memory used for high-speed data exchange. The cache (noun) in the embodiments of the application may also be called a cache container.
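The weak-reference behavior described in terms 7 and 8 above can be illustrated concretely. The following sketch (in Python, chosen only for illustration; the patent does not name an implementation language, and the class name is hypothetical) shows that an object held only by a weak reference is reclaimed by the garbage collector, while the weak reference itself remains queryable:

```python
import gc
import weakref

class CachedObject:
    """Hypothetical stand-in for a cached object (e.g., a decoded CDN map)."""
    def __init__(self, key):
        self.key = key

obj = CachedObject("cdn_map_1")
wr = weakref.ref(obj)      # weak reference: does not keep obj alive
assert wr() is obj         # obj is still reachable via the strong reference

del obj                    # drop the last strong reference
gc.collect()               # force a collection (CPython frees it immediately anyway)
assert wr() is None        # the weak reference no longer resolves
```

A weak reference pool in the sense of term 8 is then simply a list container holding such weak references.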
To make the objects, technical solutions and advantages of the application clearer, the embodiments of the application are described in further detail below with reference to the accompanying drawings.
Referring to fig. 1, a schematic diagram of an implementation environment of an embodiment of the present application is shown. The implementation environment may include: terminal device 10, server 20 and server 30.
The terminal device 10 may be an electronic device such as a smartphone, tablet computer, notebook computer, desktop computer, smart speaker, smart watch, multimedia player, PC (Personal Computer), smart robot, vehicle-mounted terminal, or wearable device. A client of a target application may be installed on the terminal device 10; the target application may be, for example, a game application, browser application, social entertainment application, simulated learning application, shopping application, or video playback application, which the embodiments of the application do not limit. Optionally, a cache container is deployed in the terminal device 10 to support data caching.
The server 20 is used to provide background services for the client of the target application (e.g., a game application) in the terminal device 10. For example, the server 20 may be a background server of the above application. The server 20 may be a single server, a server cluster comprising a plurality of servers, or a cloud computing service center.
The server 30 may be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN, big data, and artificial intelligence platforms. Optionally, the server 30 may be used to store resource content that needs to be downloaded in real time while the target application is in use, such as the out-of-engine resources corresponding to a game application.
The terminal device 10 and the server 20, the terminal device 10 and the server 30, and the server 20 and the server 30 can communicate with each other via a network, which may be wired or wireless.
The technical solution provided by the embodiments of the application can be applied to any scenario requiring cache management (a cache elimination policy), such as cache management for applications and cache management for databases. It can improve the stability of the total occupied memory of the cache (i.e., the cache container).
Illustratively, take a terminal device on which a client of a game application is installed and running. When the client runs, if a CDN resource (i.e., an out-of-engine resource, such as the CDN map described above) is accessed for the first time, the terminal device downloads it from the corresponding CDN node (i.e., the server 30) and saves it to the terminal device's local disk, and the client then loads it from the local disk into the cache container corresponding to the engine for the engine to use. If the total occupied memory of the cache container is greater than the memory threshold, the terminal device eliminates cached objects in the cache container according to their occupied memory, so as to control the total occupied memory of the cache container.
The technical solution provided by the application is described and illustrated below through method embodiments.
Referring to fig. 2, which shows a flowchart of a processing method for cache objects according to an embodiment of the application. Each step of the method may be executed by the terminal device 10 in the implementation environment shown in fig. 1 (for example, by a client in the terminal device 10), and the method may include the following steps (201 to 204).
In step 201, the occupied memory of an object to be cached is obtained, the occupied memory indicating the memory the object to be cached needs to occupy.
The object to be cached is an object to be temporarily stored in a cache container. An object may refer to data such as a data item, data heap, data set, or resource set, for example the data corresponding to an image, video, or text, which the embodiments of the application do not limit. For an application, if an object is not included in the application's installation package, it may be determined to be an object to be cached after it is downloaded; for example, a CDN map for a game that is not among the in-engine resources may be determined to be an object to be cached. For a database, an accessed object may be determined to be an object to be cached; for example, the accessed object may be stored in the cache container corresponding to the database, and if the object is accessed again it can be returned directly from the cache container.
The embodiments of the application do not limit the source of the object to be cached: any object to be stored in the cache container may be called an object to be cached. The cache container is a hardware or software component embedded in an application or device memory that automatically and temporarily stores data used by the user.
For example, referring to fig. 3, which shows a schematic diagram of a game activity interface provided by an embodiment of the application. The game activity interface 300 includes a plurality of activity tabs 301, each of which may involve at least one CDN map. If the user triggers an activity tab 301, the client downloads the CDN map corresponding to that activity tab 301 from a CDN node through the terminal device; the downloaded CDN map is the object to be cached. The client may then store the CDN map in a local cache (e.g., the cache container corresponding to the game engine) so that the client can load and use it, for example to present it to the user via the game activity interface 300. If the user triggers more activity tabs 301, the terminal device needs to obtain multiple CDN maps (i.e., objects to be cached).
The occupied memory in the embodiments of the application may indicate the storage space required by the object to be cached. In one example, the client evaluates the occupied memory of the object to be cached with a memory evaluation function, which may be set and adjusted according to empirical values. Optionally, the memory evaluation function is related to the data size of the object to be cached; for an image, for example, the memory evaluation function may estimate the image's occupied memory based on its resolution. For example, a 100×100 image may correspond to 96KB of occupied memory, and a 1920×1080 image to 8100KB.
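A rough sketch of such a memory evaluation function is shown below. The function name and the 4-bytes-per-pixel assumption are illustrative, not from the patent; real engines add padding and mipmap overhead, which is presumably why the 100×100 figure above is 96KB rather than the naive ~39KB this formula yields.

```python
def estimate_image_memory_kb(width: int, height: int,
                             bytes_per_pixel: int = 4) -> float:
    """Estimate the occupied memory of an uncompressed RGBA image, in KB.

    Illustrative stand-in for the text's "memory evaluation function";
    the empirical function used in practice may add engine-specific
    padding or mipmap overhead on top of this naive formula.
    """
    return width * height * bytes_per_pixel / 1024

# For a 1920x1080 RGBA image this matches the figure quoted in the text:
# 1920 * 1080 * 4 / 1024 = 8100.0 KB
```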
Step 202: when the occupied memory is less than or equal to the memory threshold corresponding to the cache container, cache the object to be cached in the cache container and obtain the total occupied memory of the cache container. The total occupied memory is the sum of the occupied memory of the cached objects in the cache container whose reference data has not been released, where reference data is used to reference a cached object.
Reference data in the embodiments of the application may refer to the value stored in data of a reference type (i.e., the reference (noun) described above), such as the storage address of a cached object. Illustratively, the reference to a cached object may be a pointer to the starting storage address at which the cached object is stored, or a handle representing the cached object, which the embodiments of the application do not limit.
The reference to a cached object is used to reference the cached object according to the reference data; referencing (verb) may refer to the process of loading the cached object from the cache container according to the reference data.
In the embodiments of the application, releasing the reference data is equivalent to releasing the reference: the cache container no longer holds the reference. The occupied memory of a cached object whose reference has been released no longer belongs to the cache container, while the occupied memory of a cached object whose reference has not been released does; the total occupied memory of the cache container is therefore the sum of the occupied memory of the cached objects corresponding to all the reference data held by the cache container. Optionally, after the object to be cached is cached in the cache container, the cache container holds its reference data; the object may then be called a cached object, and the total occupied memory of the cache container includes its occupied memory.
Caching (verb) may refer to a process of temporary storage; for example, caching an object to be cached in a cache container may refer to temporarily storing the object in the cache container.
The memory threshold is used to limit the total occupied memory of the cache container. The embodiments of the application do not limit its value; it can be set and adjusted according to actual usage requirements (such as the memory occupied by the application and the memory of the computer device). For example, on a computer device with 2G of memory, the memory threshold may be set to 50000KB; with 3G of memory, to 80000KB; and with 4G of memory, to 100000KB.
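The device-memory-to-threshold mapping above might be expressed as a simple lookup. This is a hypothetical sketch (the names and the fallback rule are illustrative); only the three example values come from the text.

```python
# Example thresholds from the text, keyed by device memory in GB.
MEMORY_THRESHOLDS_KB = {2: 50_000, 3: 80_000, 4: 100_000}

def memory_threshold_kb(device_ram_gb: int) -> int:
    """Return the threshold for the largest configured tier <= device RAM,
    falling back to the smallest tier for low-memory devices."""
    tiers = [gb for gb in MEMORY_THRESHOLDS_KB if gb <= device_ram_gb]
    if not tiers:
        return min(MEMORY_THRESHOLDS_KB.values())
    return MEMORY_THRESHOLDS_KB[max(tiers)]
```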
In one example, the embodiments of the application may employ the LRU elimination policy to manage the cached objects in the cache container.
Optionally, based on the LRU elimination policy, the client builds an access order list for the cache container. The node elements in the access order list are built from the reference data of the cached objects; that is, the access order list may be used to record the access order of the cached objects based on their reference data. The access order list is built on the following principle: the reference data of the most recently invoked cached object is moved to the head of the access order list, and the reference data of the most recently cached object is added to the head of the access order list.
For example, referring to fig. 4, the access order list corresponding to the cache container is implemented as a doubly linked list 401, which distinguishes a head, corresponding to the most recently added or accessed element node, from a tail, corresponding to the least recently accessed element node. The information corresponding to an element node may include the reference data (i.e., the value), the index (i.e., the key) of the reference data, the position of the element node, and the occupied memory of the cached object corresponding to the reference data. Element node 1, element node 2, and element node 3 are constructed and added to the doubly linked list 401 in sequence, yielding the doubly linked list 401 in fig. 4.
The node elements in the doubly linked list 401 can be quickly indexed through a separate hash table 402, which is constructed from the indexes of the reference data; the ordering of the indexes in the hash table 402 is consistent with their order in the doubly linked list 401.
When the client needs to reference a cached object, it queries the hash table 402. If the index of the cached object's reference data exists in the hash table 402, the reference data of the cached object can be quickly located in the access order list 401 according to the index, so that the cached object can be referenced.
When an element node in the access order list is accessed, it is moved to the head of the access order list. For example, referring to fig. 5, if the cached object corresponding to element node 1 is accessed, element node 1 is moved to the head of the doubly linked list 401, and element node 2 is then at the tail of the doubly linked list 401.
When an object to be cached is cached, its corresponding element node is added to the head of the access order list. For example, referring to fig. 6, when an object to be cached is cached, the client creates a new element node 4 for it based on its corresponding reference data and adds element node 4 to the head of the doubly linked list 401 (for illustration only; memory limits are not considered here).
If an element node has neither been recently added nor recently accessed, it drifts to the tail of the access order list without manual intervention, and the cached object corresponding to the tail is eliminated first. In the embodiments of the application, moving an element node does not affect the storage address of the corresponding cached object; it only indicates how recently the cached object was accessed.
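The head/tail maintenance described for figs. 4-6 can be sketched with Python's `collections.OrderedDict`, which internally pairs a hash table with a doubly linked list, mirroring the combination of the hash table 402 and the doubly linked list 401. The node names are illustrative, taken from the figures:

```python
from collections import OrderedDict

# "Head" of the access order list = the most recently used end.
order = OrderedDict()

# Caching node1..node3 in sequence adds each to the head (fig. 4).
for key in ("node1", "node2", "node3"):
    order[key] = f"ref({key})"          # value stands in for the reference data
    order.move_to_end(key, last=False)  # new entry -> head

# Accessing node1 moves it back to the head (fig. 5).
order.move_to_end("node1", last=False)

# node2 is now the least recently used -> at the tail, first to be eliminated.
assert next(reversed(order)) == "node2"
```

This mirrors fig. 5, where element node 1 sits at the head after being accessed and element node 2 ends up at the tail.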
In one example, when the occupied memory of an object to be cached is greater than the memory threshold, that is, its data size is too large, the client discards the object rather than caching it, thereby avoiding an adverse impact on the performance of the cache container.
Step 203, releasing the reference data at the tail of the access sequence list corresponding to the cache container under the condition that the total occupied memory is greater than the memory threshold value, and obtaining the updated total occupied memory of the cache container, where the access sequence list is used for recording the access sequence of the cached object based on the reference data.
If the total occupied memory of the cache container is greater than the memory threshold, it can be determined that the memory used by the cache container after caching the object to be cached exceeds the memory limit. The occupied memory of the cache container therefore needs to be reduced, so that the total occupied memory of the cache container is controlled at or below the memory threshold.
The updated total occupied memory of the cache container is the total occupied memory after the reference data at the tail is released. Since a cached object whose reference data has been released no longer belongs to the cache container, its occupied memory is excluded from the updated total occupied memory.
For example, referring to fig. 7, the memory threshold corresponding to the cache container is 3, and the access order list 700 includes, in order from head to tail, element node 4, element node 3, element node 2, and element node 1. The cached object corresponding to element node 4 (i.e., the object to be cached) is newly cached in the cache container. The occupied memory of the cached object corresponding to element node 4 is 2, and the occupied memory corresponding to each of element node 1, element node 2 and element node 3 is 1. At this point the total occupied memory of the cache container is 2+1+1+1=5>3, so element node 1 at the tail needs to be released, after which the updated total occupied memory of the cache container is 2+1+1=4.
Optionally, the objects in the embodiment of the application are managed by a garbage collection mechanism; that is, unreachable objects in the cache container are found and collected by the garbage collector, and the memory occupied by a collected cached object is released from the cache container. If the reference data of a cached object has been released but the object has not yet been reclaimed by the garbage collector, the object's occupied memory is not yet freed; however, that occupied memory no longer counts toward the cache container.
Step 204, stopping releasing the reference data under the condition that the updated total occupied memory is less than or equal to the memory threshold.
If the updated total occupied memory of the cache container is less than or equal to the memory threshold, it can be determined that the memory used by the cache container after evicting the cached object corresponding to the tail reference data does not exceed the memory limit, so no further reference data needs to be released. The final total occupied memory of the cache container is thus less than or equal to the memory threshold, making the total occupied memory of the cache container controllable.
For example, referring to fig. 8, element node 4 is added to the head of the access order list 800. If the occupied memory of the cached object corresponding to the newly added element node 4 is 1 and the memory threshold of the cache container is 3, then after element node 1 is released, the updated total occupied memory of the cache container is 3, i.e., equal to the memory threshold, and the client may stop releasing reference data.
Optionally, under the condition that the total occupied memory after updating of the cache container is greater than the memory threshold, releasing the reference data located at the tail of the access sequence list is continued until the total occupied memory after updating of the cache container is less than or equal to the memory threshold.
For example, referring to fig. 7 and 9, since the updated total occupied memory of the cache container is 2+1+1=4>3, element node 2 at the tail of the access order list is released as well, and the total occupied memory of the cache container is updated to 2+1=3. The updated total occupied memory of the cache container is now equal to the memory threshold, so the client may stop releasing reference data.
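The caching-then-eviction procedure of steps 202 through 204, including the continued release when the updated total still exceeds the threshold, can be sketched as follows. This is a hedged Python illustration with assumed names, using an `OrderedDict` whose front represents the head of the access order list.

```python
from collections import OrderedDict

def cache_and_evict(order_list, total, key, value, occupied, threshold):
    """Illustrative sketch of steps 202-204. `order_list` is an
    OrderedDict whose front is the head; each value is
    (reference_data, occupied_memory). Returns the updated total and
    the keys whose reference data was released from the tail."""
    if occupied > threshold:
        return total, []                     # object too large: discard it
    order_list[key] = (value, occupied)      # cache the object...
    order_list.move_to_end(key, last=False)  # ...at the head of the list
    total += occupied
    released = []
    while total > threshold:                 # release tail nodes until the
        tail_key, (_, tail_mem) = order_list.popitem(last=True)
        released.append(tail_key)            # released data would go on to
        total -= tail_mem                    # the weak reference pool
    return total, released
```

Running this with the numbers of figs. 7 and 9 (threshold 3, three nodes of size 1, a new node of size 2) releases node 1 and then node 2, leaving the total at 3.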
In summary, according to the technical scheme provided by the embodiment of the application, the reference data of cached objects is released when the total occupied memory of the cache container exceeds the memory limit, thereby reducing the total occupied memory of the cache container. Limiting the cache container in units of occupied memory solves the problem that the occupied memory is uncontrollable when the cache container is limited by the number of cached objects: the total occupied memory of the cache container can be accurately controlled, which improves the stability of the total occupied memory of the cache container.
In some embodiments, consider the released reference data in the above embodiments. Since the cache container no longer holds the reference data, when the cached object corresponding to that reference data is accessed again, the client cannot find the reference data in the access order list; it therefore loads the cached object again, caches it in the cache container, and creates a new element node for it. If the memory occupied by the original cached object has not yet been released, the object then occupies two portions of memory in the cache container; by extension, a large number of duplicate cached objects may accumulate in the cache container, wasting memory resources.
In order to avoid wasting memory resources, the embodiment of the present application may further include the following for the released reference data in the above embodiment.
The client also adds the released reference data to the weak reference pool; that is, the client adds the element node corresponding to the reference data to the weak reference pool. Since objects in the weak reference pool are held through weak references, the reference mode of the released reference data is switched to a weak reference; that is, the reference corresponding to the reference data becomes a weak reference.
Optionally, a cached object corresponding to reference data held in weak reference mode may be reclaimed, yet remains accessible until it is. For example, from the garbage collector's point of view, if a cached object is only weakly referenced and not strongly referenced, the cached object is treated as unreachable, and the next garbage collection will reclaim it and free its occupied memory; until then, however, code can still access the cached object through the weak reference.
For example, referring to fig. 9 and 10, element node 1 and element node 2 released in the access order list 700 are temporarily stored in the weak reference pool 1000, and element node 1 and element node 2 are referenced by the weak reference pool 1000 in a weak reference manner.
For released reference data, when the corresponding cached object is accessed again, the cache container no longer holds the reference data, but the released reference data can still be found in the weak reference pool. The cached object can thus be accessed through the released reference data without loading the object again or allocating memory for it again.
Based on the above, the embodiment of the present application may further include the following: in the case that the first cached object corresponding to the first reference data in the weak reference pool is referenced by the client, if the first cached object has not been reclaimed, the first reference data is moved back to the head of the access order list.
Wherein the first reference data may refer to any released reference data in the weak reference pool. The first reference data is used to reference a first cached object. In the event that the first cached object is not reclaimed, if the first cached object is accessed again, the first reference data may be re-added to the head of the access order list without having to re-create a new element node for the first cached object.
For example, referring to fig. 11, the weak reference pool 900 originally includes element node 2 and element node 1. If the cached object corresponding to element node 1 has not been reclaimed and is accessed again, element node 1 may be added directly to the head of the access order list 700 and deleted from the weak reference pool 900. The client can then query element node 1 from the access order list 700 again, so that the cached object corresponding to element node 1 can be called again. Because that cached object was never reclaimed, it does not need to be loaded and allocated memory again (it occupies only one portion of memory), thereby avoiding the waste of memory resources. In addition, after element node 1 is moved back to the access order list 700, if the total occupied memory of the cache container exceeds the memory threshold, element node 3 at the tail of the access order list 700 needs to be released, and the released element node 3 may be temporarily stored in the weak reference pool 900.
Optionally, if the first cached object is reclaimed, the memory occupied by the first cached object in the cache container is released, and the first reference data is deleted. That is, when the first reference cache object is needed again, the client needs to reload the first reference cache object and cache into the cache container.
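The weak reference pool can be illustrated with Python's `weakref` module as below. This is a minimal sketch under the assumption of a CPython-like collector, where an object with no remaining strong references is reclaimed promptly; the names are illustrative, not from the patent.

```python
import weakref

class CachedObject:
    """Hypothetical stand-in for a cached object (e.g. a decoded image)."""
    def __init__(self, name):
        self.name = name

weak_pool = {}  # key -> weakref.ref: the "weak reference pool"

def release_to_pool(key, obj):
    # Switch the released reference data to a weak reference: the pool
    # alone will not keep the object alive for the garbage collector.
    weak_pool[key] = weakref.ref(obj)

def revive(key):
    # Return the cached object if it has not been reclaimed, else None.
    # The entry leaves the pool either way, mirroring the deletion of
    # reclaimed reference data described above.
    ref = weak_pool.pop(key, None)
    return ref() if ref is not None else None
```

If `revive` returns an object, the caller can move its element node back to the head of the access order list; if it returns `None`, the object must be reloaded and cached anew.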
In summary, according to the technical scheme provided by the embodiment of the application, the released reference data is temporarily stored in the weak reference pool, which prevents the corresponding cached object from being repeatedly loaded and repeatedly allocated memory. This effectively reduces the frequency of memory allocation and avoids the waste of memory resources.
In some embodiments, taking CDN graph cache as an example, the technical solutions provided by the embodiments of the present application are described, and referring to fig. 12 and fig. 13, the details may include the following.
1. An attempt is made to load the CDN map.
The method comprises the steps that a client side responds to triggering operation of a user on a CDN graph, a CDN graph acquisition request is sent to a CDN node (server), the CDN node determines the CDN graph according to identification information in the CDN graph acquisition request, and the CDN graph is sent to terminal equipment where the client side is located. After the terminal equipment obtains the CDN diagram, the CDN diagram is stored in a local hard disk, and the client loads the CDN diagram into a cache container corresponding to the client from the local hard disk. For example, when a user opens an active tab in a game activity interface, where the active tab includes a CDN map, the client automatically generates and sends a CDN map acquisition request to download the CDN map, and the client attempts to load the downloaded CDN map into a cache container.
2. And acquiring the occupied memory of the CDN graph.
And the client evaluates the occupied memory of the CDN graph according to the memory evaluation function to obtain the memory required to be occupied by the CDN graph. Alternatively, if there is no data in the local hard disk of the terminal device, downloading is required, and the data in the local hard disk that needs to be cached in the cache container may be referred to as an object to be cached.
3. The client judges whether the occupied memory of the CDN graph is larger than a memory threshold value or not.
The memory threshold is a memory threshold of a cache container corresponding to a client (such as a game engine), and may be set according to a user requirement.
4. If the occupied memory of the CDN graph is larger than the memory threshold, the CDN graph is directly abandoned.
5. If the occupied memory of the CDN graph is smaller than or equal to the memory threshold, the CDN graph is cached in the cache container.
6. The client judges whether the total occupied memory of the cache container is larger than a memory threshold value.
7. If the total occupied memory of the cache container is greater than the memory threshold, the element node located at the tail of the access order list of the cache container is discarded, that is, the reference data at the tail of the access order list is released; step 6 is then re-executed until the total occupied memory of the cache container is less than or equal to the memory threshold.
Thus, the cache management of a CDN graph can be completed. How the client references the CDN map from the cache container will be described below.
1. Attempts were made to reference the CDN map.
The client tries to reference the CDN map from its corresponding cache container, e.g., the client sends a CDN map load instruction to pull the CDN map from the cache container.
2. And judging whether the reference data of the CDN graph exists in the access sequence list.
And the client queries the access sequence list of the cache container according to the key to determine whether the access sequence list contains the reference data of the CDN map.
3. If the reference data of the CDN map exists in the access order list, the client can load and return the CDN map according to the reference data.
Optionally, if the same key exists in the access sequence list, it may be determined that reference data of the CDN map exists in the access sequence list, where the reference data may include a storage address of the CDN map, and the client may obtain the CDN map according to the storage address of the CDN map.
4. If the access sequence list does not contain the reference data of the CDN graph, the client judges whether the weak reference pool contains the reference data of the CDN graph or not.
Optionally, the client queries the weak reference pool of the cache container according to the key to determine whether the reference data of the CDN map exists in the weak reference pool.
5. In the case that there is reference data of the CDN map in the weak reference pool, the client rejoins the access order list with the reference data.
If the reference data of the CDN map exists in the weak reference pool, it can be determined that the reference data has been released by the cache container but the CDN map has not been reclaimed, so the reference data can be reused to avoid reloading. The CDN map at this point is weakly referenced, so it can still be accessed by the client. Optionally, after downloading the CDN map, the terminal device may store it on the local hard disk, so that the client can load the CDN map directly from the local hard disk without the terminal device re-downloading it.
6. The client deletes the reference data of the CDN map from the weak reference pool.
7. The client may then load and return the CDN map based on the reference data.
8. In the case that there is no reference data for the CDN map in the weak reference pool, the client reloads the CDN map and caches it into the cache container for subsequent loading.
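The lookup flow above (order list first, then weak reference pool, then reload) can be sketched as follows. This is a hedged Python illustration in which the helper names, the `(object, occupied)` entry shape, and the `load_fn` callback standing in for reloading from the local hard disk are all assumptions.

```python
import weakref
from collections import OrderedDict

class Img:
    """Hypothetical stand-in for a decoded CDN map."""

def reference_cdn_map(order_list, weak_pool, key, load_fn):
    """Illustrative sketch of the lookup flow (steps 1-8).
    `order_list`: OrderedDict with the head at the front, values are
    (object, occupied_memory); `weak_pool`: key -> weakref.ref."""
    if key in order_list:                        # steps 2-3: order-list hit
        order_list.move_to_end(key, last=False)
        return order_list[key][0]
    ref = weak_pool.get(key)                     # step 4: check weak pool
    obj = ref() if ref is not None else None
    if obj is not None:                          # steps 5-7: revive entry
        del weak_pool[key]
        order_list[key] = (obj, 1)               # occupied memory assumed 1
        order_list.move_to_end(key, last=False)
        return obj
    return load_fn(key)                          # step 8: reload the map
```

A revived entry rejoins the head of the order list and leaves the weak pool, so the object is held strongly again without a second load or a second memory allocation.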
In some embodiments, in the related art, when the cache container is fully loaded, its total occupied memory fluctuates within the range of 390KB to 81000KB, so accurate control of the total occupied memory of the cache container cannot be achieved. By contrast, for the reloading scenario in which a cached object's reference data has been released but the object has not been reclaimed, the technical scheme provided by the embodiment of the application also effectively curbs memory growth.
Taking a game application program as an example, curve 1401 in fig. 14 illustrates test data for the total occupied memory of the cache container under the related-art scheme as a user traverses all active tabs multiple times in a game's activity user interface (each active tab involves loading a CDN map), and curve 1501 in fig. 15 illustrates the corresponding test data for the technical scheme provided by the embodiment of the application.
As shown by curve 1401, the user begins traversing the active tabs at 1:08 and stops at 2:33. The total occupied memory of the cache container grows from 838MB to 1317MB, trending upward throughout without leveling off. After garbage collection, the total occupied memory of the cache container drops by 350MB to 967MB; that 350MB includes a large number of repeatedly loaded CDN maps whose references had been released but which had not yet been reclaimed.
As shown by curve 1501, the user begins traversing the active tabs at 1:10 and stops at 4:00. The total occupied memory of the cache container stabilizes after increasing from 835MB to 1042MB, with no obvious further growth; that is, after the first round of traversing the active tabs, subsequent traversals do not cause large amounts of memory allocation. After garbage collection, the total occupied memory of the cache container drops by 80MB, falling back to 962MB. Therefore, the technical scheme provided by the embodiment of the application can effectively control the total occupied memory of the cache container and can effectively avoid the waste of memory resources.
In summary, according to the technical scheme provided by the embodiment of the application, the reference data of cached objects is released when the total occupied memory of the cache container exceeds the memory limit, thereby reducing the total occupied memory of the cache container. Limiting the cache container in units of occupied memory solves the problem that the occupied memory is uncontrollable when the cache container is limited by the number of cached objects: the total occupied memory of the cache container can be accurately controlled, which improves the stability of the total occupied memory of the cache container.
In addition, temporarily storing the released reference data in the weak reference pool prevents the corresponding cached object from being repeatedly loaded and repeatedly allocated memory, which effectively reduces the frequency of memory allocation and avoids the waste of memory resources.
The following are examples of the apparatus of the present application that may be used to perform the method embodiments of the present application. For details not disclosed in the embodiments of the apparatus of the present application, please refer to the embodiments of the method of the present application.
Referring to fig. 16, a block diagram of a processing apparatus for buffering objects according to an embodiment of the present application is shown. The device has the function of realizing the method example, and the function can be realized by hardware or can be realized by executing corresponding software by hardware. The device may be the terminal device described above, or may be provided in the terminal device. As shown in fig. 16, the apparatus 1600 includes: an occupied memory fetch module 1601, a total memory fetch module 1602, and a reference data release module 1603.
The occupied memory acquiring module 1601 is configured to acquire occupied memory of an object to be cached, where the occupied memory is used to indicate a memory that the object to be cached needs to occupy.
A total memory obtaining module 1602, configured to cache the object to be cached into the cache container when the occupied memory is less than or equal to a memory threshold corresponding to the cache container, and to obtain the total occupied memory of the cache container; the total occupied memory refers to the sum of the occupied memory of cached objects in the cache container whose reference data has not been released, where the reference data is used for referencing the cached objects.
And the reference data releasing module 1603 is configured to release reference data located at the tail of an access sequence list corresponding to the cache container, where the total occupied memory is greater than the memory threshold, to obtain updated total occupied memory of the cache container, and the access sequence list is used to record the access sequence of the cached object based on the reference data.
The reference data releasing module 1603 is further configured to stop releasing the reference data if the updated total occupied memory is less than or equal to the memory threshold.
In some embodiments, the reference data release module 1603 is further configured to, in a case where the updated total occupied memory is greater than the memory threshold, continue to release the reference data located at the tail of the access order list until the updated total occupied memory of the cache container is less than or equal to the memory threshold.
In some embodiments, as shown in fig. 17, the apparatus 1600 further comprises: reference data transfer module 1604.
The reference data transfer module 1604 is used for adding the released reference data to a weak reference pool; the object in the weak reference pool is held in a weak reference mode, the cached object corresponding to the reference data in the weak reference mode has possibility of being recycled, and the cached object corresponding to the reference data in the weak reference mode has accessibility.
In some embodiments, as shown in fig. 17, the apparatus 1600 further comprises: reference data recovery module 1605.
The reference data recovery module 1605 is configured to, when the first cached object corresponding to the first reference data in the weak reference pool is referenced, move the first reference data back to the head of the access order list if the first cached object has not been reclaimed.
In some embodiments, as shown in fig. 17, the apparatus 1600 further comprises: reference data deletion module 1606.
A reference data deleting module 1606, configured to release the memory occupied by the first cached object in the cache container if the first cached object is reclaimed, and delete the first reference data.
In some embodiments, as shown in fig. 17, the apparatus 1600 further comprises: the cache object discard module 1607.
And a cache object discarding module 1607, configured to discard the object to be cached if the occupied memory is greater than the memory threshold.
In some embodiments, as shown in fig. 17, the apparatus 1600 further comprises: the object management module 1608 is cached.
A cache object management module 1608, configured to manage cached objects in the cache container by using a least recently used LRU elimination policy; wherein reference data of the recently invoked cached object is moved to the head of the access order list; alternatively, the reference data of the cached object that was recently cached is added to the head of the access order list.
In summary, according to the technical scheme provided by the embodiment of the application, the reference data of cached objects is released when the total occupied memory of the cache container exceeds the memory limit, thereby reducing the total occupied memory of the cache container. Limiting the cache container in units of occupied memory solves the problem that the occupied memory is uncontrollable when the cache container is limited by the number of cached objects: the total occupied memory of the cache container can be accurately controlled, which improves the stability of the total occupied memory of the cache container.
It should be noted that, in the apparatus provided in the foregoing embodiment, when implementing the functions thereof, only the division of the foregoing functional modules is used as an example, in practical application, the foregoing functional allocation may be implemented by different functional modules, that is, the internal structure of the device is divided into different functional modules, so as to implement all or part of the functions described above. In addition, the apparatus and the method embodiments provided in the foregoing embodiments belong to the same concept, and specific implementation processes of the apparatus and the method embodiments are detailed in the method embodiments and are not repeated herein.
Referring to FIG. 18, a block diagram of a computer device according to one embodiment of the present application is shown. The computer device may be configured to implement the method for processing a cache object provided in the foregoing embodiment. Specifically, the following may be included.
The computer device 1800 includes a central processing unit (such as a CPU (Central Processing Unit, central processing unit), a GPU (Graphics Processing Unit, graphics processor), an FPGA (Field Programmable Gate Array ), etc.) 1801, a system Memory 1804 including a RAM (Random-Access Memory) 1802 and a ROM (Read-Only Memory) 1803, and a system bus 1805 connecting the system Memory 1804 and the central processing unit 1801. The computer device 1800 also includes a basic input/output system (Input Output System, I/O system) 1806, which facilitates the transfer of information between various devices within the computer device, and a mass storage device 1807 for storing an operating system 1813, application programs 1814, and other program modules 1815.
The basic input/output system 1806 includes a display 1808 for displaying information and an input device 1809, such as a mouse, keyboard, etc., for user input of information. Wherein the display 1808 and the input device 1809 are coupled to the central processing unit 1801 via an input output controller 1810 coupled to the system bus 1805. The basic input/output system 1806 can also include an input/output controller 1810 for receiving and processing input from a number of other devices, such as a keyboard, mouse, or electronic stylus. Similarly, the input output controller 1810 also provides output to a display screen, a printer, or other type of output device.
The mass storage device 1807 is connected to the central processing unit 1801 through a mass storage controller (not shown) connected to the system bus 1805. The mass storage device 1807 and its associated computer-readable media provide non-volatile storage for the computer device 1800. That is, the mass storage device 1807 may include a computer readable medium (not shown) such as a hard disk or CD-ROM (Compact Disc Read-Only Memory) drive.
Without loss of generality, the computer readable medium may include computer storage media and communication media. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes RAM, ROM, EPROM (Erasable Programmable Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), flash Memory or other solid state Memory technology, CD-ROM, DVD (Digital Video Disc, high density digital video disc) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices. Of course, those skilled in the art will recognize that the computer storage medium is not limited to the ones described above. The system memory 1804 and mass storage 1807 described above may be referred to collectively as memory.
The computer device 1800 may also operate in accordance with embodiments of the application, through a network, such as the internet, to remote computers connected to the network. I.e., the computer device 1800 may connect to the network 1812 through a network interface unit 1811 connected to the system bus 1805, or other types of networks or remote computer systems (not shown), using the network interface unit 1811.
The memory also includes a computer program stored in the memory and configured to be executed by the one or more processors to implement the method of processing a cache object described above.
In some embodiments, a computer readable storage medium is also provided, in which a computer program is stored, which when executed by a processor of a computer device, implements the above-mentioned method of processing a cache object.
Alternatively, the computer-readable storage medium may include: ROM (Read-Only Memory), RAM (Random-Access Memory), SSD (Solid State Drives, solid State disk), optical disk, or the like. The random access memory may include ReRAM (Resistance Random Access Memory, resistive random access memory) and DRAM (Dynamic Random Access Memory ), among others.
In some embodiments, a computer program product is also provided, the computer program product comprising a computer program stored in a computer readable storage medium. A processor of a computer device reads the computer program from the computer-readable storage medium, and the processor executes the computer program so that the computer device executes the processing method of the cache object.
It should be noted that, in the embodiment of the present application, before and during the process of collecting the relevant data of the user, a prompt interface, a popup window or output voice prompt information may be displayed, where the prompt interface, the popup window or the voice prompt information is used to prompt the user to collect the relevant data currently, so that the present application only starts to execute the relevant step of obtaining the relevant data of the user after obtaining the confirmation operation of the user on the prompt interface or the popup window, otherwise (i.e. when the confirmation operation of the user on the prompt interface or the popup window is not obtained), the relevant step of obtaining the relevant data of the user is finished, i.e. the relevant data of the user is not obtained. In other words, all user data collected by the method are processed strictly according to the requirements of relevant national laws and regulations, informed consent or independent consent of the personal information body is collected under the condition that the user agrees and authorizes, and the subsequent data use and processing actions are carried out within the scope of laws and regulations and the authorization of the personal information body, and the collection, use and processing of relevant user data are required to comply with relevant laws and regulations and standards of relevant countries and regions. For example, the cache object, occupied memory, reference data and the like referred to in the present application are all acquired under the condition of sufficient authorization.
It should be understood that "a plurality" herein means two or more. "And/or" describes an association between objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist together, or B exists alone. The character "/" generally indicates an "or" relationship between the objects before and after it. In addition, the step numbers described herein merely illustrate one possible execution order among the steps; in some other embodiments, the steps may be executed out of numerical order, for example two differently numbered steps executed simultaneously, or two differently numbered steps executed in an order opposite to that shown, which is not limited in this application.
The foregoing description of the exemplary embodiments of the application is not intended to limit the application to the particular embodiments disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the application.

Claims (10)

1. A method for processing a cache object, the method comprising:
acquiring occupied memory of an object to be cached, wherein the occupied memory indicates the memory that the object to be cached needs to occupy;
caching the object to be cached into a cache container when the occupied memory is less than or equal to a memory threshold corresponding to the cache container, and acquiring a total occupied memory of the cache container, wherein the total occupied memory refers to the sum of the occupied memory of cached objects in the cache container whose reference data has not been released, the reference data being used to reference the cached objects;
releasing reference data located at the tail of an access sequence list corresponding to the cache container when the total occupied memory is greater than the memory threshold, and acquiring an updated total occupied memory of the cache container, wherein the access sequence list is used to record the access order of the cached objects based on their reference data;
and stopping releasing the reference data when the updated total occupied memory is less than or equal to the memory threshold.
2. The method according to claim 1, wherein the method further comprises:
and when the updated total occupied memory is greater than the memory threshold, continuing to release the reference data at the tail of the access sequence list until the updated total occupied memory of the cache container is less than or equal to the memory threshold.
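Claims 1 and 2 together describe an admit-then-evict loop: an object is admitted only if its own occupied memory fits the threshold, after which reference data is released from the tail of the access sequence list until the total occupied memory falls back under the threshold. A minimal sketch of that loop in Python follows; the class and method names are hypothetical illustrations, not taken from the patent:

```python
from collections import OrderedDict

class BoundedCache:
    """Hypothetical sketch of claims 1-2: admit an object by size,
    then evict from the tail of the access sequence list until the
    total occupied memory is under the threshold again."""

    def __init__(self, memory_threshold):
        self.memory_threshold = memory_threshold
        self.access_sequence = OrderedDict()  # head = most recently used
        self.total_occupied = 0

    def put(self, key, obj, occupied_memory):
        # Claim 6: discard objects whose own size exceeds the threshold.
        if occupied_memory > self.memory_threshold:
            return False
        self.access_sequence[key] = (obj, occupied_memory)
        self.access_sequence.move_to_end(key, last=False)  # move to head
        self.total_occupied += occupied_memory
        # Claims 1-2: release tail entries until the threshold is respected.
        while self.total_occupied > self.memory_threshold:
            _, (_, size) = self.access_sequence.popitem(last=True)
            self.total_occupied -= size
        return True
```

In this sketch the tail entry of the `OrderedDict` plays the role of the reference data "at the tail of the access sequence list"; a fuller implementation would release that reference data into the weak reference pool of claim 3 rather than drop it outright.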
3. The method according to claim 1, wherein the method further comprises:
adding the released reference data to a weak reference pool;
wherein objects in the weak reference pool are held by weak references, a cached object corresponding to reference data held by a weak reference may be recycled, and the cached object corresponding to the reference data held by the weak reference remains accessible.
4. A method according to claim 3, characterized in that the method further comprises:
and when a first cached object corresponding to first reference data in the weak reference pool is referenced, if the first cached object has not been recycled, moving the first reference data back to the head of the access sequence list.
5. The method according to claim 4, wherein the method further comprises:
and if the first cached object has been recycled, releasing the memory occupied by the first cached object in the cache container, and deleting the first reference data.
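Claims 3 to 5 can be illustrated with Python's `weakref` module: released reference data is held weakly, so the garbage collector may recycle the object, yet the object stays reachable until that actually happens, and stale reference data is deleted once the object is gone. A hedged sketch under assumed names:

```python
import gc
import weakref

class WeakReferencePool:
    """Hypothetical sketch of claims 3-5: released reference data is
    held by weak references; recycled entries are deleted on lookup."""

    def __init__(self):
        self._pool = {}  # key -> weak reference to the cached object

    def add(self, key, cached_object):
        self._pool[key] = weakref.ref(cached_object)

    def get(self, key):
        ref = self._pool.get(key)
        if ref is None:
            return None
        obj = ref()               # None once the object has been recycled
        if obj is None:
            del self._pool[key]   # claim 5: delete the stale reference data
        return obj                # claim 4: caller may move it back to the head

class _Payload:                   # weakref requires a class instance
    pass

pool = WeakReferencePool()
payload = _Payload()
pool.add("img", payload)
still_alive = pool.get("img") is payload
del payload                       # drop the last strong reference
gc.collect()
recycled = pool.get("img") is None
```

A caller that receives a non-`None` result would, per claim 4, move the corresponding reference data back to the head of the access sequence list.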
6. The method according to claim 1, wherein the method further comprises:
and discarding the object to be cached under the condition that the occupied memory is larger than the memory threshold.
7. The method according to claim 1, wherein the method further comprises:
managing the cached objects in the cache container by using a least recently used (LRU) eviction strategy;
wherein reference data of a recently accessed cached object is moved to the head of the access sequence list; or, reference data of a newly cached object is added to the head of the access sequence list.
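The access sequence list of claim 7 can be modeled with an `OrderedDict` whose head holds the most recently used reference data, so the tail is always the next eviction candidate. A small illustration (the `touch` helper and the keys are assumed for the example):

```python
from collections import OrderedDict

access_sequence = OrderedDict()   # head = most recently used

def touch(key, reference_data=None):
    """Move accessed (or newly cached) reference data to the head,
    so the tail always holds the least recently used entry."""
    if reference_data is not None:
        access_sequence[key] = reference_data
    access_sequence.move_to_end(key, last=False)

touch("a", "ref-a")               # newly cached -> head
touch("b", "ref-b")               # newly cached -> head
touch("a")                        # "a" accessed again -> back to head
lru_key = next(reversed(access_sequence))  # tail = eviction candidate
```

After these calls, "b" sits at the tail and would be the first entry whose reference data is released when the memory threshold is exceeded.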
8. A processing apparatus for buffering objects, the apparatus comprising:
an occupied memory acquisition module, configured to acquire occupied memory of an object to be cached, wherein the occupied memory indicates the memory that the object to be cached needs to occupy;
a total memory acquisition module, configured to cache the object to be cached into a cache container when the occupied memory is less than or equal to a memory threshold corresponding to the cache container, and acquire a total occupied memory of the cache container, wherein the total occupied memory refers to the sum of the occupied memory of cached objects in the cache container whose reference data has not been released, the reference data being used to reference the cached objects;
a reference data release module, configured to release reference data located at the tail of an access sequence list corresponding to the cache container when the total occupied memory is greater than the memory threshold, and acquire an updated total occupied memory of the cache container, wherein the access sequence list is used to record the access order of the cached objects based on their reference data;
wherein the reference data release module is further configured to stop releasing the reference data when the updated total occupied memory is less than or equal to the memory threshold.
9. A computer device comprising a processor and a memory, the memory having stored therein a computer program that is loaded and executed by the processor to implement a method of processing a cache object according to any of claims 1 to 7.
10. A computer readable storage medium, wherein a computer program is stored in the computer readable storage medium, the computer program being loaded and executed by a processor to implement the method of processing a cache object according to any one of claims 1 to 7.
CN202311397972.4A 2023-10-26 2023-10-26 Processing method, device, equipment and storage medium for cache object Active CN117130792B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311397972.4A CN117130792B (en) 2023-10-26 2023-10-26 Processing method, device, equipment and storage medium for cache object

Publications (2)

Publication Number Publication Date
CN117130792A true CN117130792A (en) 2023-11-28
CN117130792B CN117130792B (en) 2024-02-20

Family

ID=88858633

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311397972.4A Active CN117130792B (en) 2023-10-26 2023-10-26 Processing method, device, equipment and storage medium for cache object

Country Status (1)

Country Link
CN (1) CN117130792B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130173627A1 (en) * 2011-12-29 2013-07-04 Anand Apte Efficient Deduplicated Data Storage with Tiered Indexing
CN106528444A (en) * 2016-12-05 2017-03-22 北京金和网络股份有限公司 Automatic management method of object cached in memory
CN107562782A (en) * 2017-07-24 2018-01-09 广东电网有限责任公司信息中心 A kind of multi-level buffer method, apparatus and system based on CIM
CN108228649A (en) * 2016-12-21 2018-06-29 伊姆西Ip控股有限责任公司 For the method and apparatus of data access
CN108628770A (en) * 2017-03-23 2018-10-09 英特尔公司 For high-performance cache based on the hot follow-up mechanism enhancing of least recently used
CN110032421A (en) * 2019-04-18 2019-07-19 腾讯科技(深圳)有限公司 The management method of atlas, device, terminal and storage medium in memory
CN110555118A (en) * 2018-03-28 2019-12-10 武汉斗鱼网络科技有限公司 Method and device for loading picture
US20200183840A1 (en) * 2018-12-10 2020-06-11 International Business Machines Corporation Caching data from remote memories


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117493400A (en) * 2024-01-02 2024-02-02 中移(苏州)软件技术有限公司 Data processing method and device and electronic equipment
CN117493400B (en) * 2024-01-02 2024-04-09 中移(苏州)软件技术有限公司 Data processing method and device and electronic equipment

Also Published As

Publication number Publication date
CN117130792B (en) 2024-02-20

Similar Documents

Publication Publication Date Title
US9641468B2 (en) Method, server, client, and system for releasing instant messaging key-value data
US7676554B1 (en) Network acceleration device having persistent in-memory cache
JP5814436B2 (en) Caching information system and method
US7644108B1 (en) Network acceleration device cache supporting multiple historical versions of content
CN107491523B (en) Method and device for storing data object
CN117130792B (en) Processing method, device, equipment and storage medium for cache object
CN107197359B (en) Video file caching method and device
CN112052097B (en) Virtual scene rendering resource processing method, device, equipment and storage medium
CN110457305B (en) Data deduplication method, device, equipment and medium
CN106202082B (en) Method and device for assembling basic data cache
CN111198856A (en) File management method and device, computer equipment and storage medium
CN106951573A (en) A kind of living broadcast interactive data load method, server and computer-readable medium
CN107545050A (en) Data query method and device, electronic equipment
US9021208B2 (en) Information processing device, memory management method, and computer-readable recording medium
US11474943B2 (en) Preloaded content selection graph for rapid retrieval
CN112395437B (en) 3D model loading method and device, electronic equipment and storage medium
WO2023125875A1 (en) Correlation-based streaming method for game data
CN110825652B (en) Method, device and equipment for eliminating cache data on disk block
CN113076067A (en) Method and device for eliminating cache data
CN117435136A (en) Interface data caching method, device, equipment and medium
CN110247939A (en) The high-performance combination frame realized using multi-level buffer technology
US11714803B1 (en) System and method for handling implicit transactions in a hybrid cloud cache
CN112699082A (en) File access request response method and device
US11755534B2 (en) Data caching method and node based on hyper-converged infrastructure
CN111737298A (en) Cache data control method and device based on distributed storage

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant