CN110908612A - Cache management method, device, equipment and storage medium - Google Patents


Info

Publication number
CN110908612A
Authority
CN
China
Prior art keywords
cache
temperature information
resource
material resource
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911180121.8A
Other languages
Chinese (zh)
Other versions
CN110908612B (en)
Inventor
曾国亮
王旭新
李振
朱光育
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN201911180121.8A
Publication of CN110908612A
Application granted
Publication of CN110908612B
Legal status: Active
Anticipated expiration

Classifications

    • G: Physics
    • G06: Computing; calculating or counting
    • G06F: Electric digital data processing
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06: Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601: Interfaces specially adapted for storage systems
    • G06F 3/0628: Interfaces specially adapted for storage systems making use of a particular technique
    • G06F 3/0655: Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F 3/0656: Data buffering arrangements

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The application discloses a cache management method, apparatus, device, and storage medium, belonging to the technical field of storage. The embodiments provide a method for caching and evicting material resources of a virtual scene based on temperature information: the temperature information represents the probability that a material resource is accessed in the virtual scene, each material resource is cached at the position in the cache queue corresponding to its temperature information, and the cache queue is evicted in order of position from back to front. By this method, cold resources of the virtual scene are evicted first and hot resources are evicted later. On the one hand, prolonging the residence time of hot resources in the cache raises the probability of a cache hit when a hot resource is accessed, improving the cache hit rate and alleviating the cache pollution problem. On the other hand, cold resources are cleared from the cache as early as possible, saving cache space.

Description

Cache management method, device, equipment and storage medium
Technical Field
The present application relates to the field of storage technologies, and in particular, to a cache management method, apparatus, device, and storage medium.
Background
A cache is a storage medium whose access speed is much faster than that of a hard disk or main memory, making it an extremely precious storage resource for a computer's Central Processing Unit (CPU). The computer therefore needs to manage the cache carefully, so that frequently accessed material resources stay resident in the cache while infrequently accessed ones are cleared from it in time, using the cache to the fullest. To this end, the computer usually relies on a cache eviction algorithm to decide which material resource should be evicted from the cache, and deletes that resource to release the cache space it occupies.
Currently, computers generally cache material resources and perform eviction based on the Least Recently Used (LRU) algorithm. Specifically, a linked list serves as the cache data structure, and material resources are inserted into the list in order of their access time points: the most recently accessed resource sits at the first position of the list, the resource with the earliest access time point sits at the last position, and so on. When an access request arrives for a cached material resource, that resource is moved from its historical position to the first position of the list, and the other resources shift accordingly while keeping their original order. When an access request arrives for an uncached material resource, the last resource in the list is evicted, every resource in the list moves back one position, the requested resource is read from the hard disk, and it is inserted at the first position of the list.
Experiments show that when material resources are cached this way, the cache hit rate drops sharply in periodic and sporadic access scenarios, causing cache pollution.
Disclosure of Invention
The embodiments of the present application provide a cache management method, apparatus, device, and storage medium, which can solve the cache pollution problem in the related art. The technical solution is as follows:
in one aspect, a cache management method is provided, and the method includes:
determining a target cache position of a material resource according to temperature information of the material resource in a virtual scene, wherein the temperature information represents the probability that the material resource is accessed in the virtual scene, and the higher the temperature information, the closer the target cache position is to the front of the queue;
caching the material resource at the target cache position in a cache queue;
and evicting the cached material resources of the cache queue in order of position from back to front.
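The three steps above can be sketched in a few lines of Python. This is a minimal illustration under assumptions of our own (the class name, the numeric temperatures, and the flat sorted-list representation are hypothetical; the patent does not prescribe a concrete data structure):

```python
import bisect

class TemperatureCache:
    """Sketch of the method above: resources sit in a queue ordered by
    descending temperature, and eviction removes entries from the back
    (lowest temperature) first."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.queue = []  # (resource_id, temperature), hottest first

    def put(self, resource_id, temperature):
        # Re-inserting an already cached resource moves it to the
        # position matching its current temperature information.
        self.queue = [(r, t) for r, t in self.queue if r != resource_id]
        # The higher the temperature, the closer to the front the target
        # cache position; bisecting on negated temperatures keeps the
        # queue sorted in descending order.
        pos = bisect.bisect_left([-t for _, t in self.queue], -temperature)
        self.queue.insert(pos, (resource_id, temperature))
        # Evict from back to front once the queue exceeds capacity.
        while len(self.queue) > self.capacity:
            self.queue.pop()

cache = TemperatureCache(capacity=3)
for rid, temp in [("a", 5), ("b", 9), ("c", 1), ("d", 7)]:
    cache.put(rid, temp)
print([r for r, _ in cache.queue])  # ['b', 'd', 'a'] - coldest "c" evicted
```

Note that the eviction order depends only on temperature information, not on recency of access, which is exactly what distinguishes this scheme from LRU.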
In another aspect, an apparatus for cache management is provided, the apparatus including:
the determining module is used for determining a target cache position of a material resource according to temperature information of the material resource in a virtual scene, wherein the temperature information represents the probability that the material resource is accessed in the virtual scene, and the higher the temperature information, the closer the target cache position is to the front of the queue;
the caching module is used for caching the material resource at the target cache position in a cache queue;
and the eviction module is used for evicting the cached material resources of the cache queue in order of position from back to front.
Optionally, the cache queue includes multiple intervals, each corresponding to a temperature range, and the determining module is configured to determine a target interval from the multiple intervals, the target interval being the interval whose temperature range contains the temperature information, and to determine the target cache position within the target interval.
Optionally, the determining module is configured to take the first position of the target interval as the target cache position; or to compare the temperature information of the material resource with the temperature information of the material resources already cached in the target interval to obtain the target cache position, the material resources cached in the target interval being arranged in descending order of temperature information.
Optionally, the eviction module is configured to access the last interval in the cache queue and evict the material resources cached in that interval.
Optionally, the eviction module is further configured to, if no material resource is cached in the last interval, continue to access the preceding intervals in order from the last interval toward the first, and evict the material resources cached in those intervals.
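The interval variant can be sketched as follows. The class name, the concrete temperature ranges, and the list-of-lists representation are our own assumptions; only the behavior (insert into the interval matching the temperature, evict from the last non-empty interval) comes from the text above:

```python
class IntervalCache:
    """Sketch of the interval-based cache queue: each interval covers a
    temperature range; eviction scans from the last (coldest) interval
    toward the first (hottest)."""

    def __init__(self, lower_bounds):
        # Descending lower bounds of each interval's temperature range,
        # e.g. [100, 10, 0]: hot >= 100, warm >= 10, cold >= 0.
        self.lower_bounds = lower_bounds
        self.intervals = [[] for _ in lower_bounds]  # (id, temp), hottest first

    def put(self, resource_id, temperature):
        # Target interval: the one whose range contains the temperature.
        i = next((k for k, low in enumerate(self.lower_bounds)
                  if temperature >= low), len(self.lower_bounds) - 1)
        bucket = self.intervals[i]
        # Keep each interval in descending temperature order.
        pos = 0
        while pos < len(bucket) and bucket[pos][1] >= temperature:
            pos += 1
        bucket.insert(pos, (resource_id, temperature))

    def evict_one(self):
        # Access the last interval first; if it caches nothing, continue
        # toward the first interval.
        for bucket in reversed(self.intervals):
            if bucket:
                return bucket.pop()[0]  # coldest entry of that interval
        return None

cache = IntervalCache([100, 10, 0])
for rid, temp in [("hot", 150), ("warm", 50), ("cold", 3)]:
    cache.put(rid, temp)
print(cache.evict_one())  # cold
```

Scanning intervals rather than individual entries keeps eviction cheap: the algorithm only needs to find the last non-empty interval and pop from its tail.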
Optionally, the apparatus further comprises:
the monitoring module is used for monitoring the number of material resources cached in the first interval of the cache queue;
and the adjusting module is used for moving each material resource cached in the cache queue from its current interval to the next interval if the number of material resources cached in the first interval reaches a threshold.
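The monitoring/adjusting step above amounts to an aging pass. A possible sketch, under our own assumptions about the representation (the cache queue as a list of intervals, each a list of resource ids, hottest interval first; the treatment of the last interval, which has no "next" interval, is our interpretation):

```python
def demote_all(intervals, first_threshold):
    """Aging sketch: once the first interval holds `first_threshold`
    resources or more, every cached resource moves from its current
    interval to the next (colder) one; entries already in the last
    interval stay there as the next eviction candidates (assumption)."""
    if len(intervals[0]) < first_threshold:
        return intervals
    # Shift every interval's contents one interval toward the back.
    shifted = [[]] + [list(b) for b in intervals[:-1]]
    shifted[-1] = list(intervals[-2]) + list(intervals[-1])
    return shifted

queue = [["a", "b"], ["c"], ["d"]]          # hot, warm, cold intervals
queue = demote_all(queue, first_threshold=2)
print(queue)  # [[], ['a', 'b'], ['c', 'd']]
```

The effect is that hot resources must keep earning their place: without fresh accesses that raise their temperature information, they drift toward the back of the queue and eventually become eviction candidates.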
Optionally, the material resources include a first material resource that is not yet cached in the cache queue, and the cache module is configured to insert the first material resource at the target cache position in the cache queue.
Optionally, the material resources include a second material resource that is already cached in the cache queue, and the cache module is configured to move the second material resource from its historical cache position in the cache queue to the target cache position.
Optionally, the temperature information includes first temperature information, and the apparatus further includes: and the reading module is used for reading the first temperature information from the configuration file of the material resource.
Optionally, the first temperature information is acquired as follows:
acquiring an access probability characteristic of the material resource according to an access request set from at least one client of the virtual scene, the access request set including access requests for the material resource and access requests for other material resources in the virtual scene;
and acquiring the first temperature information of the material resource according to its access probability characteristic.
Optionally, the access probability characteristic includes an expectation of the access probability, which is acquired as follows:
acquiring a first count and a second count from the access request set, the first count being the total number of times the material resource was accessed, and the second count being the total number of times the material resource and the other material resources in the virtual scene were accessed;
and acquiring the expectation of the access probability from the first count and the second count, the expectation being the ratio of the first count to the second count.
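The two counts and their ratio can be computed directly. In this sketch the log format (a flat list of accessed resource ids) and the function name are hypothetical:

```python
from collections import Counter

def access_probability_expectation(access_log, resource_id):
    """Ratio of the first count (accesses to this resource) to the
    second count (accesses to all material resources in the scene)."""
    counts = Counter(access_log)
    second = sum(counts.values())   # total accesses to all resources
    first = counts[resource_id]     # accesses to this resource
    return first / second if second else 0.0

# Hypothetical access-request set gathered from clients:
log = ["cue", "table", "cue", "ball", "cue"]
print(access_probability_expectation(log, "cue"))  # 0.6
```

This ratio is then mapped to the resource's first (initial) temperature information, so resources that most players request often start out hot.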
Optionally, the temperature information includes second temperature information, and the apparatus further includes: an updating module, configured to update the historical temperature information of the material resource according to an access request for the material resource, obtaining the second temperature information, which is greater than the historical temperature information.
In another aspect, an electronic device is provided, which includes one or more processors and one or more memories, and at least one program code is stored in the one or more memories, and loaded into and executed by the one or more processors to implement the operations performed by the above cache management method.
In another aspect, a computer-readable storage medium is provided, in which at least one program code is stored, and the at least one program code is loaded and executed by a processor to implement the operations performed by the above cache management method.
The technical solutions provided by the embodiments of the present application bring at least the following beneficial effects:
the embodiment provides a method for caching and eliminating material resources of a virtual scene based on temperature information, wherein the temperature information is adopted to represent the probability of the material resources being accessed in the virtual scene, the material resources are cached by using positions corresponding to the temperature information in a cache queue, and the cache queue is cached and eliminated according to the sequence of the positions from back to front. In this way, the higher the probability that the material resource is accessed, the higher the temperature information of the material resource is, and the later the elimination time point of the material resource is. That is, the cold resources of the virtual scene are eliminated first, and the hot resources of the virtual scene are eliminated later. Therefore, on one hand, by prolonging the residence time of the hot resource in the cache, the probability of cache hit is improved when the hot resource is accessed, so that the cache hit rate is improved, and the problem of cache pollution is solved. On the other hand, the cold resources in the cache are cleared as early as possible, so that the cache space is saved.
In particular, in a periodic access scenario, if a material resource is accessed periodically, its temperature information stays at a high level, the resource is placed near the front of the cache queue, and its eviction priority is low, so it is not easily evicted. This prolongs the resource's residence time in the cache and improves the cache hit rate in periodic scenarios, solving the cache pollution problem that the LRU algorithm and most other cache eviction algorithms cannot avoid in periodic scenarios, and greatly optimizing eviction performance.
Likewise, in a sporadic access scenario, if a material resource has not been accessed for a long time and happens to be accessed just as the cache eviction algorithm runs, the resource's cache position is still determined by its temperature information, which in turn is determined by its access probability over a period of time. A single chance access therefore does not suddenly raise the temperature information, and the resource is not moved to the head of the queue. This prevents sporadically accessed resources from occupying the cache positions of hot resources and improves the cache hit rate in sporadic scenarios, solving the cache pollution problem that the LRU algorithm and most other cache eviction algorithms cannot avoid in sporadic scenarios, and greatly optimizing eviction performance.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic diagram of an implementation environment of a cache management method according to an embodiment of the present application;
fig. 2 is a flowchart of a cache management method according to an embodiment of the present application;
fig. 3 is a schematic diagram of a buffer queue according to an embodiment of the present application;
fig. 4 is a schematic diagram of a buffer queue according to an embodiment of the present application;
fig. 5 is a schematic diagram of a buffer queue according to an embodiment of the present application;
fig. 6 is a schematic diagram of a buffer queue according to an embodiment of the present application;
fig. 7 is a schematic diagram of a buffer queue according to an embodiment of the present application;
fig. 8 is a flowchart of a cache management method according to an embodiment of the present application;
fig. 9 is a flowchart of a cache management method according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of a cache management apparatus according to an embodiment of the present application;
fig. 11 is a schematic structural diagram of a terminal according to an embodiment of the present application;
fig. 12 is a schematic structural diagram of a server according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The term "and/or" in this application merely describes an association between related objects, indicating that three relationships are possible; for example, "A and/or B" can mean: A alone, both A and B, or B alone. In addition, the character "/" in this application generally indicates an "or" relationship between the objects it joins.
The term "plurality" in this application means two or more, e.g., a plurality of packets means two or more packets.
The terms "first," "second," and the like in this application are used to distinguish items that are identical or similar in function; it should be understood that "first," "second," and "nth" imply no logical or temporal dependency and no limitation on number or execution order.
Hereinafter, terms related to the present application are explained.
Access time: the point in time when the resource was last accessed.
Access frequency: number of accesses of a resource over a period of time.
Cache eviction: an algorithm that frees cache space by removing specified resources from the cache. As a precious storage resource, a cache is usually given a capacity threshold (max memory), which may be set relative to the total size of all cacheable key-value pairs, for example 80% of that total. When cache occupancy reaches the capacity threshold, a cache eviction algorithm selects resources to delete and removes them from the cache, so that occupancy drops; once occupancy falls below the threshold, enough free cache is available and new resources can be cached again. Cache eviction algorithms currently fall into three main categories: the first is based on access time, such as the LRU algorithm and the TwoQueue algorithm; the second is based on access frequency, such as the Least Frequently Used (LFU) algorithm and its variants; the third combines access time and access frequency, such as the LRU-K algorithm and the Least Recently/Frequently Used (LRFU) algorithm.
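The capacity-threshold trigger described above can be expressed as a one-line check. The 0.8 ratio mirrors the 80% example in the text; the byte-based accounting and the function name are our assumptions:

```python
def over_capacity_threshold(used_bytes, cacheable_total_bytes, ratio=0.8):
    """True when cache occupancy has reached the capacity threshold
    (max memory), i.e. when the eviction algorithm should run."""
    return used_bytes >= ratio * cacheable_total_bytes

print(over_capacity_threshold(850, 1000))  # True: eviction should run
print(over_capacity_threshold(700, 1000))  # False: enough free cache
```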
Hit rate: the ratio between the number of resource requests that hit the cache and the total number of resource requests. The higher the hit rate, the less network traffic is required and the faster resources load.
Cache pollution: a situation in which the hit rate drops sharply. It arises when a poorly designed cache eviction algorithm causes the computer to store rarely accessed data from memory or disk into the cache, evicting frequently accessed data; subsequent accesses to that frequently used data then miss the cache, lowering the hit rate. Cache pollution sharply reduces the cache's efficiency.
The LRU algorithm: a cache eviction algorithm based on the time data was most recently accessed. Its core idea: if data was accessed recently, the probability that it will be accessed again is high. The LRU algorithm can be implemented with various data structures, for example a linked list, an array, a hash table, or a combination of these. Taking a linked list as an example, resources are first inserted into the list in order of their access time points. Suppose resource 1 through resource 5 are stored in the cache: if resource 1 has the latest access time point, i.e. it is the most recently accessed resource, it sits at the head of the list, the first position; if resource 5 has the earliest access time point, i.e. it is the least recently accessed resource, it sits at the tail, the last position; and so on. When an access request arrives for a cached resource, that resource moves from its historical position to the first position of the list, and the other resources shift accordingly while keeping their original order. When an access request arrives for a new (uncached) resource, the last resource in the list is evicted, every resource shifts back one position, and the new resource is inserted at the first position. The hit rate of the LRU algorithm drops sharply in periodic and sporadic data scenarios, and in other scenarios it is generally lower than that of other eviction algorithms.
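The linked-list walkthrough above can be reproduced with a short Python sketch, using `collections.OrderedDict` in place of a hand-rolled linked list (class and variable names are ours; the front of the dict plays the role of the first position):

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache matching the walkthrough above."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None                         # cache miss
        self.data.move_to_end(key, last=False)  # move to the first position
        return self.data[key]

    def put(self, key, value):
        if key not in self.data and len(self.data) >= self.capacity:
            self.data.popitem(last=True)        # evict the last position
        self.data[key] = value
        self.data.move_to_end(key, last=False)

cache = LRUCache(capacity=2)
cache.put("resource1", "r1")
cache.put("resource2", "r2")
cache.get("resource1")        # resource1 becomes most recently used
cache.put("resource3", "r3")  # evicts resource2, the least recently used
print(list(cache.data))  # ['resource3', 'resource1']
```

Note how the final `put` evicts resource2 even though resource2 may be a periodically accessed hot resource: this is exactly the weakness the temperature-based scheme targets.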
LRU-K algorithm: an optimized version of the LRU algorithm, where K, a positive integer, denotes the number of recent uses. The LRU process starts only after a resource has been accessed K times.
Temperature information model: a method for determining a resource's initial temperature information based on its access probability characteristics; the temperature information can be adjusted as the system runs.
A periodic scenario is one where a resource is accessed at regular intervals; such a resource should reside in the cache. However, when the computer performs eviction with the LRU algorithm, the moment the eviction runs is easily staggered against the resource's access times: other resources are accessed between the periodic accesses and thus sit ahead of the periodically accessed resource in the linked list, causing it to be evicted. After a period of time, when the computer accesses the cache again, the resource cannot be found because it was evicted, producing a cache miss and lowering the hit rate.
A sporadic scenario is one where a resource has not been accessed for a long period but happens to be accessed just before cache eviction runs. Resources in this scenario are accessed only by chance and are exactly the ones that should be evicted. However, under the LRU algorithm, the sporadically accessed resource moves to the head of the linked list; the resource that should be evicted escapes eviction, while resources accessed more often than it are pushed back and evicted instead, lowering the cache hit rate.
Expectation (mean): the sum over all possible outcomes of each outcome multiplied by its probability; a statistical characteristic that reflects the average value of a random variable.
Virtual scene: a virtual scene displayed (or provided) by an application program when it runs on a terminal. The virtual scene may be a simulation of the real world, a semi-simulated, semi-fictional virtual environment, or a purely fictional virtual environment. It may be two-dimensional, 2.5-dimensional, or three-dimensional; the dimensionality of the virtual scene is not limited in the embodiments of the present application. For example, the virtual scene may be a pool match, including a table, cues, balls, player scores, and the like. As another example, it may be a simulated business scene, including a factory, flowers, crops, the current monetary value, and the like. As another example, it may be a graffiti scene, including a drawing board, a drawing pen, and the like. As yet another example, the virtual scene may include sky, land, and ocean; the land may include environmental elements such as desert and city, and the user may control a virtual object to move within the virtual scene.
Virtual object: refers to a movable object in a virtual scene. The movable object can be a virtual character, a virtual animal, an animation character, etc., such as: characters, animals, plants, oil drums, walls, stones, etc. displayed in the virtual scene. The virtual object may be an avatar in the virtual scene that is virtual to represent the user. The virtual scene may include a plurality of virtual objects, each virtual object having its own shape and volume in the virtual scene and occupying a portion of the space in the virtual scene.
Alternatively, the virtual object may be a Player Character controlled through operations on the client, an Artificial Intelligence (AI) trained to fight in the virtual scene, or a Non-Player Character (NPC) set up for interaction in the virtual scene. Alternatively, the virtual object may be a virtual character competing in the virtual scene. Optionally, the number of virtual objects participating in an interaction may be preset or dynamically determined according to the number of participating clients.
Taking a shooting game as an example, the user may control a virtual object to free-fall, glide, open a parachute, run, jump, climb, or crouch and advance on land, or to swim, float, or dive in the ocean. The user may also control a virtual object to ride a virtual vehicle through the virtual scene, for example a virtual car, a virtual aircraft, or a virtual yacht; these scenarios are only examples and are not specifically limited here. The user may also control the virtual object to fight other virtual objects with a virtual weapon, for example a thrown weapon such as a grenade, cluster mine, or sticky grenade, or a firearm such as a machine gun, pistol, or rifle; the type of virtual weapon is not specifically limited in the present application.
Hereinafter, an application scenario of the present application is exemplarily described.
The embodiments can be applied to scenarios in which material resources of a virtual scene are cached. The virtual scene is displayed while an application program runs, and the scene's material resources may come from the application's backend server. The application program may be a game application.
For example, the application may be a mini game, i.e., an embedded game that is loaded and run inside another application. For example, a mini game can be loaded in a social application: the social application provides an entry to the mini game, and triggering that entry jumps to the mini game's interface, through which the game functions are provided. The mini game may be, for example, a pool game, a simulated business game, an online graffiti game, a jump game, an arcade game, or a virtual pet game.
Of course, the application need not be a mini game; it may be a game independent of other applications. For example, the application program may be a First-Person Shooter (FPS) game, a third-person shooter game, a Multiplayer Online Battle Arena (MOBA) game, a virtual reality application, a three-dimensional map program, a military simulation program, or a multiplayer gunfight survival game. The user may use the terminal to control a virtual object in the virtual scene to carry out activities including but not limited to: adjusting body posture, crawling, walking, running, riding, jumping, driving, picking up items, shooting, attacking, and throwing. Illustratively, the virtual object is a virtual figure such as a simulated character or an animated character.
In the process of displaying the virtual scene through the application program, the electronic device needs to load material resources, display the material resources in the form of images, or play the material resources in the form of audio, so as to present the virtual scene.
For example, in a billiards game, after the game starts the electronic device loads and displays various billiards-related images such as cue images, table images, and score images, and also loads and plays audio such as the opening prompt, the goal prompt, and the victory prompt. To the computer these are all material resources, which must be loaded to present the billiards scene. The images and audio above are only examples: material resources may be any material needed to present the virtual scene, in the form of images, audio, animation, text, or any other data format, not enumerated here.
For an application program associated with a virtual scene, the material resources that need to be loaded are generally huge; however, the package size of the application program is limited, and the storage capacity and memory capacity allocated to the application program are also limited. In addition, the electronic device running the virtual scene is generally very sensitive to the performance overhead of the cache elimination algorithm for material resources. For example, a mini-game running on a mobile terminal, as a program embedded in another application, has very demanding performance-overhead requirements. Cache elimination algorithms such as the LFU algorithm, the LRFU algorithm, and the LRU-K algorithm usually rely on the historical access records of the material resources and need to sort the material resources, so the calculation amount of these algorithms is large and the performance cost is high. In addition, the game habits of different players differ greatly, so cache elimination algorithms that select parameters for a single scene, such as the TwoQueue (2Q) algorithm, are also not suitable for cache elimination of the material resources of a game application. The LRU algorithm, although it has the minimum performance cost and the widest application range, suffers from cache pollution, and its hit rate is relatively low compared with other algorithms.
Through a great deal of research and analysis, it is found that the material resources of a game application have certain particularities relative to other types of resources. Specifically, first, the game's material resources are a known, finite set of data; for example, how many material resources a billiards game includes is a statistical value that can be determined in advance. Second, the probability that any given player accesses a certain material resource reflects a commonality of that material resource across players. Third, because different players have different game habits, the access probabilities of different material resources differ between players; facing this difference in material resource access, the cache elimination mechanism needs to adapt to the personal habits of each player through a more scientific cache elimination algorithm.
In view of this, the present embodiment provides an algorithm that performs cache elimination on the material resources of a game based on a temperature information model, which can meet the above requirements. Specifically, from the players' access behaviors toward the material resources, access probability features are mined using big data technology, so that the commonalities in how players access the material resources are found. For example, the access probability feature can be found by counting the expectation of the access probability of each material resource. The access probability features of the material resources can then be used as prior data: each feature is converted into the initial temperature information of the corresponding material resource, and the initial temperature information is recorded in the configuration file of the material resource, so that the known access probability feature is introduced into the cache elimination algorithm. In addition, in the process of running the virtual scene, the current temperature information of a material resource can be adjusted in real time along with access requests for the material resource, according to the usage habits of the player. With this temperature model, which combines access probability features with an adjustment mechanism, low-temperature material resources are preferentially eliminated when cache elimination is performed based on temperature, so the cache pollution problem of the LRU algorithm is effectively solved, and the hit rate in each scene is superior to that of the LRU algorithm and the LRU-K algorithm, while the performance cost is close to that of the LRU algorithm. The method therefore has great advantages in terms of both performance cost and hit rate.
Experiments show that in a cache pollution scene, compared with the LRU algorithm, whose hit rate is close to 0, the cache elimination algorithm provided by this embodiment can maintain a higher hit rate, and is better than the LRU-K (K <= 5) algorithm. The hit rate varies with the actual proportion of high-temperature material resources. In addition, in tests of other scenes, including but not limited to a high-heat data access scene, a random access scene, and a 'pseudo' high-heat scene, the hit rate is better than that of the LRU algorithm and the LRU-K (K <= 5) algorithm. After the method of this embodiment was deployed on the mini-game client to perform cache elimination, online traffic tests showed that daily average traffic can be reduced by about 15%.
Hereinafter, the system architecture of the present application is exemplarily described.
Fig. 1 is a schematic diagram of an implementation environment of a cache management method according to an embodiment of the present application. The implementation environment includes: a first terminal 120, a server 140, and a second terminal 160. The first terminal 120 and the second terminal 160 are connected to the server 140 through a wireless network or a wired network.
The first terminal 120 is installed and operated with an application program supporting a virtual scene. The first terminal 120 may be a terminal used by a first user who uses the first terminal 120 to operate a first virtual object in a virtual scene for an activity, and the second terminal 160 may also be installed and run with an application program supporting the virtual scene. The second user uses the second terminal 160 to manipulate the second virtual object in the virtual scene for activity.
Optionally, the first virtual object controlled by the first terminal 120 and the second virtual object controlled by the second terminal 160 are in the same virtual scene, and the first virtual object may interact with the second virtual object in the virtual scene. In some embodiments, the first virtual object and the second virtual object may be in a hostile relationship. For example, the virtual scene may be a scene of a billiards game: the first virtual object and the second virtual object may be players in the billiards game, each may hit a ball on the table with a club, and the win or loss of the game is determined by calculating the score. For another example, the virtual scene may be a scene of a shooting game: the first virtual object and the second virtual object may belong to different teams or organizations, and virtual objects in a hostile relationship may interact in battle by shooting at each other.
Certainly, the interaction between the virtual objects controlled by the two terminals is only an example, and the first virtual object controlled by the first terminal 120 or the second virtual object controlled by the second terminal 160 may also interact with the NPC character generated by the computer, or the virtual objects controlled by the two terminals join the same group to interact with other virtual objects. In other embodiments, the first virtual object and the second virtual object may be in a teammate relationship, for example, the first virtual character and the second virtual character may belong to the same team, the same organization, have a friend relationship, or have temporary communication rights.
The server 140 may include at least one of a server, a plurality of servers, a cloud computing platform, or a virtualization center. The server 140 is used to provide background services for applications that support virtual scenarios. Alternatively, the server 140 may undertake primary computational tasks and the first and second terminals 120, 160 may undertake secondary computational tasks; alternatively, the server 140 undertakes the secondary computing work and the first terminal 120 and the second terminal 160 undertakes the primary computing work; alternatively, the server 140, the first terminal 120, and the second terminal 160 perform cooperative computing by using a distributed computing architecture.
Alternatively, the applications installed on the first terminal 120 and the second terminal 160 are the same, or the applications installed on the two terminals are the same type of application on different operating system platforms. The first terminal 120 may generally refer to one of a plurality of terminals, and the second terminal 160 may generally refer to one of a plurality of terminals; this embodiment is only illustrated by the first terminal 120 and the second terminal 160. The device types of the first terminal 120 and the second terminal 160 are the same or different, and include at least one of a smart phone, a tablet computer, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop portable computer, and a desktop computer. For example, the first terminal 120 and the second terminal 160 may be smart phones or other handheld portable gaming devices. Those skilled in the art will appreciate that the number of terminals described above may be greater or fewer. For example, there may be only one terminal, or several tens or hundreds of terminals, or more. The number of terminals and the device types are not limited in the embodiments of the present application.
Fig. 2 is a flowchart of a cache management method according to an embodiment of the present application. The execution subject of the embodiment of the invention is an electronic device, and referring to fig. 2, the method includes:
201. the electronic equipment acquires temperature information of material resources in the virtual scene.
The temperature information indicates the probability of the material resource being accessed in the virtual scene. For example, if the temperature information is larger, it indicates that the probability that the material resource is accessed in the virtual scene is higher. The temperature information may also be referred to as heat information or heat parameters. The temperature information may range from 0 to 100. If the temperature information of a certain material resource is 100, the temperature information reaches the maximum value of the temperature information, which indicates that the material resource is the highest-temperature material resource in all the material resources of the virtual scene, i.e., the resource with the highest access probability in the virtual scene. If the temperature information of a certain material resource is 0, the temperature information reaches the minimum value of the temperature information, and the material resource can be the coldest material resource in all the material resources of the virtual scene, namely the resource with the lowest access probability in the virtual scene. Of course, 0 to 100 are only examples of the value range of the temperature information, the value range of the temperature information may be configured according to experiments, experiences, or requirements, and the specific value range of the temperature information is not limited in this embodiment.
By acquiring the temperature information, whether a material resource is hot data or cold data can be judged, and accordingly whether the material resource should continue to reside in the cache or be eliminated from it, ensuring that the cache queue caches hot data as much as possible and cold data as little as possible, thereby improving cache efficiency. For example, the larger the temperature information of a material resource, the higher its heat; such a resource may be preferentially cached as hotspot data, because it is likely to be accessed again soon, and keeping it resident in the cache lets the CPU obtain it by accessing the cache, exploiting the speed advantage of cache access and greatly increasing the loading speed of the material resource. Conversely, the smaller the temperature information of a material resource, i.e., the lower its heat, the resource can be preferentially eliminated as cold data, releasing the cache space it occupies and reserving more cache space for hotspot resources.
The temperature information may be acquired in a variety of manners, and the following description is given by way of example of the first acquisition manner and the second acquisition manner. For the purpose of distinguishing description, the temperature information is referred to as first temperature information when describing the acquisition mode one, and the temperature information is referred to as second temperature information when describing the acquisition mode two.
The first obtaining method is that the electronic equipment reads first temperature information from a configuration file of material resources.
The configuration file may be a manifest file for the resource, which may be in the form of an index list. The configuration file may be stored in the electronic device in advance. For example, the electronic device may be a client of a virtual scene, the client may request a material resource from a background server of the virtual scene in the process of loading the virtual scene, the background server responds to the request for the material resource and returns the material resource and the configuration file to the client together, and the client may receive the material resource and the configuration file sent by the background server and store the material resource and the configuration file in association with each other.
The first temperature information may be obtained by a background server of the virtual scene by counting requests of the clients of the virtual scene using big data technology and statistics. Hereinafter, the calculation process of the first temperature information is exemplified by the following steps one and two.
Step one, the server acquires the access probability characteristics of the material resources according to the access request set of at least one client of the virtual scene.
The access request set comprises access requests of each client in at least one client of the virtual scene, and the access request set comprises access requests of the material resource and access requests of other material resources in the virtual scene. For example, if the virtual scene includes N material resources, and M clients that have requested the material resources of the virtual scene have M, the access request set may be a set formed by access requests of any client in the M clients to any material resource in the N material resources, where N and M are positive integers. The manner in which the set of access requests is determined may include a variety of ways. For example, each time any client of the virtual scene is to load any material resource of the virtual scene, the client may send an access request to the server, and the server may further record the access request on the basis of returning the material resource in response to the access request, for example, store the access request in a database in the form of a log.
The access probability feature may include one or more dimensions. In some embodiments, the access probability characteristic may include an expectation of access probability. The expectation of the access probability may indicate the magnitude of the average value of the access probabilities, so that the commonality of the possibility that each client of the virtual scene accesses the virtual resource can be indicated through the expectation of the access probabilities. As an example, the desired calculation process of the access probability may comprise the following steps 1.1 to 1.2:
step 1.1, the server obtains a first number and a second number according to the access request set.
The first number is the total number of times the material resource has been accessed, and the second number is the total number of times the material resource and all other material resources in the virtual scene have been accessed. For example, the virtual scene includes N material resources, namely material resource 1, material resource 2, ..., material resource N, and the M clients that have historically requested material resources of the virtual scene are client 1, client 2, ..., client M. When counting the access probability feature of a material resource X among the N material resources, the first number is the total number of times material resource X has been accessed, that is, the sum over clients 1 through M of the number of times each client accessed material resource X. The second number is the total number of times the N material resources have been accessed, that is, the sum over all M clients and all N material resources of the number of times each client accessed each material resource.
In an exemplary application scenario of a billiards game: after any client of the game is started, each time a material resource is loaded, the client reports information about the accessed material resource to the background server of the game. If, over a period of time, the total number of reports for all material resources from all clients is Q, of which material resource X is reported P times, then the first number is P and the second number is Q, where P and Q are positive integers.
And step 1.2, the server obtains the expectation of the access probability according to the first number and the second number.
Wherein the expectation of the access probability may be the ratio between the first number and the second number. For example, if the first number for material resource X is P and the second number is Q, the expectation of the access probability of material resource X may be the ratio of P to Q.
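The counting in steps 1.1 and 1.2 can be sketched as follows. The function name and log format are illustrative; the calculation itself is the ratio of the first number to the second number described above.

```python
def access_probability_expectation(access_log, resource_id):
    """Expectation of a resource's access probability: the ratio of the
    first number (accesses to this resource) to the second number (total
    accesses to all resources). The log format is illustrative."""
    first_number = sum(1 for r in access_log if r == resource_id)
    second_number = len(access_log)
    return first_number / second_number if second_number else 0.0

# 20 requests in total, 3 of which target material resource "X"
log = ["X", "X", "X"] + ["A"] * 17
print(access_probability_expectation(log, "X"))  # 0.15
```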
It should be understood that choosing the expectation of the access probability as the access probability feature is merely illustrative; in other embodiments, features of other dimensions may be chosen as the access probability feature. In one possible implementation, different weights may be assigned to access events at different time points: an access event triggered at a later time point is assigned a first weight, and an access event triggered at an earlier time point is assigned a second weight, where the first weight is higher than the second weight. This ensures that, during statistics, a recently accessed material resource carries more weight than one accessed long ago, so its access probability feature is higher, and in turn its first temperature information is higher, thereby ensuring the timeliness of the temperature information.
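Such recency weighting can be sketched as below. The exponential-decay scheme and the half-life value are assumptions for illustration; the text only requires that recent access events outweigh older ones.

```python
def weighted_access_counts(access_log, half_life_days=7.0):
    """Sum per-resource access weights, where newer events weigh more.
    access_log holds (resource_id, age_in_days) pairs; the exponential
    half-life decay is an illustrative choice."""
    counts = {}
    for resource_id, age_days in access_log:
        weight = 0.5 ** (age_days / half_life_days)  # age 0 -> 1.0, one half-life -> 0.5
        counts[resource_id] = counts.get(resource_id, 0.0) + weight
    return counts
```

A resource accessed once today then outweighs one accessed twice a half-life ago only at the margin, which preserves both frequency and recency information.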
And step two, the server acquires first temperature information of the material resources according to the access probability characteristics of the material resources.
The server may establish a function between the access probability feature and the first temperature information, an independent variable of the function is the access probability feature, a dependent variable of the function is the first temperature information, and the access probability feature may be calculated by using the function to obtain the first temperature information. In this way, the access probability characteristic may be scaled to the first temperature information.
In some embodiments, the function between the access probability feature and the first temperature information may be a piecewise function. Specifically, the value range of the access probability feature may be divided into a plurality of intervals, the value range of the first temperature information may also be divided into a plurality of intervals, and each interval of the access probability feature may be mapped to an interval of the first temperature information by the piecewise function. For example, if the highest interval of the value range of the first temperature information is (80, 100) and the highest interval of the access probability feature is (0.15, 1), the function may include t = min(100, 80 + p1 × 100), which maps the highest interval of the access probability feature to the highest interval of the first temperature information. Here t represents the first temperature information, p1 represents the access probability feature, min takes the minimum value, and × denotes multiplication. For example, when p1 >= 0.15, the resulting t satisfies 80 <= t <= 100.
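A sketch of such a piecewise mapping is given below. Only the highest segment follows the example in the text; the lower breakpoints and segments are assumptions added to make the function total over [0, 1].

```python
def probability_to_temperature(p1):
    """Piecewise mapping from access-probability expectation p1 to first
    temperature information t in [0, 100]. Only the highest segment
    (t = min(100, 80 + p1 * 100) for p1 >= 0.15) follows the example in
    the text; the lower segments are assumed for illustration."""
    if p1 >= 0.15:
        return min(100.0, 80.0 + p1 * 100.0)      # highest interval -> (80, 100]
    if p1 >= 0.05:
        return 40.0 + (p1 - 0.05) / 0.10 * 40.0   # assumed: [0.05, 0.15) -> [40, 80)
    return p1 / 0.05 * 40.0                       # assumed: [0, 0.05) -> [0, 40)
```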
After the server calculates the first temperature information, the server may write the first temperature information into a configuration file of the material resource, so as to transmit the first temperature information to the client through the configuration file in the following.
And in the second acquisition mode, the electronic equipment updates the historical temperature information of the material resource according to the access request of the material resource to obtain second temperature information.
The second temperature information is larger than the historical temperature information; the second temperature information can be obtained by adjusting the historical temperature information when a cached material resource is accessed again. By raising the historical temperature of a material resource in response to access requests, the temperature information of the material resource rises as the number of accesses increases, and thus more accurately expresses the probability of the material resource being accessed in the near term.
For example, the second temperature information may be acquired as follows: a temperature-increase step is set, and when the material resource is accessed, the second temperature information is obtained from the historical temperature information and the temperature-increase step, the second temperature information being the sum of the two. For example, if the historical temperature information is t1 and the temperature-increase step is q, the second temperature information is t1 + q, where t1 may range from 0 to 100 and q is a positive number. The temperature-increase step can be preset. Of course, updating the historical temperature information by a fixed step is merely an example; in other embodiments, the historical temperature information may be non-linearly mapped to obtain the second temperature information.
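This warm-up update can be sketched in one line. The step value and the clamping to the top of the 0-100 range are illustrative choices, not mandated by the text.

```python
def warm_up(t1, q=5.0, t_max=100.0):
    """Second temperature information = historical temperature t1 plus the
    temperature-increase step q, clamped to the top of the 0-100 range.
    The step value and the clamping are illustrative choices."""
    return min(t_max, t1 + q)
```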
The access request for the material resource is used for requesting to acquire the material resource. The electronic device may receive an access request for a material resource during the display of the virtual scene. The access request of the material resource may be triggered by an operation performed by the user on a control in the virtual scene, for example, if the user clicks a ball hitting option and performs a ball hitting operation, the access request of the ball hitting animation may be triggered. In addition, the access request of the material resource may be triggered not by an operation but automatically by some event associated with the virtual scene. For example, when the virtual scene is started, an access request for the material resources in the virtual scene may be automatically received, and for example, when the game progress of the virtual scene meets a condition, such as a player wins, an access request for the material resources in the virtual scene is automatically triggered.
Different users have different usage habits in a game application, so when the terminals of different users run clients of the virtual scene, their access probabilities for the same material resource also differ. For example, the access probability of user A's terminal for material resource a may be higher than that of user B's terminal, while the access probability of user A's terminal for material resource b may be lower than that of user B's terminal. With the method above, the temperature information updated by the electronic device conforms to the access pattern of the local terminal for the material resources. For example, if the electronic device is user A's terminal, that terminal will continuously raise the temperature information of material resource a through its access requests, ensuring that the temperature information of material resource a on user A's terminal is higher than on user B's terminal. In this way, the temperature information of the material resources matches each user's personal usage, and is therefore more accurate.
In some embodiments, the two acquisition modes may be adapted to different situations. For example, the first acquisition mode may be suitable when the material resource is accessed for the first time, and the second acquisition mode when the material resource is accessed again. A material resource being accessed for the first time means that it is not currently in the cache and needs to be loaded from a local storage medium or a remote end; in this case, the first temperature information may be the initial temperature information of the resource. A material resource being accessed again means that it is already in the cache and can be read by accessing the cache; in this case, the second temperature information may be the temperature information after the resource is warmed up.
The first acquisition mode and the second acquisition mode can be combined, with the appropriate mode executed according to the current situation of the material resource. For example, when the game is running, if a material resource A accessed for the first time is to be acquired, its temperature information is read from its configuration file; if a material resource B already in the cache is to be acquired, its temperature information is adaptively adjusted by applying a specific strategy according to the player's usage habits at runtime.
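Combining the two modes can be sketched as a single dispatch. The data-structure and parameter names here are illustrative assumptions.

```python
def get_temperature(resource_id, cached_temps, config_temps, q=5.0):
    """Combine the two acquisition modes: a first access reads the initial
    temperature from the configuration file (mode one); a repeat access to
    a cached resource warms up its historical temperature (mode two).
    Data-structure and parameter names are illustrative."""
    if resource_id in cached_temps:                      # mode two: re-access
        cached_temps[resource_id] = min(100.0, cached_temps[resource_id] + q)
    else:                                                # mode one: first access
        cached_temps[resource_id] = config_temps[resource_id]
    return cached_temps[resource_id]
```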
The algorithm capable of obtaining the temperature information can be provided as a temperature model, the temperature model can determine an initial value of the temperature information based on the first obtaining mode, and the temperature model can perform adaptive adjustment on the temperature information based on the second obtaining mode.
202. And the electronic equipment determines the target cache position of the material resource according to the temperature information.
203. The electronic equipment caches the material resources to a target cache position in the cache queue.
The target cache position refers to the position where the material resource is to be cached. The cache queue is a queue for caching the material resources of the virtual scene, and the target cache position may be located in the cache queue. The temperature information of a material resource and its target cache position may be associated with each other; for example, the larger the temperature information, the closer the target cache position is to the head of the queue. That is, a material resource with larger temperature information is located nearer the head of the cache queue, and one with smaller temperature information nearer the tail, so the cache queue caches resources from hot to cold in sequence from head to tail. For example, fig. 3 is a schematic diagram of a cache queue, in which a material resource with hot temperature information is cached at the head of the queue and a material resource with cold temperature information is cached at the tail.
There may be multiple implementation manners of caching the material resources, exemplified below by a first implementation manner and a second implementation manner.
According to the first implementation mode, the electronic equipment inserts the first material resource into a target cache position in a cache queue.
When a material resource is to be accessed, whether the cache queue includes it can be queried; if the cache queue does not include the material resource to be accessed, the material resource is a first material resource, i.e., a material resource that has not yet been cached. At this time, the first material resource may be acquired from a storage medium other than the local cache or from the server, a target cache position matching its temperature information is found in the cache queue, and the first material resource is inserted at the target cache position. For example, referring to fig. 4, a schematic diagram of inserting an initially accessed resource into the cache queue is shown.
In addition, referring to fig. 4, the material resource cached at the tail of the cache queue may be cache-eliminated, releasing the storage space occupied by one material resource to leave room for inserting the first material resource.
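The insert-with-elimination behavior of the first implementation manner can be sketched as below. The class, capacity, and method names are illustrative; a production queue would avoid re-sorting on every insert.

```python
class TemperatureCacheQueue:
    """Minimal sketch of a temperature-ordered cache queue: entries are
    kept hot-to-cold from head to tail, and when the queue is full the
    tail (coldest) entry is eliminated before a new resource is inserted.
    Capacity and method names are illustrative."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = []  # (temperature, resource_id), head = hottest

    def insert_first_access(self, resource_id, temperature):
        if len(self.entries) >= self.capacity:
            self.entries.pop()                  # eliminate tail (coldest)
        self.entries.append((temperature, resource_id))
        self.entries.sort(key=lambda e: -e[0])  # restore hot-to-cold order
```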
According to the second implementation mode, the electronic device adjusts the second material resource from its historical cache position in the cache queue to the target cache position.
When a material resource is to be accessed, whether the cache queue includes it can be queried; if the cache queue includes the material resource to be accessed, the material resource is a second material resource, i.e., a previously cached material resource. At this time, the temperature information of the second material resource may be adjusted, a target cache position matching the adjusted temperature information is found in the cache queue, and the cache position of the second material resource is adjusted from its original position to the target cache position. For example, see fig. 5, which shows a schematic diagram of moving a re-accessed resource to a new position in the cache queue.
In some embodiments, the buffer queue may include a plurality of intervals, each interval corresponds to a value range and may be referred to as a temperature zone, and each interval is used to buffer material resources whose temperature information falls within the corresponding value range, so that resources buffered in different intervals have temperature information in different value ranges. The number of intervals included in the buffer queue may be set according to requirements; for example, the buffer queue may be divided into n intervals, where 3 ≤ n ≤ 5. The client of the virtual scene can divide the buffer queue into the plurality of intervals and determine the value range of each interval.
Illustratively, the buffer queue may include five intervals: a first interval, a second interval, a third interval, a fourth interval, and a fifth interval. Referring to fig. 3, the first interval may be referred to as the hot temperature zone, with temperature information ranging from 80 to 100; the second interval may be referred to as the high-temperature zone, with temperature information ranging from 60 to 80; the third interval may be referred to as the normal-temperature zone, with temperature information ranging from 40 to 60; the fourth interval may be referred to as the low-temperature zone, with temperature information ranging from 20 to 40; and the fifth interval may be referred to as the cold-temperature zone, with temperature information ranging from 0 to 20.
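The five value ranges above may be expressed as a lookup table; the short zone names are illustrative labels, and in this sketch a temperature on a shared boundary is assigned to the hotter zone:

```python
# Temperature zones of the buffer queue, ordered hot -> cold (ranges per fig. 3).
ZONES = [
    ("hot", 80, 100),
    ("high", 60, 80),
    ("normal", 40, 60),
    ("low", 20, 40),
    ("cold", 0, 20),
]

def zone_of(temperature):
    """Return the name of the temperature zone whose value range contains `temperature`."""
    for name, lo, hi in ZONES:  # hotter zones checked first, so shared endpoints go hot
        if lo <= temperature <= hi:
            return name
    raise ValueError("temperature out of range: %r" % temperature)
```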
In some embodiments, the buffer queue may include a plurality of LRU queues, one LRU queue per interval of the buffer queue. In this way, the LRU algorithm serves as the basis of the improvement, and a plurality of LRU queues are combined into a whole to cache the material resources. Because the LRU algorithm has the lowest performance overhead among the mainstream cache elimination algorithms, the performance overhead of the cache queue can be kept low. In other embodiments, the buffer queue may instead include a plurality of LRU-K queues, one LRU-K queue per interval; or a plurality of LFU queues, one LFU queue per interval. Of course, the buffer queue may also mix queue types; for example, it may include at least one LRU queue and at least one LFU queue, with one LRU queue or one LFU queue per interval. These manners are merely examples, and this embodiment does not specifically limit the queue type of each interval.
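A minimal sketch of combining one LRU queue per interval into a whole, using Python's `OrderedDict` as each per-zone LRU queue; the class and method names are assumptions for illustration:

```python
from collections import OrderedDict

class ZonedLRUCache:
    """A buffer queue composed of one LRU queue per temperature zone."""

    def __init__(self, zone_names):
        # Most-recently-used entries are kept at the front of each OrderedDict.
        self.zones = {name: OrderedDict() for name in zone_names}

    def insert_head(self, zone, key, value):
        """Insert (or refresh) an entry at the head of the given zone's LRU queue."""
        self.zones[zone][key] = value
        self.zones[zone].move_to_end(key, last=False)  # head = most recently used

    def evict_tail(self, zone):
        """Pop the least-recently-used entry of the given zone, or None if empty."""
        return self.zones[zone].popitem(last=True) if self.zones[zone] else None
```

The `OrderedDict` operations `move_to_end` and `popitem` are both O(1), which reflects why the per-zone LRU basis keeps the overall overhead low.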
If the cache queue is divided into a plurality of intervals, the target cache position of the material resource can be found from the cache queue through the following steps from the first step to the second step.
Step one, the electronic equipment determines a target interval from a plurality of intervals.
And the target interval is an interval corresponding to the value range to which the temperature information of the material resource belongs. For example, referring to fig. 3, if the temperature information of the material resource is 90, the target section is a hot temperature zone, and if the temperature information of the material resource is 75, the target section is a high temperature zone. The electronic device can compare the temperature information of the material resource with the endpoint of each interval, so as to determine the target interval.
And step two, the electronic equipment determines the target cache position from the target interval.
In some embodiments, the first (head) position of the target interval may be determined as the target cache location. For example, referring to fig. 4, if the temperature information of a resource read from the configuration file upon first access is 50, the temperature zone corresponding to that temperature information is determined to be the normal temperature zone, and the resource is inserted at the head of the normal temperature zone. Referring to fig. 5, if an already-cached resource is accessed again, its historical temperature information is raised; if the raised temperature information falls into the high-temperature zone, the cache position of the resource is adjusted to the head of the high-temperature zone, and if it falls into the normal temperature zone, the cache position is adjusted to the head of the normal temperature zone.
In this way, after the temperature of a material resource is raised, if the temperature information still matches the temperature zone where the resource currently resides, the resource is moved from its current position to the head of that zone according to the position adjustment rule of the LRU algorithm; if the temperature information no longer matches the current zone, the resource is moved to the next hotter temperature zone and inserted at its head according to the same rule. Alternatively, the temperature-rise step may be configured to be less than or equal to the span of one temperature zone; for example, if the span of one temperature zone is 20, the temperature-rise step may be configured to be less than or equal to 20. In this way, each position adjustment moves a material resource up by at most one temperature zone, and a resource never skips across temperature zones.
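The at-most-one-zone-per-adjustment property described above may be sketched as follows, with an assumed temperature-rise step of 10 (no larger than the zone span of 20) and zone index 0 denoting the hottest zone:

```python
ZONE_SPAN = 20  # each of the five zones spans 20 temperature units (per fig. 3)
STEP = 10       # assumed temperature-rise step, configured <= ZONE_SPAN

def zone_index(temperature):
    """Map a temperature to a zone index; 0 is the hottest zone, 4 the coldest."""
    return max(0, 4 - int(temperature // ZONE_SPAN))

def on_reaccess(temperature):
    """Raise the temperature on re-access; return (new_temperature, new_zone_index)."""
    new_t = min(temperature + STEP, 100)   # temperature is capped at 100
    return new_t, zone_index(new_t)
```

Because `STEP <= ZONE_SPAN`, `zone_index` can decrease by at most 1 per re-access, i.e. a resource climbs at most one temperature zone at a time.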
In other embodiments, the target buffer location may be a buffer location other than the first location of the target interval. For example, the temperature information of the material resource may be compared with the temperature information of the material resource that has been cached in the target interval, so as to obtain the target cache position. The cached material resources in the target interval can be arranged in the order of the temperature information from large to small. For example, the first material resource in the target interval may be the material resource with the largest temperature information in the target interval.
204. The electronic equipment performs cache elimination on the cached material resources of the cache queue in order of position from back to front.
Because cache elimination is carried out in order of position from back to front, the further back a material resource is in the cache queue, the higher its elimination priority, and the further forward it is, the lower its elimination priority. Therefore, material resources with small temperature information are eliminated first and material resources with large temperature information later, so that cold resources are cleared from the cache in time while hot resources remain, which prolongs the residence time of hot resources in the cache and improves the hit rate when accessing them. Experiments prove that the method can effectively relieve the cache pollution problem of the LRU algorithm: the hit rate in every scene is superior to that of the LRU algorithm and the LRU-K algorithm (K ≤ 5), while the performance overhead is on par with the LRU algorithm, avoiding excessive performance cost.
The electronic device can execute the step of cache elimination under the trigger of one or more conditions. For example, the step of cache eviction may be executed when the capacity of the cache queue reaches the capacity threshold, or the step of cache eviction may also be executed under the trigger of the cache cleaning instruction.
In some embodiments, if the buffer queue is divided into a plurality of sections, the process of buffer eviction may include the following steps one to two:
step one, the electronic equipment accesses the last interval in the buffer queue.
And step two, the electronic equipment caches and eliminates the cached material resources in the last interval.
For example, referring to fig. 6, if the last interval in the buffer queue is the cold temperature zone, the cold temperature zone may be accessed and the material resources in it cache-eliminated. In this way, the cold-zone resources in the buffer queue are eliminated preferentially. During cache elimination of the last interval, the material resources in it may be eliminated in sequence from the tail forward, according to the elimination rule of the LRU algorithm.
In some embodiments, if the material resources are not cached in the last interval, the electronic device continues to access other intervals before the last interval in the order from the last interval to the first interval, and cache and eliminate the material resources in the other intervals.
For example, referring to fig. 6, if there is no material resource in the cold temperature zone, the low temperature zone may be accessed next and its material resources cache-eliminated; similarly, if there is no material resource in the low temperature zone, the normal temperature zone may be accessed next and its material resources cache-eliminated. In addition, if the currently accessed temperature zone has cached resources but the capacity of the cache queue still does not meet the condition after every resource in that zone has been eliminated, the temperature zones before it can be accessed in turn, so that elimination proceeds gradually toward hotter zones.
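The coldest-first elimination order described above may be sketched as follows; the `zones` structure (one LRU queue per zone, ordered hot to cold) and the function name are assumptions:

```python
from collections import OrderedDict

def evict(zones, n_to_free):
    """zones: list of per-zone LRU queues (OrderedDict), ordered hot -> cold.
    Eliminate up to n_to_free entries, coldest zone first, tail (LRU end) first."""
    freed = []
    for zone in reversed(zones):              # start from the last (coldest) interval
        while zone and len(freed) < n_to_free:
            key, _ = zone.popitem(last=True)  # LRU rule: eliminate from the tail
            freed.append(key)
        if len(freed) >= n_to_free:
            break
    return freed
```

Empty zones are skipped automatically, so elimination proceeds toward hotter zones only when colder zones cannot free enough space.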
In some embodiments, the electronic device may monitor the number of the buffered material resources of the first interval in the buffer queue, compare the number of the buffered material resources of the first interval with a threshold, and adjust each buffered material resource of the buffer queue from the current interval to the next interval if the number of the buffered material resources of the first interval reaches the threshold. For example, when the cached high-temperature resources in the cache queue reach the limit, the temperature band in which all the cached material resources are located is adjusted downwards by one temperature band. Referring to fig. 7, schematically, if the number of the material resources corresponding to the hot temperature zone reaches a threshold value, the material resources corresponding to the hot temperature zone are adjusted from the hot temperature zone to a high temperature zone, the material resources corresponding to the high temperature zone are adjusted from the high temperature zone to a normal temperature zone, the material resources corresponding to the normal temperature zone are adjusted from the normal temperature zone to a low temperature zone, and the material resources corresponding to the low temperature zone are adjusted from the low temperature zone to a cold temperature zone. Therefore, the cached material resources can be integrally cooled.
Because the number of accessed resources is usually much larger than the capacity of the cache queue while the client runs the virtual scene, if the capacity of the hottest temperature zone were not limited, all material resources cached by the cache queue would eventually become high-temperature resources, and the cache queue would degrade into an ordinary LRU queue. Limiting the number of material resources cached in the first interval ensures that the number of resources in the hottest temperature zone stays within a certain range, so that the cached material resources remain distributed across a plurality of temperature zones.
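The overall cool-down described above may be sketched as a one-zone shift. Note that the handling of entries already in the coldest zone is not specified above, so this sketch simply keeps them in the coldest zone; the structure of `zones` is an assumption:

```python
def cool_down(zones):
    """zones: list of per-zone dicts ordered hot -> cold, modified in place.
    Shift every zone's contents one zone colder; the hottest zone becomes empty."""
    old_cold = zones[-1]                  # remember the previously-cold entries
    for i in range(len(zones) - 1, 0, -1):
        zones[i] = zones[i - 1]           # zone i inherits the next hotter zone's entries
    zones[0] = {}                         # the hottest zone is emptied
    zones[-1].update(old_cold)            # previously-cold entries stay in the coldest zone
```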
Referring to fig. 8, a preparation may be made in advance for the cache eviction process before the cache eviction process is performed. The flow of the preparation work may be as follows in sequence:
(1) Acquire the expected access probability of the material resources.
(2) Configure the initial temperature information of the material resources.
(3) Divide the buffer queue into temperature zones and determine the temperature information interval of each temperature zone.
Referring to fig. 9, the work flow of the cache eviction process may be as shown in (1) to (9) below:
(1) When an access request for a material resource is received, judge whether the material resource exists in the cache; if it exists, execute (2); if not, execute (7).
(2) Raise the temperature of the material resource, then execute (3).
(3) Judge whether the raised temperature information of the material resource matches the current temperature zone; if it does not match, execute (4); if it matches, execute (9).
(4) Adjust the material resource from its current temperature zone to the next hotter temperature zone, then execute (5).
(5) Judge whether the high-temperature resources exceed the limit; if so, execute (6).
(6) Cool down the cached material resources as a whole and adjust their temperature zones downward.
(7) Judge whether cache elimination needs to be performed; if so, execute (8); if not, execute (9).
(8) Eliminate the low-temperature resources.
(9) If the material resource is a resource accessed for the first time, insert it into the head of the corresponding temperature zone according to the LRU rule.
This embodiment provides a method for cache elimination of the material resources of a virtual scene based on temperature information: temperature information represents the probability that a material resource is accessed in the virtual scene, each material resource is cached at the position corresponding to its temperature information in the cache queue, and cache elimination is performed on the cache queue in order of position from back to front. In this way, the higher the probability that a material resource is accessed, the larger its temperature information and the later its elimination time point. That is, the cold resources of the virtual scene are eliminated first, and the hot resources later. On one hand, prolonging the residence time of hot resources in the cache raises the probability of a cache hit when a hot resource is accessed, which improves the cache hit rate and relieves the cache pollution problem. On the other hand, clearing cold resources from the cache as early as possible saves cache space.
In particular, in a periodic access scene, if a material resource is accessed periodically, its temperature information is maintained at a high level, the resource stays near the front of the cache queue, and its elimination priority is low, so it is not easily eliminated. This prolongs the residence time of the resource in the cache and improves the cache hit rate in periodic scenes, solving the cache pollution problem that the LRU algorithm and other mainstream cache elimination algorithms find hard to avoid in periodic scenes, and greatly optimizing the performance of cache elimination.
In addition, in a sporadic access scene, if a material resource has not been accessed for a long time and happens to be accessed just as the cache elimination algorithm runs, its temperature information does not surge because of that one sporadic access: the cache position of a material resource is determined by its temperature information, which in turn reflects its access probability over a period of time. The resource is therefore not moved to the head of the queue, a sporadically accessed resource is prevented from occupying the cache position of a hot resource, and the cache hit rate in sporadic scenes is improved. This solves the cache pollution problem that the LRU algorithm and other mainstream cache elimination algorithms find hard to avoid in sporadic scenes, and greatly optimizes the performance of cache elimination.
Fig. 10 is a schematic structural diagram of a cache management apparatus according to an embodiment of the present application. Referring to fig. 10, the apparatus includes:
the determining module 1001 is configured to determine a target cache position of a material resource according to temperature information of the material resource in a virtual scene, where the temperature information indicates a probability that the material resource is accessed in the virtual scene, and the larger the temperature information is, the closer to the front the target cache position is;
the caching module 1002 is configured to cache the material resources to a target caching position in a caching queue;
the elimination module 1003 is configured to perform cache elimination on the cached material resources in the cache queue according to the sequence from the back to the front of the positions.
Optionally, the buffer queue includes multiple intervals, each interval corresponds to a temperature range, and the determining module 1001 is configured to determine a target interval from the multiple intervals, where the target interval is an interval corresponding to a value range to which the temperature information belongs; and determining a target buffer position from the target interval.
Optionally, the determining module 1001 is configured to determine a first position of the target interval as a target cache position; or comparing the temperature information of the material resources with the temperature information of the material resources cached in the target interval to obtain a target caching position, wherein the material resources cached in the target interval are arranged in the order of the temperature information from large to small.
Optionally, the eliminating module 1003 is configured to access a last interval in the buffer queue; and caching and eliminating the cached material resources in the last interval.
Optionally, the eliminating module 1003 is further configured to, if the material resource is not cached in the last interval, continue to access other intervals before the last interval according to an order from the last interval to the first interval; and caching and eliminating the material resources in other intervals.
Optionally, the apparatus further comprises:
the monitoring module is used for monitoring the number of the cached material resources in the first interval in the cache queue;
and the adjusting module is used for adjusting each cached material resource of the cache queue from the current interval to the next interval if the number of the cached material resources of the first interval reaches a threshold value.
Optionally, the material resources include a first material resource, where the first material resource is a material resource that has not been cached in the cache queue, and the caching module 1002 is configured to insert the first material resource into a target cache position in the cache queue.
Optionally, the material resources include second material resources, where the second material resources are material resources that have been cached by the cache queue, and the cache module 1002 is configured to adjust the second material resources from a historical cache position in the cache queue to a target cache position.
Optionally, the temperature information includes first temperature information, and the apparatus further includes: and the reading module is used for reading the first temperature information from the configuration file of the material resource.
Optionally, the obtaining of the first temperature information includes:
acquiring access probability characteristics of material resources according to an access request set of at least one client of a virtual scene, wherein the access request set comprises access requests for the material resources and access requests for other material resources in the virtual scene;
and acquiring first temperature information of the material resources according to the access probability characteristics of the material resources.
Optionally, the access probability feature includes an expectation of an access probability, and the expectation of an access probability acquiring process includes:
acquiring a first time and a second time according to the access request set, wherein the first time is the total number of times of the material resources being accessed, and the second time is the sum of the total number of times of the material resources and other material resources being accessed in the virtual scene;
and acquiring the expectation of the access probability according to the first times and the second times, wherein the expectation of the access probability is the ratio of the first times to the second times.
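The expectation computation described above may be sketched as follows; the function and variable names are assumptions for illustration:

```python
from collections import Counter

def access_probability(access_log, resource_id):
    """Expected access probability of a resource: (first count) / (second count),
    where the first count is the total accesses of this resource and the second
    count is the total accesses of all resources in the access request set."""
    counts = Counter(access_log)
    first = counts[resource_id]        # total times this resource was accessed
    second = sum(counts.values())      # total accesses across all resources
    return first / second if second else 0.0
```

This expectation could then serve as the first temperature information of the resource, e.g. after scaling into the 0-100 temperature range.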
Optionally, the temperature information includes second temperature information, and the apparatus further includes: and the updating module is used for updating the historical temperature information of the material resource according to the access request of the material resource to obtain second temperature information, and the second temperature information is greater than the historical temperature information.
All the above optional technical solutions may be combined arbitrarily to form the optional embodiments of the present disclosure, and are not described herein again.
It should be noted that: in the above embodiment, when the cache management apparatus manages the cache, only the division of the functional modules is described as an example, and in practical applications, the function distribution may be completed by different functional modules according to needs, that is, the internal structure of the cache management apparatus is divided into different functional modules to complete all or part of the functions described above. In addition, the embodiments of the cache management apparatus and the cache management method provided in the foregoing embodiments belong to the same concept, and specific implementation processes thereof are described in the embodiments of the methods for details, which are not described herein again.
The electronic device in the foregoing method embodiment may be implemented as a terminal. For example, fig. 11 shows a block diagram of a terminal 1100 provided in an exemplary embodiment of the present application. The terminal 1100 may be: a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. Terminal 1100 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, or desktop terminal.
In general, terminal 1100 includes: one or more processors 1101 and one or more memories 1102.
Processor 1101 may include one or more processing cores, such as a 4-core processor, an 8-core processor, or the like. The processor 1101 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 1101 may also include a main processor and a coprocessor, the main processor is a processor for processing data in an awake state, and is also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 1101 may be integrated with a GPU (Graphics Processing Unit) that is responsible for rendering and drawing the content that the display screen needs to display. In some embodiments, the processor 1101 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 1102 may include one or more computer-readable storage media, which may be non-transitory. Memory 1102 can also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 1102 is used to store at least one instruction for execution by processor 1101 to implement the cache management methods provided by the method embodiments of the present application.
In some embodiments, the terminal 1100 may further include: a peripheral interface 1103 and at least one peripheral. The processor 1101, memory 1102 and peripheral interface 1103 may be connected by a bus or signal lines. Various peripheral devices may be connected to the peripheral interface 1103 by buses, signal lines, or circuit boards. Specifically, the peripheral device includes: at least one of radio frequency circuitry 1104, touch screen display 1105, camera assembly 1106, audio circuitry 1107, positioning assembly 1108, and power supply 1109.
The peripheral interface 1103 may be used to connect at least one peripheral associated with I/O (Input/Output) to the processor 1101 and the memory 1102. In some embodiments, the processor 1101, memory 1102, and peripheral interface 1103 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1101, the memory 1102 and the peripheral device interface 1103 may be implemented on separate chips or circuit boards, which is not limited by this embodiment.
The Radio Frequency circuit 1104 is used to receive and transmit RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 1104 communicates with communication networks and other communication devices via electromagnetic signals. The radio frequency circuit 1104 converts an electric signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electric signal. Optionally, the radio frequency circuit 1104 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 1104 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: the world wide web, metropolitan area networks, intranets, generations of mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the rf circuit 1104 may further include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 1105 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1105 is a touch display screen, the display screen 1105 also has the ability to capture touch signals on or over the surface of the display screen 1105. The touch signal may be input to the processor 1101 as a control signal for processing. At this point, the display screen 1105 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, display 1105 may be one, providing the front panel of terminal 1100; in other embodiments, the display screens 1105 can be at least two, respectively disposed on different surfaces of the terminal 1100 or in a folded design; in still other embodiments, display 1105 can be a flexible display disposed on a curved surface or on a folded surface of terminal 1100. Even further, the display screen 1105 may be arranged in a non-rectangular irregular pattern, i.e., a shaped screen. The Display screen 1105 may be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), and the like.
Camera assembly 1106 is used to capture images or video. Optionally, camera assembly 1106 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 1106 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuitry 1107 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 1101 for processing or inputting the electric signals to the radio frequency circuit 1104 to achieve voice communication. For stereo capture or noise reduction purposes, multiple microphones may be provided, each at a different location of terminal 1100. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 1101 or the radio frequency circuit 1104 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, the audio circuitry 1107 may also include a headphone jack.
Positioning component 1108 is used to locate the current geographic position of terminal 1100 for navigation or LBS (Location Based Service). The positioning component 1108 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, or the Galileo system of the European Union.
Power supply 1109 is configured to supply power to the components of terminal 1100. The power supply 1109 may use alternating or direct current and may include a disposable or rechargeable battery. When the power supply 1109 includes a rechargeable battery, the battery may be charged through a wired line (wired charging) or through a wireless coil (wireless charging), and may also support fast-charge technology.
In some embodiments, terminal 1100 can also include one or more sensors 1110. The one or more sensors 1110 include, but are not limited to: acceleration sensor 1111, gyro sensor 1112, pressure sensor 1113, fingerprint sensor 1114, optical sensor 1115, and proximity sensor 1116.
Acceleration sensor 1111 may detect acceleration magnitudes along the three axes of a coordinate system established with terminal 1100. For example, the acceleration sensor 1111 may detect the components of gravitational acceleration along the three axes. The processor 1101 may control the touch display screen 1105 to display the user interface in landscape or portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1111. The acceleration sensor 1111 may also be used to collect motion data for games or user activity.
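The landscape/portrait decision described above can be sketched as a comparison of gravity components. The axis convention (y along the device's long edge) and the tie-breaking rule below are illustrative assumptions, not taken from the patent:

```python
def choose_orientation(gx: float, gy: float, gz: float) -> str:
    """Pick a UI orientation from accelerometer gravity components
    (assumed axes: x along the short edge, y along the long edge,
    z out of the screen).

    Gravity pulling mainly along the long edge means the device is
    held upright (portrait); mainly along the short edge means it is
    held sideways (landscape)."""
    return "portrait" if abs(gy) >= abs(gx) else "landscape"
```

For a device held upright, gravity is roughly (0, 9.8, 0) in this convention, so `choose_orientation(0.0, 9.8, 0.3)` yields `"portrait"`.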
The gyro sensor 1112 may detect the body orientation and rotation angle of terminal 1100, and may cooperate with the acceleration sensor 1111 to capture the user's 3D actions on terminal 1100. From the data collected by the gyro sensor 1112, the processor 1101 may implement functions such as motion sensing (for example, changing the UI according to a tilting operation by the user), image stabilization during shooting, game control, and inertial navigation.
Pressure sensor 1113 may be disposed on a side bezel of terminal 1100 and/or beneath the touch display screen 1105. When the pressure sensor 1113 is disposed on the side bezel, it can detect the user's grip signal on terminal 1100, and the processor 1101 performs left/right-hand recognition or shortcut operations according to the collected grip signal. When the pressure sensor 1113 is disposed beneath the touch display screen 1105, the processor 1101 controls operability controls on the UI according to the pressure the user applies to the touch display screen 1105. The operability controls include at least one of a button control, a scroll-bar control, an icon control, and a menu control.
The fingerprint sensor 1114 collects the user's fingerprint, and either the processor 1101 or the fingerprint sensor 1114 itself identifies the user from the collected fingerprint. Once the user's identity is recognized as trusted, the processor 1101 authorizes the user to perform sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, and changing settings. The fingerprint sensor 1114 may be disposed on the front, back, or side of terminal 1100; when a physical button or vendor logo is provided on terminal 1100, the fingerprint sensor 1114 may be integrated with it.
Optical sensor 1115 is used to collect ambient light intensity. In one embodiment, the processor 1101 may control the display brightness of the touch display screen 1105 based on the ambient light intensity collected by the optical sensor 1115. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 1105 is increased; when the ambient light intensity is low, the display brightness of the touch display screen 1105 is turned down. In another embodiment, processor 1101 may also dynamically adjust the shooting parameters of camera assembly 1106 based on the ambient light intensity collected by optical sensor 1115.
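One plausible reading of this brightness adjustment is a clamped linear map from ambient light intensity to a brightness fraction. The lux ceiling and the brightness floor below are illustrative assumptions, not values from the patent:

```python
def display_brightness(lux: float, min_b: float = 0.1, max_b: float = 1.0,
                       max_lux: float = 1000.0) -> float:
    """Map ambient light intensity (lux) to a display brightness in
    [min_b, max_b]: brighter surroundings yield a brighter screen."""
    frac = min(max(lux / max_lux, 0.0), 1.0)  # clamp to [0, 1]
    return min_b + frac * (max_b - min_b)
```

A real implementation would typically smooth the sensor readings over time before applying such a mapping, to avoid visible brightness jumps.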
Proximity sensor 1116, also referred to as a distance sensor, is typically disposed on the front panel of terminal 1100 and captures the distance between the user and the front face of the terminal. In one embodiment, when the proximity sensor 1116 detects that the distance between the user and the front face of terminal 1100 is gradually decreasing, the processor 1101 controls the touch display screen 1105 to switch from the screen-on state to the screen-off state; when the distance gradually increases, the processor 1101 controls the touch display screen 1105 to switch from the screen-off state back to the screen-on state.
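The proximity-driven switching is effectively a two-state machine on the measured distance. The sketch below adds a hysteresis band (both thresholds are illustrative assumptions) so the screen does not flicker when the distance hovers near a single cutoff:

```python
def next_screen_state(state: str, distance_cm: float,
                      near_cm: float = 3.0, far_cm: float = 5.0) -> str:
    """Return the next screen state ("on"/"off") given the current
    state and the distance reported by the proximity sensor."""
    if state == "on" and distance_cm <= near_cm:
        return "off"  # user approaching the front face: darken the screen
    if state == "off" and distance_cm >= far_cm:
        return "on"   # user moving away: light the screen again
    return state      # inside the hysteresis band: keep the current state
```

Using two thresholds (near < far) means a distance of, say, 4 cm never toggles the screen on its own; only crossing a boundary does.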
Those skilled in the art will appreciate that the configuration shown in fig. 11 does not constitute a limitation of terminal 1100, and may include more or fewer components than those shown, or may combine certain components, or may employ a different arrangement of components.
The electronic device in the foregoing method embodiments may also be implemented as a server. For example, fig. 12 is a schematic structural diagram of a server provided in this embodiment. The server 1200 may vary considerably in configuration and performance, and may include one or more processors (CPUs) 1201 and one or more memories 1202, where the memory 1202 stores at least one program code that is loaded and executed by the processor 1201 to implement the cache management method provided by each of the foregoing method embodiments. Of course, the server may also have a wired or wireless network interface, an input/output interface, and other components to facilitate input and output, and the server may further include other components for implementing device functions, which are not described in detail here.
In an exemplary embodiment, a computer-readable storage medium is also provided, such as a memory including at least one program code executable by a processor to perform the cache management method of the above embodiments. For example, the computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a Compact Disc Read-Only Memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, or the like.
It should be understood that, in the various embodiments of the present application, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
It should be understood that determining B from A does not mean determining B from A alone; B may also be determined from A and/or other information.
It will be understood by those skilled in the art that all or part of the steps of the above embodiments may be implemented by hardware, or by a program instructing relevant hardware; the program may be stored in a computer-readable storage medium, such as a read-only memory, a magnetic disk, or an optical disk.
The present application is intended to cover various modifications, alternatives, and equivalents, which may be included within the spirit and scope of the present application.

Claims (15)

1. A method for cache management, the method comprising:
determining a target cache position of a material resource according to temperature information of the material resource in a virtual scene, wherein the temperature information represents the probability that the material resource is accessed in the virtual scene, and a larger value of the temperature information indicates a target cache position closer to the front;
caching the material resources to the target cache position in a cache queue;
and performing cache elimination on the cached material resources of the cache queue in order of position from back to front.
2. The method of claim 1, wherein the cache queue comprises a plurality of intervals, each interval corresponding to a temperature value range, and wherein the determining the target cache position of the material resource according to the temperature information of the material resource in the virtual scene comprises:
determining a target interval from the plurality of intervals, wherein the target interval is an interval corresponding to a value range to which the temperature information belongs;
and determining the target cache position from the target interval.
3. The method of claim 2, wherein the determining the target cache position from the target interval comprises:
determining a first position of the target interval as the target cache position; or,
and comparing the temperature information of the material resources with the temperature information of the material resources cached in the target interval to obtain the target caching position, wherein the material resources cached in the target interval are arranged in the descending order of the temperature information.
4. The method of claim 2, wherein the performing cache elimination on the cached material resources of the cache queue in order of position from back to front comprises:
accessing a last interval in the cache queue;
and caching and eliminating the cached material resources in the last interval.
5. The method of claim 4, wherein after the accessing the last interval in the cache queue, the method further comprises:
if no material resources are cached in the last interval, continuing to access the intervals before the last interval in order from the last interval to the first interval;
and caching and eliminating the material resources in the other intervals.
6. The method of claim 2, further comprising:
monitoring the number of the cached material resources in the first interval in the cache queue;
and if the number of the cached material resources in the first interval reaches a threshold value, adjusting each material resource cached in the cache queue from the current interval to the next interval.
7. The method of claim 1, wherein the material resource comprises a first material resource, the first material resource is a material resource that is not already cached in the cache queue, and the caching the material resource to the target cache location in the cache queue comprises:
and inserting the first material resource into the target cache position in the cache queue.
8. The method of claim 1, wherein the material resource comprises a second material resource, the second material resource being a material resource that has been cached by the cache queue, the caching the material resource to the target cache location in the cache queue comprising:
and adjusting the second material resource from its historical cache position in the cache queue to the target cache position.
9. The method according to claim 1, wherein the temperature information comprises first temperature information, and before the determining the target cache position of the material resource according to the temperature information of the material resource in the virtual scene, the method further comprises:
and reading the first temperature information from the configuration file of the material resource.
10. The method of claim 9, wherein the obtaining of the first temperature information comprises:
acquiring access probability characteristics of the material resources according to an access request set of at least one client of the virtual scene, wherein the access request set comprises access requests for the material resources and access requests for other material resources in the virtual scene;
and acquiring first temperature information of the material resources according to the access probability characteristics of the material resources.
11. The method of claim 10, wherein the access probability feature comprises an expectation of the access probability, and wherein obtaining the expectation of the access probability comprises:
acquiring a first count and a second count according to the access request set, wherein the first count is the total number of times the material resource has been accessed, and the second count is the total number of times the material resource and the other material resources in the virtual scene have been accessed;
and acquiring the expectation of the access probability according to the first count and the second count, wherein the expectation of the access probability is the ratio of the first count to the second count.
12. The method according to claim 1, wherein the temperature information comprises second temperature information, and before the determining the target cache position of the material resource according to the temperature information of the material resource in the virtual scene, the method further comprises:
and updating the historical temperature information of the material resource according to the access request of the material resource to obtain second temperature information, wherein the second temperature information is greater than the historical temperature information.
13. An apparatus for cache management, the apparatus comprising:
the determining module is configured to determine a target cache position of a material resource according to temperature information of the material resource in a virtual scene, wherein the temperature information represents the probability that the material resource is accessed in the virtual scene, and a larger value of the temperature information indicates a target cache position closer to the front;
the caching module is used for caching the material resources to the target caching position in a caching queue;
and the elimination module is configured to perform cache elimination on the cached material resources of the cache queue in order of position from back to front.
14. An electronic device, comprising one or more processors and one or more memories having at least one program code stored therein, the at least one program code loaded and executed by the one or more processors to perform operations performed by the cache management method of any one of claims 1 to 12.
15. A computer-readable storage medium having stored therein at least one program code, which is loaded and executed by a processor to perform operations performed by the cache management method according to any one of claims 1 to 12.
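Claims 1-12 describe a temperature-ordered cache queue. The Python sketch below is one possible reading of those claims; the interval bounds, list-based intervals, demotion rule, and the `access_probability` helper are illustrative assumptions, not the patented implementation:

```python
class TemperatureCache:
    """A cache queue split into intervals, each owning a temperature
    range; hotter (more frequently accessed) resources sit nearer the
    front, and eviction proceeds from back to front (claims 1-6)."""

    def __init__(self, bounds, first_interval_limit):
        # bounds: descending temperature thresholds, e.g. [100, 10, 0];
        # interval i holds resources with temperature >= bounds[i].
        self.bounds = bounds
        self.limit = first_interval_limit
        self.intervals = [[] for _ in bounds]  # lists of (name, temp)

    def put(self, name, temp):
        # Claim 8: a resource already cached is moved from its
        # historical position to the new target position.
        for iv in self.intervals:
            for j, (n, _) in enumerate(iv):
                if n == name:
                    del iv[j]
                    break
        # Claim 2: pick the interval whose range covers the temperature.
        idx = next((i for i, b in enumerate(self.bounds) if temp >= b),
                   len(self.bounds) - 1)
        iv = self.intervals[idx]
        # Claim 3: compare against cached temperatures so the interval
        # stays in descending order.
        j = 0
        while j < len(iv) and iv[j][1] >= temp:
            j += 1
        iv.insert(j, (name, temp))
        # Claim 6: when the first interval is full, demote every cached
        # resource from its current interval to the next interval.
        if len(self.intervals[0]) >= self.limit:
            for i in range(len(self.intervals) - 1, 0, -1):
                self.intervals[i] = self.intervals[i - 1] + self.intervals[i]
            self.intervals[0] = []

    def evict(self):
        # Claims 4-5: scan from the last interval toward the first and
        # evict the coldest entry of the first non-empty interval found.
        for iv in reversed(self.intervals):
            if iv:
                return iv.pop()[0]
        return None


def access_probability(resource_accesses, total_accesses):
    """Claim 11: the expectation of the access probability is the
    resource's access count divided by the total access count."""
    return resource_accesses / total_accesses if total_accesses else 0.0
```

For example, with `bounds=[100, 10, 0]`, a resource with temperature 150 lands in the first interval and one with temperature 5 in the last, so `evict()` removes the temperature-5 resource first.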
CN201911180121.8A 2019-11-27 2019-11-27 Cache management method, device, equipment and storage medium Active CN110908612B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911180121.8A CN110908612B (en) 2019-11-27 2019-11-27 Cache management method, device, equipment and storage medium


Publications (2)

Publication Number Publication Date
CN110908612A true CN110908612A (en) 2020-03-24
CN110908612B CN110908612B (en) 2022-02-22

Family

ID=69818578

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911180121.8A Active CN110908612B (en) 2019-11-27 2019-11-27 Cache management method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110908612B (en)


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104217019A (en) * 2014-09-25 2014-12-17 中国人民解放军信息工程大学 Content inquiry method and device based on multiple stages of cache modules
US20150039832A1 (en) * 2013-08-05 2015-02-05 Lsi Corporation System and Method of Caching Hinted Data
CN104507124A (en) * 2014-12-24 2015-04-08 中国科学院声学研究所 Management method for base station cache and user access processing method
CN106897030A (en) * 2017-02-28 2017-06-27 郑州云海信息技术有限公司 A kind of data cached management method and device
CN107291635A (en) * 2017-06-16 2017-10-24 郑州云海信息技术有限公司 A kind of buffer replacing method and device
CN108574853A (en) * 2018-04-23 2018-09-25 冼汉生 A kind of content of TV program caching method, device and computer storage media
CN108763103A (en) * 2018-05-24 2018-11-06 郑州云海信息技术有限公司 A kind of EMS memory management process, device, system and computer readable storage medium


Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112052097A (en) * 2020-10-15 2020-12-08 腾讯科技(深圳)有限公司 Rendering resource processing method, device and equipment for virtual scene and storage medium
CN112052097B (en) * 2020-10-15 2024-05-03 腾讯科技(深圳)有限公司 Virtual scene rendering resource processing method, device, equipment and storage medium
CN112632347A (en) * 2021-01-14 2021-04-09 加和(北京)信息科技有限公司 Data screening control method and device and nonvolatile storage medium
CN112632347B (en) * 2021-01-14 2024-01-23 加和(北京)信息科技有限公司 Data screening control method and device and nonvolatile storage medium
CN113012690B (en) * 2021-02-20 2023-10-10 苏州协同创新智能制造装备有限公司 Decoding method and device supporting domain customization language model
CN113012690A (en) * 2021-02-20 2021-06-22 苏州协同创新智能制造装备有限公司 Decoding method and device supporting field customized language model
CN112926206A (en) * 2021-02-25 2021-06-08 北京工业大学 Workflow engine cache elimination method based on industrial process background
CN112926206B (en) * 2021-02-25 2024-04-26 北京工业大学 Workflow engine cache elimination method based on industrial process background
CN113010551A (en) * 2021-03-02 2021-06-22 北京三快在线科技有限公司 Resource caching method and device
CN113010551B (en) * 2021-03-02 2022-05-10 北京三快在线科技有限公司 Resource caching method and device
CN113093999A (en) * 2021-05-07 2021-07-09 厦门市美亚柏科信息股份有限公司 Cache elimination method and system based on adaptive lock
CN113093999B (en) * 2021-05-07 2022-11-18 厦门市美亚柏科信息股份有限公司 Cache elimination method and system based on self-adaptive lock
CN113268440A (en) * 2021-05-26 2021-08-17 上海哔哩哔哩科技有限公司 Cache elimination method and system
CN113590031B (en) * 2021-06-30 2023-09-12 郑州云海信息技术有限公司 Cache management method, device, equipment and computer readable storage medium
CN113590031A (en) * 2021-06-30 2021-11-02 郑州云海信息技术有限公司 Cache management method, device, equipment and computer readable storage medium
CN114579269A (en) * 2022-02-08 2022-06-03 阿里巴巴(中国)有限公司 Task scheduling method and device
CN115599711A (en) * 2022-11-30 2023-01-13 苏州浪潮智能科技有限公司(Cn) Cache data processing method, system, device, equipment and computer storage medium
CN115599711B (en) * 2022-11-30 2023-03-10 苏州浪潮智能科技有限公司 Cache data processing method, system, device, equipment and computer storage medium
CN117708179A (en) * 2024-02-02 2024-03-15 成都深瑞同华科技有限公司 Method, device, equipment and medium for caching measurement point data of electric power comprehensive monitoring system
CN117708179B (en) * 2024-02-02 2024-05-03 成都深瑞同华科技有限公司 Method, device, equipment and medium for caching measurement point data of electric power comprehensive monitoring system
CN117992367A (en) * 2024-04-03 2024-05-07 华东交通大学 Variable cache replacement management method and system
CN117992367B (en) * 2024-04-03 2024-06-07 华东交通大学 Variable cache replacement management method and system

Also Published As

Publication number Publication date
CN110908612B (en) 2022-02-22

Similar Documents

Publication Publication Date Title
CN110908612B (en) Cache management method, device, equipment and storage medium
CN111249730B (en) Virtual object control method, device, equipment and readable storage medium
US20200346113A1 (en) Virtual backpack interface
CN109445662B (en) Operation control method and device for virtual object, electronic equipment and storage medium
CN109529356B (en) Battle result determining method, device and storage medium
CN110585710B (en) Interactive property control method, device, terminal and storage medium
CN108579088B (en) Method, apparatus and medium for controlling virtual object to pick up virtual article
CN110917619B (en) Interactive property control method, device, terminal and storage medium
CN111773696A (en) Virtual object display method, related device and storage medium
CN112494955B (en) Skill releasing method, device, terminal and storage medium for virtual object
CN111921197B (en) Method, device, terminal and storage medium for displaying game playback picture
CN110917623B (en) Interactive information display method, device, terminal and storage medium
CN110448908B (en) Method, device and equipment for applying sighting telescope in virtual environment and storage medium
WO2021143253A1 (en) Method and apparatus for operating virtual prop in virtual environment, device, and readable medium
CN112402961A (en) Interactive information display method and device, electronic equipment and storage medium
WO2023029836A1 (en) Virtual picture display method and apparatus, device, medium, and computer program product
CN110639205B (en) Operation response method, device, storage medium and terminal
CN113713383A (en) Throwing prop control method and device, computer equipment and storage medium
CN112354180A (en) Method, device and equipment for updating integral in virtual scene and storage medium
CN113730906B (en) Virtual game control method, device, equipment, medium and computer product
JPWO2021143259A5 (en)
CN110960849B (en) Interactive property control method, device, terminal and storage medium
CN112156454A (en) Virtual object generation method and device, terminal and readable storage medium
CN112169321B (en) Mode determination method, device, equipment and readable storage medium
JP2024518182A (en) Method and apparatus for displaying action effects, computer device, and computer program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40022642

Country of ref document: HK

GR01 Patent grant