CN115334158A - Cache management method and device, storage medium and electronic equipment - Google Patents


Info

Publication number
CN115334158A
Authority
CN
China
Prior art keywords
data
cache
space
user
cache space
Prior art date
Legal status
Pending
Application number
CN202210907778.5A
Other languages
Chinese (zh)
Inventor
杨博文
Current Assignee
Chongqing Ant Consumer Finance Co., Ltd.
Original Assignee
Chongqing Ant Consumer Finance Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Chongqing Ant Consumer Finance Co., Ltd.
Priority to CN202210907778.5A
Publication of CN115334158A
Legal status: Pending


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/30Monitoring
    • G06F11/3003Monitoring arrangements specially adapted to the computing system or computing system component being monitored
    • G06F11/302Monitoring arrangements specially adapted to the computing system or computing system component being monitored where the computing system component is a software system
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00Packet switching elements
    • H04L49/90Buffering arrangements
    • H04L49/9005Buffering arrangements using dynamic buffer space allocation

Landscapes

  • Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Quality & Reliability (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

This specification discloses a cache management method and apparatus, a storage medium, and an electronic device. The method monitors the cache fluctuation state of hot data and cold data, obtains access user distribution information based on that fluctuation state, and, based on the user distribution information, adjusts the cache space of a first data cache space for storing hot data and/or a second data cache space for storing cold data.

Description

Cache management method and device, storage medium and electronic equipment
Technical Field
The present application relates to the field of computer technologies, and in particular, to a cache management method and apparatus, a storage medium, and an electronic device.
Background
With the popularization of the internet, a surge of users onto a service platform brings an exponential increase in the volume of data the platform must serve, which presents both challenges and opportunities for managing the platform. To keep transactions between the service platform and its clients running normally, the platform adopts a data cache updating technique so that it can sustain a higher external processing rate.
Disclosure of Invention
This specification provides a cache management method and apparatus, a storage medium, and an electronic device, with the technical solution as follows:
in a first aspect, the present specification provides a cache management method, including:
monitoring a cache fluctuation state aiming at hot data and cold data, and acquiring access user distribution information based on the cache fluctuation state;
and based on the access user distribution information, carrying out cache space adjustment on a first data cache space and/or a second data cache space, wherein the first data cache space is a data storage space for caching the hot data, and the second data cache space is a data cache space for caching the cold data.
In a second aspect, the present specification provides a cache management apparatus, the apparatus comprising:
the information acquisition module is used for monitoring the cache fluctuation states of hot data and cold data and acquiring the distribution information of the access users based on the cache fluctuation states;
and the data processing module is used for carrying out cache space adjustment on a first data cache space and/or a second data cache space based on the access user distribution information, wherein the first data cache space is a data storage space for caching the hot data, and the second data cache space is a data cache space for caching the cold data.
In a third aspect, the present specification provides a computer storage medium having stored thereon a plurality of instructions adapted to be loaded by a processor and to carry out the above-mentioned method steps.
In a fourth aspect, the present specification provides an electronic device, which may comprise: a processor and a memory; wherein the memory stores a computer program adapted to be loaded by the processor and to perform the above-mentioned method steps.
The technical solutions provided by some embodiments of this specification bring at least the following beneficial effects:
in one or more embodiments of this specification, the service platform monitors the cache fluctuation states of hot data and cold data, obtains the current access user distribution information, and dynamically adjusts the cache space of the first data cache space and/or the second data cache space based on that information. Because the access user distribution information allows the fluctuation of the hot-data and cold-data cache volume to be predicted to some extent, the corresponding cache space can be adjusted in advance, which improves the cache hit rate when data is requested and reduces the cache processing pressure on the service platform.
Drawings
To illustrate the technical solutions of this specification or the prior art more clearly, the drawings needed for describing the embodiments or the prior art are briefly introduced below. The drawings described below are obviously only some embodiments of the present application, and other drawings can be obtained from them by those of ordinary skill in the art without creative effort.
Fig. 1 is a schematic view of a cache management system provided in this specification;
fig. 2 is a schematic flow chart of a cache management method provided in this specification;
FIG. 3 is a flow chart illustrating another cache management method provided in the present specification;
FIG. 4 is a flow chart illustrating another cache management method provided in the present specification;
fig. 5 is a schematic structural diagram of a cache management apparatus provided in this specification;
fig. 6 is a schematic structural diagram of an information acquisition module provided in this specification;
FIG. 7 is a block diagram of a data processing module provided in the present specification;
fig. 8 is a schematic structural diagram of another cache management apparatus provided in this specification;
fig. 9 is a schematic structural diagram of an electronic device provided in this specification;
FIG. 10 is a schematic diagram of the operating system and user space provided in this specification;
FIG. 11 is an architectural diagram of the Android operating system of FIG. 10;
FIG. 12 is an architectural diagram of the iOS operating system of FIG. 10.
Detailed Description
The technical solutions in the present specification will be clearly and completely described below with reference to the drawings in the present specification, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application without making any creative effort belong to the protection scope of the present application.
In the description of the present application, it is to be understood that the terms "first", "second", and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. It should also be noted that, unless explicitly stated or limited otherwise, "including" and "having" and any variations thereof are intended to cover non-exclusive inclusions: a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to those steps or elements, but may include other steps or elements not expressly listed or inherent to it. The specific meaning of the above terms in this application can be understood by those of ordinary skill in the art on a case-by-case basis. Further, in the description of the present application, "a plurality" means two or more unless otherwise specified. "And/or" describes the association relationship between associated objects and covers three cases: for A and/or B, A may exist alone, A and B may exist simultaneously, or B may exist alone. The character "/" generally indicates an "or" relationship between the preceding and following objects.
In the related art, cache management is an important means of improving a service platform's response speed and reducing service pressure. Generally, for cost reasons, a service platform has limited cache space. Because of that limitation, the platform typically allocates cache space to several types of cache data and then optimizes the cache according to a preset cache eviction policy. When a flood of users creates heavy cache pressure, however, this approach cannot relieve the pressure effectively.
The present application will be described in detail with reference to specific examples.
Please refer to fig. 1, which is a schematic view of a cache management system according to the present disclosure. As shown in fig. 1, the cache management system may include at least a client cluster and a service platform 100.
The client cluster may include at least one client; as shown in fig. 1, it specifically includes client 1 corresponding to user 1, client 2 corresponding to user 2, ..., and client n corresponding to user n, where n is an integer greater than 0.
Each client in the client cluster may be a communication-enabled electronic device, including but not limited to: wearable devices, handheld devices, personal computers, tablet computers, in-vehicle devices, smartphones, computing devices, and other processing devices connected to a wireless modem. Electronic devices in different networks may go by different names, such as: user equipment, access terminal, subscriber unit, subscriber station, mobile station, remote terminal, mobile device, user terminal, wireless communication device, user agent, cellular telephone, cordless telephone, personal digital assistant (PDA), or an electronic device in a 5G network or a future evolved network.
The service platform 100 may be a standalone server device, such as a rack, blade, tower, or cabinet server, or hardware with strong computing power such as a workstation or mainframe; it may also be a server cluster composed of multiple servers. The servers in such a cluster may be arranged symmetrically, each server being functionally equivalent in the transaction link and able to provide services to the outside independently, where independence means operating without the assistance of any other server.
In one or more embodiments of the present description, the service platform 100 and at least one client in the client cluster may establish a communication connection based on which interaction of service data is accomplished. The service platform 100 may provide at least one client in the client cluster with a corresponding transaction service, where the transaction service includes, but is not limited to, a consumption service, a shopping service, a financial service, a credit service, and the like, and a specific transaction service type is determined based on an actual application situation.
While the service platform 100 provides transaction services to a number of clients, scenarios such as the launch of related transaction activities or platform hot events can drive a large number of users to access the service platform 100 through the clients they hold. To better provide transaction services, the service platform 100 may introduce distributed service deployment and platform cache construction in advance; still, under the sudden client access traffic of some scenarios the access patterns of new and old users differ greatly, so a generic cache policy can hardly optimize the cache flexibly.
In one or more embodiments of this specification, the service platform 100 stores various types of service data, and clients access or query the corresponding service data based on the actual scenario. To avoid effects such as an overloaded database, degraded database response, and delayed page display when data volume grows or access becomes concentrated, the service platform 100 may employ a caching mechanism to optimize data access.
Illustratively, the caching principle of the cache mechanism may be as follows: when a user corresponding to a client requests data, the platform checks whether the requested data exists in the cache space maintained by the service platform 100; if it does, the data is returned directly without querying the database.
If the requested data cannot be found in the cache space, the service platform 100 queries the database, returns the data, and stores it in the cache.
Illustratively, the service platform 100 may maintain the "freshness" of the cache: whenever the data changes, the service platform 100 updates the cache information in the cache space synchronously, ensuring that users never fetch stale data from the cache space.
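For concreteness, the following is a minimal cache-aside sketch in Python of the mechanism described above; the names (CacheAside, db.query, db.write) are illustrative assumptions, not part of this specification:

    # Minimal cache-aside sketch; `db` is any object exposing query()/write().
    class CacheAside:
        def __init__(self, db):
            self.cache = {}  # cache space maintained by the service platform
            self.db = db     # backing database

        def get(self, key):
            # Return directly on a cache hit; otherwise query the database,
            # store the result in the cache, and return it.
            if key in self.cache:
                return self.cache[key]
            value = self.db.query(key)
            self.cache[key] = value
            return value

        def update(self, key, value):
            # Keep the cache "fresh": whenever data changes, update the
            # cached copy synchronously so stale data is never returned.
            self.db.write(key, value)
            self.cache[key] = value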
It should be noted that the service platform 100 establishes a communication connection with at least one client in the client cluster and interacts with it over a network, which may be wireless or wired. Wireless networks include but are not limited to cellular, wireless local area, infrared, and Bluetooth networks; wired networks include but are not limited to Ethernet, Universal Serial Bus (USB), and controller area networks. In one or more embodiments of this specification, data exchanged over the network (e.g., object compressed packets) is represented using techniques and/or formats including HyperText Markup Language (HTML), Extensible Markup Language (XML), and the like. All or some of the links may also be encrypted using conventional encryption techniques such as Secure Sockets Layer (SSL), Transport Layer Security (TLS), Virtual Private Network (VPN), or Internet Protocol Security (IPsec). In other embodiments, custom and/or dedicated data communication techniques may be used in place of, or in addition to, those described above.
The embodiment of the cache management system provided in this specification and the cache management method in one or more embodiments belong to the same concept, and an execution subject corresponding to the cache management method in one or more embodiments of the specification may be the service platform 100 described above; the execution subject corresponding to the cache management method according to one or more embodiments of the present disclosure may also be determined by other electronic devices, specifically based on an actual application environment. The implementation process of the embodiment of the cache management system can be seen in the following method embodiments, which are not described herein again.
Based on the scenario diagram shown in fig. 1, the cache management method provided in one or more embodiments of this specification is described in detail below.
Referring to fig. 2, a flow chart of a cache management method is provided for one or more embodiments of this specification. The method can be implemented by a computer program running on a Von Neumann architecture cache management device; the computer program may be integrated into an application or run as a standalone tool application. The cache management device may be a service platform.
Specifically, the cache management method includes:
s102: monitoring the cache fluctuation state aiming at hot data and cold data, and acquiring access user distribution information based on the cache fluctuation state;
in a practical application scenario based on a service platform, service data such as operation logs and service information are collected according to access frequency or based on access operations of users, the service data is divided into cold data and hot data according to data access heat, and then the data is stored or migrated to a corresponding storage space (such as a database) according to types of the cold data and the hot data.
According to some embodiments, a service platform generally adopts a caching mechanism to optimize data access, maintaining a cache space and a database for storing service data. The caching principle may be as follows: when a client initiates a user request for data, the service platform checks the requested data and first queries whether it exists in the maintained cache space; if so, the data is returned directly to the client without querying the database. If the requested data cannot be found in the cache space, the service platform queries the database, returns the requested data, and stores it in the cache space.
It can be understood that the service platform can maintain the "freshness" of the cache during data processing: whenever the service data changes, the platform synchronously updates the cache information in the cache space, ensuring that users never fetch stale service data from the cache space.
It can be understood that, in a practical application scenario, the service platform generally divides service data into cold data and hot data according to data heat; both the cold data and the hot data may be stored in the cache space as well as in the database.
The cache fluctuation state characterizes how the cache of the cold data and/or the hot data fluctuates as users access it, and may be characterized based on the amount of cache fluctuation of the cold data and/or the hot data.
It can be understood that the service platform may monitor the cache fluctuation amount of the cold data and/or hot data in real time or periodically to determine the cache fluctuation state; whether the fluctuation ratio of the cold and hot data is relatively large or steady, the service platform can monitor automatically and handle it by executing the cache management method of this specification.
Specifically, the service platform may monitor the cache fluctuation amount of the cold data and/or the hot data in real time or periodically to determine the cache fluctuation state, and obtain the access user distribution information based on the cache fluctuation state.
The access user distribution information feeds back, along the access-user-type dimension, how the platform's users are distributed across the different access user types. It may be a fit of one or more kinds of information, such as the number of users of each access user type, the user type distribution, and new/old user fluctuation data.
Optionally, to obtain the access user distribution information, the service platform may collect, for each user type into which the platform's access users are divided, the user access data of that type within the monitoring window time, determine the user fluctuation data of the corresponding type from that access data, and use the user fluctuation data as the access user distribution information.
Illustratively, the access user distribution information may be fluctuation data of the old and new users, such as fluctuation ratio of the old and new users, fluctuation amount of the old and new users, and the like. The service platform can acquire (within a monitoring window time) first user access data aiming at a new user type and second user access data aiming at an old user type;
the first user access data can be user access data such as user access number, user access increase number and the like corresponding to the new user type (in the monitoring window time); the second user access data may be user access data such as a user access number corresponding to an old user type (within a monitoring window time), a user access increase number, and the like.
The new user type and the old user type are user types into which the platform's access users are divided; for example, users who have accessed the platform fewer times than a count threshold (for example, once) are classified as the new user type, and users who have accessed it more times than the threshold as the old user type. The classification criteria for new and old user types can be customized to the actual application scenario and are not specifically limited here.
New and old user fluctuation data are then determined based on the first user access data and the second user access data; these fluctuation data can be new/old user fluctuation ratio parameters, such as the new user (increase/decrease) fluctuation ratio and the old user (increase/decrease) fluctuation ratio. The new and old user fluctuation data are then used as the access user distribution information.
the monitoring window time may be understood as a window time set by the service platform and used for monitoring access traffic of a user, generally, for a case of a steady traffic, the service platform dynamically sets an adjustment window time at an hour level or a minute level, and for an abrupt increase traffic (for example, the access traffic is greater than a threshold), the adjustment window time dynamically set by the service platform is smaller, for example, the adjustment window time is set at a second level.
S104: and based on the access user distribution information, carrying out cache space adjustment on a first data cache space and/or a second data cache space, wherein the first data cache space is a data storage space for caching the hot data, and the second data cache space is a data cache space for caching the cold data.
It can be understood that, in one or more embodiments of this specification, dynamically adjusting the first data cache space for caching hot data or the second data cache space for caching cold data according to the actual situation improves the utilization efficiency of the cache space maintained by the service platform, alleviates cache load, and improves the robustness of cache processing. Adjusting the cache space also improves the cache hit rate when users request data and reduces the pressure on the back-end database.
In one or more embodiments of this specification, the service platform may monitor the cache fluctuation state of hot data and cold data, dynamically obtain the current access user distribution information based on that state, and adjust the first data cache space and/or the second data cache space in advance according to the cold/hot cache changes predicted from the distribution information. On one hand, the access user distribution information can predict the fluctuation of the hot-data or cold-data cache volume to some extent. In practice, a surge of old-type users indicated by the distribution information usually means that subsequent hot data cache management will involve a large processing workload and more hot data will need to be stored, while a surge of new-type users usually means that subsequent cold data cache management will involve a large processing workload and more cold data will need to be stored. The demand for subsequent hot or cold data can therefore be predicted in advance from the access user distribution information, and the corresponding cache space adjusted ahead of time. On the other hand, adjusting the data cache space of the corresponding data avoids frequently migrating the cache locations of cold/hot data in scenarios with large access fluctuation, which would increase the time complexity of cache management and the load on the cache system; adjusting the cache space instead greatly reduces the cost of cache management to a certain extent.
In a possible implementation, the service platform analyzes the access user distribution information, determines the target adjustment cache space to be adjusted among the first data cache space and/or the second data cache space, and then adjusts that target cache space. For example, if the access user distribution information indicates that subsequent hot data management will involve a large processing workload and more hot data needs to be stored, the target adjustment cache space is the first data cache space; if it indicates that subsequent cold data management will involve a large processing workload and more cold data needs to be stored, the target adjustment cache space is the second data cache space.
Optionally, the analysis of the access user distribution information may consist of determining from it whether the subsequent hot data cache management demand or the cold data cache management demand is large, and then determining the target adjustment cache space among the first data cache space and the second data cache space.
Illustratively, it can be determined from the access user distribution information that the user distribution proportion or fluctuation of the new user type or of the old user type is large. When the new user type's proportion or fluctuation is large, the demand for subsequent cold data cache management can be judged to be large, and the second data cache space for storing cold data is enlarged; when the old user type's proportion or fluctuation is large, the demand for subsequent hot data cache management can be judged to be large, and the first data cache space for storing hot data is enlarged.
In one possible implementation, the first data cache space uses a least recently used (LRU) caching mechanism for cache eviction, and the second data cache space uses a least frequently used (LFU) caching mechanism. It can be understood that, based on the monitored cache fluctuation of the cold and hot data, applying different eviction policies to the first data cache space for hot data and the second data cache space for cold data improves the cache hit rate.
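For concreteness, a compact sketch of the two eviction policies named here — LRU for the first (hot) data cache space and LFU for the second (cold) one; the capacities and tie-breaking behavior are assumptions:

    from collections import Counter, OrderedDict

    class LRUCache:
        def __init__(self, capacity):
            self.capacity, self.data = capacity, OrderedDict()

        def put(self, key, value):
            if key in self.data:
                self.data.move_to_end(key)
            elif len(self.data) >= self.capacity:
                self.data.popitem(last=False)  # evict least recently used
            self.data[key] = value

        def get(self, key):
            if key in self.data:
                self.data.move_to_end(key)     # refresh recency on access
                return self.data[key]
            return None

    class LFUCache:
        def __init__(self, capacity):
            self.capacity, self.data, self.freq = capacity, {}, Counter()

        def put(self, key, value):
            if key not in self.data and len(self.data) >= self.capacity:
                victim = min(self.data, key=lambda k: self.freq[k])
                del self.data[victim]          # evict least frequently used
                del self.freq[victim]
            self.data[key] = value
            self.freq[key] += 1

        def get(self, key):
            if key in self.data:
                self.freq[key] += 1
                return self.data[key]
            return None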
In one or more embodiments of this specification, the service platform monitors the cache fluctuation states of hot data and cold data, obtains the current access user distribution information, and dynamically adjusts the cache space of the first data cache space and/or the second data cache space based on that information. Because the access user distribution information can predict the fluctuation of the hot/cold data cache volume to some extent, the corresponding cache space can be adjusted in advance, improving the cache hit rate when data is requested and reducing the processing pressure on the service platform. Avoiding migration of cached data between cache spaces also reduces the time complexity of cache management and the load on the cache system, saving cache management cost.
Referring to fig. 3, fig. 3 is a schematic flowchart of another embodiment of a cache management method according to one or more embodiments of the present disclosure. Specifically, the method comprises the following steps:
s202: monitoring a cache fluctuation ratio for hot data and cold data;
the cache fluctuation ratio may be one or more of a hot data cache fluctuation (increase/decrease) ratio, a cold data cache fluctuation (increase/decrease) ratio, a hot data cache and cold data cache fluctuation ratio, and the like, which are fitted to the monitored cycle.
It can be understood that the service platform may monitor the cache fluctuation amount of the cold data and/or hot data in real time or periodically to determine the cache fluctuation ratio; whether the cache fluctuation ratio is large or steady, the service platform can monitor automatically and handle it by executing the cache management method of this specification.
S204: if the cache fluctuation ratio is larger than or equal to the fluctuation ratio threshold, acquiring access user distribution information;
the fluctuation ratio threshold may be understood as a threshold value or a critical value set for the cache fluctuation ratio, and different types of cache fluctuation ratios may set corresponding fluctuation ratio threshold values respectively. The fluctuation proportion threshold value is used for monitoring the current access user distribution condition of the service platform to predict or evaluate the cold and hot data cache change trend when the cache fluctuation proportion indication reaches the critical value.
The access user distribution information may be a fit of one or more kinds of information, such as the number of users of each access user type, the user type distribution, and new/old user fluctuation data.
In one or more embodiments of this specification, the access user distribution information may be new/old user fluctuation data, such as the new/old user fluctuation ratio and the new/old user fluctuation amount. The service platform may obtain (within a monitoring window time) first user access data for the new user type and second user access data for the old user type, and then determine the new/old user fluctuation data based on the two; these fluctuation data are the access user distribution information and can feed back the change in the distribution of user access volume between the new and old user types.
Optionally, if the cache fluctuation ratio is less than the fluctuation ratio threshold, the service platform may skip this processing.
S206: determining a target cache adjusting mode based on the access user distribution information;
the target cache adjusting mode at least comprises a cache dynamic adjusting mode and a cache period adjusting mode based on actual application conditions.
In one or more embodiments of this specification, the cache dynamic adjustment mode is generally suited to adjusting the cache space in a state of severe fluctuation: in this mode, the cache space of the first data cache space and/or the second data cache space is dynamically adjusted based on the access user distribution information, such as new/old user fluctuation data, within the monitoring window time.
In one or more embodiments of this specification, the cache period adjustment mode is generally suited to adjusting the cache space in a state of steady fluctuation: based on a set cycle time, the access user distribution information such as new/old user fluctuation data is evaluated at fixed periodic instants, and the cache space of the first data cache space and/or the second data cache space is adjusted accordingly.
In a feasible implementation, taking the access user distribution information as new/old user fluctuation data as an example, the service platform determining the target cache adjustment mode based on the access user distribution information may specifically be:
the service platform may determine the user traffic fluctuation state for the platform based on the new/old user fluctuation data; based on the actual application conditions, the user traffic fluctuation state includes at least a severe fluctuation state and a steady fluctuation state.
Illustratively, fluctuation threshold data can be set for the new/old user fluctuation data: when the fluctuation data meet the threshold data, the user traffic fluctuation state is considered the steady fluctuation state; when they do not, it is considered the severe fluctuation state.
For example, the new/old user fluctuation data may be new/old user fluctuation ratio parameters, such as the new user (increase/decrease) fluctuation ratio and the old user (increase/decrease) fluctuation ratio. The fluctuation threshold data may be a new user (increase/decrease) fluctuation threshold set for the new user (increase/decrease) fluctuation ratio, and an old user (increase/decrease) fluctuation threshold set for the old user (increase/decrease) fluctuation ratio.
If the user traffic fluctuation state is the severe fluctuation state, the service platform determines the target cache adjustment mode to be the cache dynamic adjustment mode;
if the user traffic fluctuation state is the steady fluctuation state, the service platform determines the target cache adjustment mode to be the cache period adjustment mode.
In one possible embodiment, the new/old user fluctuation data include the user growth amount and the new/old user proportion: the user growth amount is the growth of the new user type or the old user type within the monitoring window time, and the new/old user proportion is the ratio of the new user type or the old user type to the total number of users. The service platform can judge the user traffic fluctuation state jointly from the user growth amount and the new/old user proportion.
Illustratively, the service platform may compare the user growth amount with the increment threshold and the new/old user proportion with the user proportion threshold. The increment threshold is a threshold set for the user growth amount, and the user proportion threshold is a threshold set for the new/old user proportion.
If the user growth amount is greater than the increment threshold and the new/old user proportion is greater than the user proportion threshold, the user traffic fluctuation state for the service platform is determined to be the severe fluctuation state; otherwise, it is determined to be the steady fluctuation state.
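A sketch of this joint judgment under stated assumptions; the threshold values are illustrative, not taken from this specification:

    INCREMENT_THRESHOLD = 5000    # user growth threshold within the window
    PROPORTION_THRESHOLD = 0.30   # new/old user proportion threshold

    def traffic_fluctuation_state(user_growth, new_old_proportion):
        # Severe fluctuation only when both conditions hold; otherwise steady.
        if user_growth > INCREMENT_THRESHOLD and new_old_proportion > PROPORTION_THRESHOLD:
            return 'severe'  # -> cache dynamic adjustment mode
        return 'steady'      # -> cache period adjustment mode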
S208: adjust the cache space of the first data cache space and/or the second data cache space using the target cache adjustment mode.
In a possible implementation, take the target cache adjustment mode being the cache period adjustment mode as an example; this mode is generally suited to adjusting the cache space in a steady fluctuation state. Based on a set cycle time, the access user distribution information such as new/old user fluctuation data is evaluated at fixed periodic instants, and the cache space of the first data cache space and/or the second data cache space is finely adjusted. In the steady fluctuation state, the first and second data cache spaces can, with high probability, already meet the current cold/hot data cache requirements, so the service platform only needs the access user distribution information or the current cold/hot data fluctuation parameters (such as the cold/hot data fluctuation range and fluctuation amount) to adaptively fine-tune the two spaces, with the fine-tuned adjustment range smaller than a set fine-tuning threshold.
Illustratively, in the cache period adjustment mode, the space adjustment range the service platform fine-tunes is usually smaller than in the cache dynamic adjustment mode.
In a feasible implementation, take the target cache adjustment mode being the cache dynamic adjustment mode as an example; this mode is generally suited to adjusting the cache space in a severe fluctuation state. The service platform adjusting the cache space of the first data cache space and/or the second data cache space in this mode may proceed as follows:
1. The service platform can determine a sliding window adjustment region and a target adjustment cache space using the cache dynamic adjustment mode.
The sliding window adjustment region is a cache region allocated to whichever cache space (the first or the second data cache space) is determined to need adjustment; it can be understood as being assigned to the cache space that needs adjusting.
The target adjustment cache space is the cache space that needs to be adjusted, i.e., one of the first data cache space and the second data cache space. If the cold data cache demand is higher than the hot data cache demand, the target adjustment cache space is usually the second data cache space, that is, the determined sliding window adjustment region is allocated to the second data cache space; if the hot data cache demand is higher than the cold data cache demand, the target is usually the first data cache space, and the region is allocated to it.
Optionally, a cache adjustment ratio may be set for each round of dynamic cache adjustment, and the sliding window adjustment region is determined according to that ratio. If the cache adjustment ratio is, for example, 10%, then 10% of the available cache space (e.g., idle cache space) is taken as the sliding window adjustment region and allocated to the target adjustment cache space in a manner similar to a sliding window; in practice, the sliding window adjustment region is then associated with the target adjustment cache space.
Illustratively, a reference cache adjustment ratio may be set, and each dynamic cache adjustment is performed according to it.
Illustratively, the cache dynamic adjustment mode usually involves multiple rounds of adjustment during monitoring. After the first round determines the sliding window adjustment region according to the reference cache adjustment ratio and allocates it to the target adjustment cache space, the new/old user fluctuation data or the cold/hot data fluctuation parameters can be monitored to gauge the severity of the fluctuation state: for example, a fluctuation factor can be evaluated from those parameters and the reference cache adjustment ratio adjusted based on it; in the next round of cache space adjustment, the next sliding window adjustment region is determined from the adjusted ratio, and the next round's target adjustment cache space is adjusted.
In a possible embodiment, the cache adjustment ratio may be dynamic, and the service platform may dynamically determine the cache adjustment ratio based on the monitored access user distribution information in real time, determine the sliding window adjustment region based on the cache adjustment ratio, e.g., determine the sliding window adjustment region from an available cache space (e.g., an idle cache space) based on the cache adjustment ratio.
Illustratively, the service platform may monitor at least one monitoring parameter in the user access distribution information in real time, for example, the monitoring parameter may be one or more of a fluctuation ratio of new and old users, a fluctuation quantity of new and old users, a user quantity of a plurality of access user types, a user type distribution condition parameter, and the like, and may determine a cache adjustment ratio based on a weighted evaluation value by weighting each type of monitoring parameter, for example, directly taking the evaluation value as the cache adjustment ratio.
Illustratively, a plurality of reference evaluation ranges may be set, each reference evaluation range corresponds to a reference buffer adjustment ratio, and the reference buffer adjustment ratio corresponding to the reference evaluation range is obtained as the buffer adjustment ratio by determining the reference evaluation range in which the evaluation value falls.
2. Perform cache space adjustment on the target adjustment cache space based on the sliding window adjustment region, where the target adjustment cache space is at least one of the first data cache space and the second data cache space.
Specifically, after determining the sliding window adjustment region from the allocable cache space, the service platform associates the region with the target adjustment cache space, that is, allocates it to that space.
In a specific implementation scenario, the total cache space of the service platform may consist of the first data cache space and the second data cache space; that is, the service platform divides the total cache space in advance into the first data cache space for caching hot data and the second data cache space for caching cold data. The cache dynamic adjustment mode then works like a sliding window: for example, if it is decided that more of the first data cache space is currently needed, making it the target adjustment cache space, a sliding window adjustment region can be selected from the second data cache space and assigned to the target adjustment cache space. The whole adjustment avoids increasing the time complexity of cache management and the load on the cache system; adjusting the cache space in this corresponding way greatly reduces the cost of cache management to a certain extent and improves the utilization efficiency of the cache space.
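The sketch below combines the two steps above under stated assumptions: a cache adjustment ratio is looked up from illustrative reference evaluation ranges, and a sliding window adjustment region of that proportion is carved out of the other data cache space and associated with the target adjustment cache space. The range boundaries, ratios, and sizes are all assumptions:

    # (upper bound of reference evaluation range, reference cache adjustment ratio)
    EVAL_RANGES = [(0.2, 0.05), (0.5, 0.10), (0.8, 0.20), (1.0, 0.30)]

    def cache_adjustment_ratio(evaluation):
        # Map a weighted evaluation value in [0, 1] onto a reference ratio.
        for upper, ratio in EVAL_RANGES:
            if evaluation <= upper:
                return ratio
        return EVAL_RANGES[-1][1]

    def slide_window(spaces, target, evaluation):
        # Move a window sized from the other space's capacity to the target.
        other = 'cold' if target == 'hot' else 'hot'
        window = int(spaces[other] * cache_adjustment_ratio(evaluation))
        spaces[other] -= window   # sliding window adjustment region
        spaces[target] += window  # associate the region with the target space
        return spaces

    spaces = {'hot': 600, 'cold': 400}       # total cache split between the two
    print(slide_window(spaces, 'hot', 0.6))  # ratio 0.20 -> {'hot': 680, 'cold': 320}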
Further, the service platform determining the sliding window adjustment region and the target adjustment cache space in the cache dynamic adjustment mode, and adjusting the target cache space based on that region, may proceed as follows:
A2: the service platform may acquire the new/old user fluctuation data and/or the cold/hot data fluctuation parameters, and determine the target adjustment cache space from the first data cache space and the second data cache space based on them.
It can be understood that the demand for subsequent cold/hot data cache management can be determined from the new/old user fluctuation data and/or the cold/hot data fluctuation parameters, and the target adjustment cache space then chosen from the first and second data cache spaces based on that demand. For example, it can be determined from the access user distribution information that the user distribution proportion or fluctuation of the new user type or the old user type is large. Illustratively, when the new user type's proportion or fluctuation is large, the demand for subsequent cold data cache management can be judged to be large, and the second data cache space for storing cold data is enlarged; when the old user type's proportion or fluctuation is large, the demand for subsequent hot data cache management can be judged to be large, and the first data cache space for storing hot data is enlarged.
A4: the service platform may determine a first cache adjustment ratio for the third data cache space, where the third data cache space is whichever of the first data cache space and the second data cache space is not the target adjustment cache space.
It can be understood that the third data cache space is the allocable cache space; if the cache adjustment ratio is, for example, 10%, then 10% of the allocable third data cache space is determined as the sliding window adjustment region.
Optionally, the first cache adjustment ratio may be a preset reference cache adjustment ratio, in which case each dynamic cache adjustment is performed according to that reference ratio.
Optionally, the first cache adjustment ratio may be dynamic, and the service platform may dynamically determine the first cache adjustment ratio based on the monitored access user distribution information in real time, and determine the sliding window adjustment area based on the first cache adjustment ratio, for example, determine the sliding window adjustment area from the third data cache space that may be allocated based on the first cache adjustment ratio. For the specific dynamic determination of the first cache adjustment ratio, the explanation of the dynamic determination of the cache adjustment ratio may be referred to, and details are not repeated herein.
A6: the service platform, after determining the first cache adjustment ratio, may select a first sliding window adjustment region from the third data cache space based on the first cache adjustment ratio, associate the first sliding window adjustment region with the target adjustment cache space,
it will be appreciated that the first sliding window adjustment region is typically not a buffer region in which cold/hot data has been buffered.
In one or more embodiments of this specification, a lower-limit ratio relative to the total cache space may be set for the first cache space and the second cache space, and the share of each of the two spaces must remain greater than or equal to that lower limit.
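Extending the earlier sketch, steps A2 to A6 can be expressed with the lower-limit constraint from the preceding paragraph added; the total size and the floor ratio are assumptions:

    TOTAL = 1000        # total cache space (units)
    LOWER_LIMIT = 0.20  # each space must keep at least 20% of the total

    def borrow_first_window(spaces, target, first_adjust_ratio):
        # The third data cache space is whichever space is not being grown.
        other = 'cold' if target == 'hot' else 'hot'
        window = int(spaces[other] * first_adjust_ratio)
        # Shrink the window if it would push the other space below its floor.
        floor = int(TOTAL * LOWER_LIMIT)
        window = min(window, max(0, spaces[other] - floor))
        spaces[other] -= window
        spaces[target] += window  # first sliding window adjustment region
        return window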
Optionally, for the cache spaces caching cold and hot data, each type of cache space generally uses the same cache eviction mechanism to update the cold/hot data it holds. In a severe fluctuation state, however, this approach can hardly adapt to the current scenario; on that basis, the cache eviction mechanism of the first data cache space and/or the second data cache space is adjusted according to the current target cache adjustment mode.
Illustratively, each type of cache space generally employs a cache eviction mechanism for cache management. In one or more embodiments of this specification, if the target cache adjustment mode is the cache dynamic adjustment mode, different cache eviction mechanisms may be employed for the first data cache space and the second data cache space; if it is the cache period adjustment mode, separate eviction mechanisms need not be enabled, and the original eviction mechanism can continue to manage the cache.
In one or more embodiments of the present specification, in consideration of the influence of the new and old user types on the subsequent cold and hot data caching requirement, if the target caching adjustment mode is a dynamic caching adjustment mode, the first data caching space may be cache-evicted by using a least recently used caching mechanism (LRU mechanism), and the second data caching space may be cache-evicted by using a least frequently used caching mechanism (LFU mechanism), so that the subsequent cold and hot data may be efficiently updated in combination with the cold and hot data caching requirement.
In a specific implementation scenario, the total cache space of the service platform may consist of the first data cache space, the second data cache space, and a reserved cache space. By setting aside a reserved cache space, the data cache type of a region can be adjusted dynamically and the corresponding data cache eviction policy updated in real time according to the cold/hot data cache demand.
In the cache dynamic adjustment mode, all or part of the reserved cache space is selected as the sliding window adjustment region in a sliding-window-like manner. For example, if it is decided that more of the first data cache space is currently needed, making it the target adjustment cache space, the sliding window adjustment region can be selected from the reserved cache space and assigned to the target adjustment cache space. The whole adjustment avoids increasing the time complexity of cache management and the load on the cache system; adjusting the cache space in this corresponding way greatly reduces the cost of cache management to a certain extent and improves the utilization efficiency of the cache space.
Further, the service platform determining the sliding window adjustment region and the target adjustment cache space in the cache dynamic adjustment mode, and adjusting the target cache space based on that region, may proceed as follows:
B2: the service platform may acquire the new/old user fluctuation data and/or the cold/hot data fluctuation parameters, and determine the target adjustment cache space from the first data cache space and the second data cache space based on them;
B4: acquire the reserved cache space, and determine a second cache adjustment ratio for the reserved cache space, where the reserved cache space is the data cache space other than the first data cache space and the second data cache space;
it can be understood that, the reserved buffer space is used as the allocable buffer space, and if the buffer adjustment ratio may be 10%, 10% of the reserved buffer space that can be allocated is determined as the sliding window adjustment area.
Optionally, the second cache adjustment ratio may be a preset reference cache adjustment ratio, in which case each dynamic cache adjustment is performed according to that reference ratio.
Optionally, the second cache adjustment ratio may be dynamic, and the service platform may dynamically determine the second cache adjustment ratio based on the monitored access user distribution information in real time, and determine the sliding window adjustment area based on the second cache adjustment ratio, for example, determine the sliding window adjustment area from the reserved cache space that may be allocated based on the second cache adjustment ratio. For the specific dynamic determination of the second cache adjustment ratio, the explanation of the dynamic determination of the cache adjustment ratio may be referred to, and details are not repeated herein.
Optionally, the service platform may keep monitoring, and when it detects that the user traffic fluctuation state has changed from the severe fluctuation state back to the steady fluctuation state, it may release the previously determined sliding window adjustment region, that is, reset it as a cache region belonging to the reserved cache space.
B6: select a second sliding window adjustment region from the reserved cache space based on the second cache adjustment ratio, and associate the second sliding window adjustment region with the target adjustment cache space.
It will be appreciated that the second sliding window adjustment region is not typically a buffer region where cold/hot data has already been buffered.
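A sketch of this reserved-space variant (steps B2 to B6), including the release step described above; the class name, sizes, and bookkeeping are assumptions:

    class ReservedCachePool:
        def __init__(self, reserved):
            self.reserved = reserved  # reserved cache space (units)
            self.loaned = {}          # target space name -> loaned window size

        def allocate(self, target, second_adjust_ratio):
            # Carve a second sliding window adjustment region out of the
            # reserved space and associate it with the target space.
            window = int(self.reserved * second_adjust_ratio)
            self.reserved -= window
            self.loaned[target] = self.loaned.get(target, 0) + window
            return window

        def release_all(self):
            # When traffic fluctuation returns to the steady state, reset
            # every loaned window back into the reserved cache space.
            self.reserved += sum(self.loaned.values())
            self.loaned.clear()

    pool = ReservedCachePool(reserved=200)
    pool.allocate('hot', 0.10)  # loans 20 units to the first data cache space
    pool.release_all()          # reserved back to 200 once fluctuation is steady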
In one or more embodiments of this specification, the service platform monitors the cache fluctuation states of hot data and cold data, obtains the current access user distribution information, and dynamically adjusts the cache space of the first data cache space and/or the second data cache space based on that information. The fluctuation of the hot/cold data cache volume can be predicted to some extent from the access user distribution information, so the corresponding cache space can be adjusted in advance, improving the cache hit rate when data is requested and reducing the processing pressure on the service platform. Avoiding migration of cached data between cache spaces reduces the time complexity of cache management and the load on the cache system, saving cache management cost; and in a severe fluctuation state, the platform can automatically and accurately monitor and decide on a suitable cache eviction mechanism for cache management.
Referring to fig. 3, fig. 3 is a schematic flowchart of another embodiment of a cache management method according to one or more embodiments of the present disclosure. Specifically, the method comprises the following steps:
S302: monitoring the cache fluctuation states of hot data and cold data, and acquiring access user distribution information based on the cache fluctuation states;
S304: performing cache space adjustment on the first data cache space and/or the second data cache space based on the access user distribution information;
S306: acquiring first cache data to be stored for the first data cache space and/or the second data cache space;
The first cache data may be understood as cache data to be stored by the service platform; it may be of the hot data type, to be stored in the first data cache space, or of the cold data type, to be stored in the second data cache space.
Illustratively, a service platform generally adopts a cache mechanism to optimize data access. The service platform may maintain both a cache space and a database storing service data. When a client initiates a user request for corresponding data, the service platform first queries whether the requested data exists in the maintained cache spaces (the first cache space and the second cache space); if the requested data cannot be found in the cache spaces, the service platform queries the database and returns the requested data. It can be understood that the service platform may then store the requested data into the cache spaces (the first cache space and the second cache space), and this requested data to be stored is the first cache data to be stored in the cache spaces.
Illustratively, the service platform may synchronously update the cache information data in the cache space whenever the service data changes, so as to ensure that users do not fetch stale cache information data from the cache space; the cache information data to be synchronously updated is likewise the first cache data to be stored in the cache spaces (the first cache space and the second cache space), replacing the old cache information data.
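The read path and write path just described can be summarized in a short sketch. The cache and database interfaces (get, put, contains, query, update) are assumptions for illustration only, not a real API:

```python
# Hedged sketch of the read-through and write-through flow described above.
def handle_user_request(key, hot_cache, cold_cache, database):
    # Query the maintained cache spaces first (first space, then second).
    value = hot_cache.get(key)
    if value is None:
        value = cold_cache.get(key)
    if value is not None:
        return value  # cache hit
    # Cache miss: fall back to the database, then store the result; this
    # result is the "first cache data" to be stored in the cache spaces.
    value = database.query(key)
    hot_cache.put(key, value)  # routed by hot/cold data type in practice
    return value

def on_service_data_change(key, new_value, hot_cache, cold_cache, database):
    # Update the database and synchronously refresh the cached entry so
    # that users never fetch stale cached data.
    database.update(key, new_value)
    for cache in (hot_cache, cold_cache):
        if cache.contains(key):
            cache.put(key, new_value)
```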
S308: performing target coding processing on the first cache data to obtain second cache coded data; the second cache coding data is cache coding data which does not contain key element data in the key value pair.
In the related art, when first cache data to be stored is written into a cache space, it is usually encoded and stored as key-value pairs (key-value), where the key is the keyword, the value is its value, and key-value is an encoding form that stores data as key-value pairs.
Illustratively, taking the first cache data as user information: when the first cache data is stored in the cache space, the user information is stored as "user index + user information", and the user information is usually encoded in key-value form, such as the common JSON format; the user index is the id of the user, and the user information is, for example, {"name": "Zhang San", "years": 20}.
In one or more embodiments of the present specification, the first cache data is not cached using the aforementioned key-value encoding; instead, a target encoding manner is used, so that the second cache encoded data after target encoding does not contain the key element data of the key-value pairs.
Illustratively, taking the user information {"name": "Zhang San", "years": 20} as an example, after the target encoding process only the values of the object, "Zhang San" and 20, are cached, while the keys of the object, "name" and "years", are not cached. That is, the second cache encoded data does not contain the key element data (the keys) of the key-value pairs.
In a feasible implementation manner, the service platform performs the target encoding processing on the first cache data to obtain second cache encoded data, which may specifically be:
and C2, determining first cache key information corresponding to the first cache data, wherein the first cache key information comprises key element data and value element data generated based on a key-value pair coding mode.
The first cache key information may be understood as cache information encoded in a key-value pair encoding manner, and taking the first cache data as user information as an example, the "user information" in the "user index + user information" of the first cache data is also the first cache key information.
The first cache key information is generated based on a key-value pair encoding mode and comprises key element data and value element data. Data { "name": zhang three "," years ":20 }: "name" and "years" are key element data, and "zhang san" and 20 are value element data.
C4: determining a separator for the key element data, and performing encoding processing based on the value element data and the separator to obtain the second cache data.
It can be understood that the key element data can be replaced by separators, and a separator occupies less space than the key element data, so the space occupied by the cache encoding can be greatly reduced.
Optionally, the separator may serve only to distinguish value element data of different types, where the value element data before and after a separator are values corresponding to key elements of different types. For the data {"name": "Zhang San", "years": 20}, "Zhang San##20" can then be output after target encoding, where "##" is the separator.
Optionally, the separator may be associated with the key element; that is, a mapping relationship between key elements of different categories and their corresponding reference separators is preset, and the separator for given key element data can be determined based on this mapping relationship.
It is to be understood that after the separator for the key element data is determined, encoding processing is performed based on the value element data and the separator, replacing the key element data with the separator; the second cache data may then be generated from the separator and value element data after replacement. Illustratively, taking the data {"name": "Liquan", "years": 30} as an example, the key elements "name" and "years" are replaced by the separator, and the obtained second cache data may be "Liquan##30".
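A minimal sketch of the encoding in steps C2 and C4 follows, assuming a fixed field order known to both encoder and decoder and the "##" separator from the example above; both assumptions are illustrative only, since the patent also allows per-key separators:

```python
# Illustrative target encoding per steps C2/C4; the fixed field order and
# the single "##" separator are assumptions for this sketch.
FIELD_ORDER = ["name", "years"]   # hypothetical schema shared with the decoder
SEPARATOR = "##"

def target_encode(record: dict) -> str:
    # C2: take the value element data in the fixed key order, dropping the
    # key element data ("name", "years") entirely.
    values = [str(record[key]) for key in FIELD_ORDER]
    # C4: join the values with the separator, e.g.
    # {"name": "Zhang San", "years": 20} -> "Zhang San##20".
    return SEPARATOR.join(values)
```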
S310: performing cache storage on the second cache data.
It can be understood that the second cache data is cached into the data cache space corresponding to its data type. If the data type corresponding to the second cache data is the hot data type, the second cache data is cached into the first data cache space. This method is applicable to service scenarios with limited cache space and can effectively relieve the high load pressure on the service platform.
Illustratively, the data type corresponding to the second cache data is the same as the data type corresponding to the first cache data.
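A one-line routing sketch for S310, reusing the assumed cache interfaces from the earlier sketch:

```python
# Route the encoded entry to the cache space matching its data type:
# hot data to the first data cache space, cold data to the second.
def store_encoded(key, encoded, data_type, hot_cache, cold_cache):
    target = hot_cache if data_type == "hot" else cold_cache
    target.put(key, encoded)
```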
S312: acquiring the second cache data in response to a data query request from a client;
The data query request is initiated by the client to the service platform. The service platform provides corresponding transaction services externally, and a user of the client can initiate a data query request to the service platform from the client based on actual needs, so as to acquire the corresponding service data.
It can be understood that when the service platform receives a data query request from the client, it responds by first querying whether the requested data indicated by the data query request exists in the maintained cache spaces (the first cache space and the second cache space); if so, it obtains the requested data from the cache spaces as the queried second cache data.
Illustratively, the service platform may provide a near-end service externally. Generally, the near-end service is an SDK (software development kit) installed in the client; a data query request initiated by the client is processed cooperatively by the near-end service and the service platform, and the service platform may receive the data query request via the near-end service and obtain the second cache data.
S314: performing target decoding processing on the second cache data to obtain third cache data, and sending the third cache data to the client;
the target decoding process can be understood as the inverse of the target encoding process.
In a possible implementation manner, performing the target decoding processing on the second cache data to obtain the third cache data includes:
determining user index information and second user information corresponding to the second cache data, wherein the second user information includes value element data and separators;
and determining key element data based on the value element data and the separators, and performing encoding processing based on the user index information, the value element data, and the key element data to obtain the third cache encoded data.
Specifically, the service platform performing the target decoding processing on the second cache data to obtain the third cache data may be:
D2: determining second cache key information corresponding to the second cache data, wherein the second cache key information includes value element data and separators;
In practical applications, there are multiple pieces of second cache data, and cache data may generally consist of index information and cache key information based on actual transaction requirements, where the index information is used to index the cache data requested for query among the multiple pieces of cache data. Based on this, the second cache key information can be extracted from the second cache data. The second cache key information is the data obtained after the target encoding processing, and it includes value element data and separators.
D4: determining key element data based on the value element data and/or the separators, and performing encoding processing based on the value element data and the key element data to obtain the third cache data, wherein the third cache data is cache encoded data that contains the key element data of the key-value pairs.
It is to be understood that the data type of the value element data may be determined, and the key element data determined according to that data type, since the data type of the value element data is strongly correlated with the key element data. For example, if the data type of the value element data "Zhang San" is a name, the key element data is the "name" key element.
It is to be understood that the key element data may also be determined according to the separator: if different separators are used for different key elements during target encoding, the key element data corresponding to a separator can be determined based on the mapping relationship between the separators and their corresponding key elements.
Optionally, the key element data may be determined based on the value element data and the separator simultaneously; that is, data fitting may be performed on the key element data obtained in the two ways to produce the final key element data.
Further, after the key element data is determined, encoding processing is performed based on the value element data and the key element data. In one way, the third cache data is generated directly by combining the value element data and the key element data; in another way, the separators in the second cache key information are replaced with the key element data, and the third cache data is generated from the data after replacement.
Further, after the service platform generates the third cache data, the third cache data may be sent to the client.
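A sketch of the decoding in steps D2 and D4, the inverse of the encoding sketch above; it assumes the same hypothetical FIELD_ORDER and SEPARATOR schema, which stands in for whichever separator-to-key mapping an implementation actually uses:

```python
FIELD_ORDER = ["name", "years"]   # same assumed schema as the encoder
SEPARATOR = "##"

def target_decode(encoded: str) -> dict:
    # D2: the second cache key information is value element data joined
    # by separators.
    values = encoded.split(SEPARATOR)
    # D4: recover the key element data from the shared schema and rebuild
    # the key-value pairs, e.g.
    # "Zhang San##20" -> {"name": "Zhang San", "years": 20}.
    record = dict(zip(FIELD_ORDER, values))
    if "years" in record:
        record["years"] = int(record["years"])  # restore the numeric type
    return record
```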
S316: sending the second cache data to the client so that the client performs target decoding processing on the second cache data to obtain third cache data.
It can be understood that, in response to the client's data query request, the service platform may choose not to decode locally after acquiring the second cache data, so as to reduce the cache management pressure on the service platform. Instead, it directly sends the second cache data to the client, thereby instructing the client to perform the target decoding processing on the second cache data to obtain the third cache data.
In one or more embodiments of the present specification, the process by which the client performs the target decoding on the second cache data to obtain the third cache data is the same as the process by which the service platform does so; the only difference is the executing entity, and details are not repeated here.
In a specific implementation scenario, the service platform may provide a near-end service externally; the near-end service may be used for decoding to relieve the pressure on the service platform, and the data query request is sent directly to the near-end service. When the cache query hits, the near-end service performs the target decoding processing on the second cache data fed back by the service platform; for example, if the second cache key information in the second cache data is "Zhang San##20", it is decoded and restored to {"name": "Zhang San", "years": 20}.
In one or more embodiments of the present description, the service platform monitors the cache fluctuation states of hot data and cold data to obtain current access user distribution information, and dynamically adjusts the cache space of the first data cache space and/or the second data cache space based on that information. Since fluctuation in the cache amount of hot and cold data can be predicted to a certain extent from the access user distribution information, the corresponding cache space can be adjusted in advance, improving the cache hit rate when data is requested and reducing the processing pressure on the service platform. Migrating cached data between cache spaces is avoided, so the time complexity of cache management and the load on the cache system can be reduced, saving cache management cost. In a severe fluctuation state, the platform can automatically and accurately monitor and decide on a suitable cache elimination mechanism for cache management. And when cache space is insufficient, the encoding and decoding allow more data to be cached in the limited cache space, improving the resource utilization of the cache space.
The following describes in detail the cache management apparatus provided in this specification with reference to fig. 5. It should be noted that the cache management apparatus shown in fig. 5 is used to execute the methods of the embodiments shown in fig. 1 to 4 of the present application; for convenience of description, only the portions relevant to this specification are shown, and for the undisclosed technical details, please refer to the embodiments shown in fig. 1 to 4 of the present application.
Please refer to fig. 5, which shows a schematic structural diagram of the cache management apparatus of the present specification. The cache management apparatus 1 may be implemented as all or a part of a user terminal by software, hardware, or a combination of both. According to some embodiments, the cache management apparatus 1 includes an information obtaining module 11 and a data processing module 12, and is specifically configured to:
the information acquisition module 11 is configured to monitor a cache fluctuation state for hot data and cold data, and acquire access user distribution information based on the cache fluctuation state;
and the data processing module 12 is configured to perform cache space adjustment on a first data cache space and/or a second data cache space based on the access user distribution information, where the first data cache space is a data storage space for caching the hot data, and the second data cache space is a data cache space for caching the cold data.
Optionally, as shown in fig. 6, the information obtaining module 11 includes:
a data monitoring unit 111 for monitoring a cache fluctuation ratio for hot data and cold data;
an information obtaining unit 112, configured to obtain the access user distribution information if the cache fluctuation ratio is greater than or equal to a fluctuation ratio threshold.
Optionally, the information obtaining module 11 is specifically configured to:
acquiring first user access data aiming at a new user type and second user access data aiming at an old user type;
determining new and old user fluctuation data based on the first user access data and the second user access data, and taking the new and old user fluctuation data as access user distribution information;
the new user type and the old user type are user types divided by access users aiming at the service platform.
Optionally, as shown in fig. 7, the data processing module 12 includes:
a mode determining unit 121, configured to determine a target cache adjustment mode based on the access user distribution information;
the space adjusting unit 122 is configured to perform cache space adjustment on the first data cache space and/or the second data cache space in the target cache adjusting manner.
Optionally, the access user distribution information is fluctuation data of old and new users, and the mode determining unit 121 is specifically configured to:
determining a user traffic fluctuation state for the service platform based on the new and old user fluctuation data;
if the user flow fluctuation state is a severe fluctuation state, determining that a target cache regulation mode is a cache dynamic regulation mode;
and if the user flow fluctuation state is a fluctuation stable state, determining that the target cache adjusting mode is a cache period adjusting mode.
Optionally, the mode determining unit 121 is specifically configured to:
the new and old user fluctuation data comprises a user growth amount and a proportion of new to old users;
and if the user growth amount is greater than an increment threshold and the proportion of new to old users is greater than a user proportion threshold, determining that the user traffic fluctuation state for the service platform is the severe fluctuation state.
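For concreteness, a hedged sketch of this decision follows; both threshold values are hypothetical tuning parameters, not values given in the patent:

```python
INCREMENT_THRESHOLD = 10_000   # new users per monitoring window (assumed)
USER_RATIO_THRESHOLD = 0.5     # new-to-old user proportion (assumed)

def traffic_fluctuation_state(user_growth: int, new_old_ratio: float) -> str:
    # Severe fluctuation triggers the dynamic cache adjustment mode;
    # otherwise the periodic cache adjustment mode is used.
    if user_growth > INCREMENT_THRESHOLD and new_old_ratio > USER_RATIO_THRESHOLD:
        return "severe"
    return "steady"
```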
Optionally, the target cache adjusting mode is a cache dynamic adjusting mode, and the space adjusting unit 122 is specifically configured to:
and determining a sliding window adjusting area and a target adjusting cache space by adopting the cache dynamic adjusting mode, and adjusting the cache space of the target adjusting cache space based on the sliding window adjusting area, wherein the target adjusting cache space is at least one of the first data cache space and the second data cache space.
Optionally, the space adjusting unit 122 is specifically configured to:
and determining a cache adjusting ratio based on the access user distribution information, and determining a sliding window adjusting area based on the cache adjusting ratio.
Optionally, the space adjusting unit 122 is specifically configured to:
acquiring new and old user fluctuation data and/or cold and hot data fluctuation parameters, and determining a target adjustment cache space from the first data cache space and the second data cache space based on the new and old user fluctuation data and/or the cold and hot data fluctuation parameters;
determining a first cache adjustment proportion for a third data cache space;
selecting a first sliding window adjusting area from a third data cache space based on the first cache adjusting proportion, and associating the first sliding window adjusting area with the target adjusting cache space, wherein the third data cache space is a data cache space except the target adjusting cache space in the first data cache space and the second data cache space.
Optionally, the space adjusting unit 122 is specifically configured to:
acquiring new and old user fluctuation data and/or cold and hot data fluctuation parameters, and determining a target adjustment cache space from the first data cache space and the second data cache space based on the new and old user fluctuation data and/or the cold and hot data fluctuation parameters;
obtaining a reserved cache space, and determining a second cache adjusting proportion aiming at the reserved cache space; the reserved cache space is a data cache space except the first data cache space and the second data cache space;
and selecting a second sliding window adjusting area from the reserved cache space based on the second cache adjusting proportion, and associating the second sliding window adjusting area with a target adjusting cache space.
Optionally, the apparatus 1 is further configured to:
and adjusting a cache elimination mechanism of the first data cache space and/or the second data cache space based on the target cache adjusting mode.
Optionally, the apparatus 1 is further configured to:
and if the target cache adjusting mode is a cache dynamic adjusting mode, performing cache elimination on the first data cache space by adopting a least recently used cache mechanism, and performing cache elimination on the second data cache space by adopting a least frequently used cache mechanism.
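A brief sketch of this eviction switch; the eviction_policy attribute is an assumption standing in for whatever eviction hook a concrete cache implementation exposes:

```python
def apply_eviction_policies(mode: str, hot_space, cold_space) -> None:
    # Under dynamic adjustment, hot data is evicted least-recently-used
    # and cold data least-frequently-used.
    if mode == "dynamic":
        hot_space.eviction_policy = "LRU"
        cold_space.eviction_policy = "LFU"
```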
Optionally, as shown in fig. 8, the apparatus 1 includes:
the data caching module 13 is configured to obtain first cache data to be stored in a first data caching space and/or a second data caching space;
the data caching module 13 is further configured to perform target encoding processing on the first cache data to obtain second cache encoded data, the second cache encoded data being cache encoded data that does not contain the key element data of the key-value pairs;
and to perform cache storage on the second cache data.
Optionally, the data caching module 13 is further configured to:
determining first cache key information corresponding to the first cache data, wherein the first cache key information comprises key element data and value element data generated based on a key-value pair coding mode;
and determining a separator aiming at the key element data, and carrying out coding processing based on the value element data and the separator to obtain second cache data.
Optionally, the data caching module 13 is further configured to:
responding to a data query request of a client, and acquiring second cache data;
performing target decoding processing on the second cache data to obtain third cache data, and sending the third cache data to a client; or sending the second cache data to a client so that the client performs target decoding processing on the second cache data to obtain third cache data;
and the third cache data is cache coded data containing key element data in key value pairs.
Optionally, the data caching module 13 is further configured to:
determining second cache key information corresponding to the second cache data, wherein the second cache key information comprises value element data and separators;
and determining key element data based on the value element data and/or the separator, and performing encoding processing based on the value element data and the key element data to obtain third cache data.
It should be noted that, when the cache management apparatus provided in the foregoing embodiment executes the cache management method, only the division of each functional module is illustrated by way of example, and in practical applications, the above function distribution may be completed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules, so as to complete all or part of the above described functions. In addition, the embodiments of the cache management apparatus and the cache management method provided in the foregoing embodiments belong to the same concept, and details of implementation processes thereof are shown in the embodiments of the method and are not described herein again.
The above serial numbers are for description only and do not represent the merits of the embodiments.
In one or more embodiments of the present specification, the service platform monitors the cache fluctuation states of hot data and cold data to obtain current access user distribution information, and dynamically adjusts the cache space of the first data cache space and/or the second data cache space based on that information. Since fluctuation in the cache amount of cold and hot data can be predicted to a certain extent from the access user distribution information, the corresponding cache space can be adjusted in advance, improving the cache hit rate when data is requested and reducing the processing pressure on the service platform.
The present specification further provides a computer storage medium. The computer storage medium may store multiple instructions suitable for being loaded by a processor to execute the cache management method of the embodiments shown in fig. 1 to 4; for the specific execution process, refer to the description of the embodiments shown in fig. 1 to 4, which is not repeated here.
The present application further provides a computer program product storing at least one instruction, where the at least one instruction is loaded by a processor to execute the cache management method of the embodiments shown in fig. 1 to 4; for the specific execution process, refer to the description of the embodiments shown in fig. 1 to 4, which is not repeated here.
Referring to fig. 9, a block diagram of an electronic device according to an exemplary embodiment of the present application is shown. The electronic device in the present application may comprise one or more of the following components: a processor 110, a memory 120, an input device 130, an output device 140, and a bus 150. The processor 110, memory 120, input device 130, and output device 140 may be coupled by a bus 150.
Processor 110 may include one or more processing cores. The processor 110 connects the various parts of the electronic device using various interfaces and lines, and performs the various functions of the electronic device 100 and processes data by running or executing the instructions, programs, code sets, or instruction sets stored in the memory 120 and by invoking the data stored in the memory 120. Optionally, the processor 110 may be implemented in hardware in at least one of the forms of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), and Programmable Logic Array (PLA). The processor 110 may integrate one or more of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like, where the CPU mainly handles the operating system, user interface, application programs, and so on; the GPU is responsible for rendering and drawing display content; and the modem handles wireless communication. It is understood that the modem may also not be integrated into the processor 110 and may instead be implemented by a separate communication chip.
The memory 120 may include Random Access Memory (RAM) or Read-Only Memory (ROM). Optionally, the memory 120 includes a non-transitory computer-readable medium. The memory 120 may be used to store instructions, programs, code, code sets, or instruction sets. The memory 120 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playing function, an image playing function, and the like), instructions for implementing the method embodiments described below, and so on; the operating system may be the Android system (including systems developed in depth on top of Android), the iOS system developed by Apple (including systems developed in depth on top of iOS), or another system. The data storage area may also store data created by the electronic device during use, such as phone books, audio and video data, chat log data, and the like.
Referring to fig. 10, the memory 120 may be divided into an operating system space, where an operating system is run, and a user space, where native and third-party applications are run. In order to ensure that different third-party application programs can achieve a better operation effect, the operating system allocates corresponding system resources for the different third-party application programs. However, the requirements of different application scenarios in the same third-party application program on system resources are different, for example, in a local resource loading scenario, the third-party application program has a higher requirement on the disk reading speed; in the animation rendering scene, the third-party application program has a high requirement on the performance of the GPU. The operating system and the third-party application program are independent from each other, and the operating system often cannot timely sense the current application scene of the third-party application program, so that the operating system cannot perform targeted system resource adaptation according to the specific application scene of the third-party application program.
In order to enable the operating system to distinguish a specific application scenario of the third-party application program, data communication between the third-party application program and the operating system needs to be opened, so that the operating system can acquire current scenario information of the third-party application program at any time, and further perform targeted system resource adaptation based on the current scenario.
Taking the Android system as an example, as shown in fig. 11, the memory 120 may store a Linux kernel layer 320, a system runtime library layer 340, an application framework layer 360, and an application layer 380, where the Linux kernel layer 320, the system runtime library layer 340, and the application framework layer 360 belong to the operating system space, and the application layer 380 belongs to the user space. The Linux kernel layer 320 provides underlying drivers for the various hardware of the electronic device, such as the display driver, audio driver, camera driver, Bluetooth driver, Wi-Fi driver, power management, and the like. The system runtime library layer 340 provides the main feature support for the Android system through a number of C/C++ libraries; for example, the SQLite library provides database support, the OpenGL/ES library provides 3D drawing support, the Webkit library provides browser kernel support, and so on. The system runtime library layer 340 also provides the Android runtime, which mainly supplies the core libraries that allow developers to write Android applications in the Java language. The application framework layer 360 provides the various APIs that may be used to build applications, such as activity management, window management, view management, notification management, content providers, package management, session management, resource management, and location management; developers can build their own applications using these APIs. At least one application runs in the application layer 380; these may be native applications shipped with the operating system, such as a contacts program, a messaging program, a clock program, a camera application, and the like, or third-party applications developed by third-party developers, such as game applications, instant messaging programs, photo beautification programs, and the like.
Taking the iOS system as an example, the programs and data stored in the memory 120 are shown in fig. 12. The iOS system includes: a core operating system layer 420 (Core OS Layer), a core services layer 440 (Core Services Layer), a media layer 460 (Media Layer), and a touchable layer 480 (Cocoa Touch Layer). The core operating system layer 420 includes the operating system kernel, drivers, and underlying program frameworks, which provide functionality closer to the hardware for use by the program frameworks in the core services layer 440. The core services layer 440 provides the system services and/or program frameworks required by applications, such as a Foundation framework, an account framework, an advertising framework, a data storage framework, a network connection framework, a geographic location framework, a motion framework, and so on. The media layer 460 provides audio-visual interfaces for applications, such as graphics and image interfaces, audio technology interfaces, video technology interfaces, and the wireless playback (AirPlay) interface for audio-video transmission. The touchable layer 480 provides various common interface-related frameworks for application development and is responsible for the user's touch interaction on the electronic device, such as a local notification service, a remote push service, an advertising framework, a game tool framework, a messaging User Interface (UI) framework, the UIKit framework, a map framework, and so on.
In the frameworks illustrated in fig. 12, the frameworks relevant to most applications include, but are not limited to: the base framework in the core services layer 440 and the UIKit framework in the touchable layer 480. The base framework provides many basic object classes and data types and supplies the most basic system services for all applications, independent of the UI. The classes provided by the UIKit framework form the basic UI class library for creating touch-based user interfaces; iOS applications can build their UIs on the UIKit framework, which therefore provides an application's infrastructure for constructing user interfaces, drawing, handling user interaction events, responding to gestures, and the like.
For the manner and principle by which the iOS system realizes data communication between third-party applications and the operating system, reference may be made to the Android system; details are not repeated here.
The input device 130 is used for receiving input commands or data, and the input device 130 includes, but is not limited to, a keyboard, a mouse, a camera, a microphone, or a touch device. The output device 140 is used for outputting instructions or data, and the output device 140 includes, but is not limited to, a display device, a speaker, and the like. In one example, the input device 130 and the output device 140 may be combined, and the input device 130 and the output device 140 are touch display screens for receiving touch operations of a user on or near the touch display screens by using any suitable object such as a finger, a touch pen, and the like, and displaying user interfaces of various applications. Touch displays are typically provided on the front panel of an electronic device. The touch display screen may be designed as a full-face screen, a curved screen, or a profiled screen. The touch display screen can also be designed to be a combination of a full-face screen and a curved-face screen, and a combination of a special-shaped screen and a curved-face screen, which is not limited in the specification.
In addition, those skilled in the art will appreciate that the configurations of the electronic devices illustrated in the above-described figures do not constitute limitations on the electronic devices, which may include more or fewer components than illustrated, or some components may be combined, or a different arrangement of components. For example, the electronic device further includes a radio frequency circuit, an input unit, a sensor, an audio circuit, a wireless fidelity (WiFi) module, a power supply, a bluetooth module, and other components, which are not described herein again.
In this specification, the execution subject of each step may be the electronic device described above. Optionally, the execution subject of each step is an operating system of the electronic device. The operating system may be an android system, an IOS system, or another operating system, which is not limited in this specification.
The electronic device of this specification may further be equipped with a display device, which may be any device capable of implementing a display function, for example: a cathode ray tube display (CRT), a light-emitting diode display (LED), an electronic ink screen, a liquid crystal display (LCD), a plasma display panel (PDP), and the like. A user may use the display device on the electronic device to view displayed text, images, video, and other information. The electronic device may be a server, a service platform, a smartphone, a tablet computer, a game device, an AR (Augmented Reality) device, an automobile, a data storage device, an audio playback device, a video playback device, a notebook, a desktop computing device, or the like.
In the electronic device shown in fig. 9, where the electronic device may be a service platform, the processor 110 may be configured to call an application program stored in the memory 120 and specifically perform the following operations:
monitoring a cache fluctuation state aiming at hot data and cold data, and acquiring access user distribution information based on the cache fluctuation state;
and based on the access user distribution information, carrying out cache space adjustment on a first data cache space and/or a second data cache space, wherein the first data cache space is a data storage space for caching the hot data, and the second data cache space is a data cache space for caching the cold data.
In one embodiment, when performing the monitoring of the cache fluctuation states for the hot data and the cold data and acquiring the access user distribution information based on the cache fluctuation states, the processor 110 specifically performs the following operations:
monitoring a cache fluctuation ratio for hot data and cold data;
and if the cache fluctuation ratio is greater than or equal to the fluctuation ratio threshold, acquiring the distribution information of the access users.
In one embodiment, the processor 110, when executing the obtaining of the access user distribution information, performs the following operations:
acquiring first user access data aiming at a new user type and second user access data aiming at an old user type;
determining new and old user fluctuation data based on the first user access data and the second user access data, and taking the new and old user fluctuation data as access user distribution information;
the new user type and the old user type are user types divided by access users aiming at the service platform.
In one embodiment, when performing the cache space adjustment on the first data cache space and/or the second data cache space based on the access user distribution information, the processor 110 performs the following steps:
determining a target cache adjusting mode based on the access user distribution information;
and adjusting the cache space of the first data cache space and/or the second data cache space by adopting the target cache adjusting mode.
In one embodiment, the access user distribution information is new and old user fluctuation data, and the processor 110 performs the following steps when executing the determining of the target cache adjusting mode based on the access user distribution information:
determining a user traffic fluctuation state for the service platform based on the old and new user fluctuation data;
if the user flow fluctuation state is a severe fluctuation state, determining that a target cache adjusting mode is a cache dynamic adjusting mode;
and if the user flow fluctuation state is a fluctuation stable state, determining that the target cache adjusting mode is a cache period adjusting mode.
In one embodiment, the processor 110, when executing the determining of the user traffic fluctuation status for the service platform based on the old and new user fluctuation data, executes the following steps:
the new and old user fluctuation data comprises user growth and new and old user proportion;
and if the user growth amount is larger than an increment threshold value, and the proportion of the new and old users is larger than a user proportion threshold value, determining that the user flow fluctuation state aiming at the service platform is the severe fluctuation state.
In an embodiment, the target cache adjusting manner is a dynamic cache adjusting manner, and when the processor 110 performs cache space adjustment on the first data cache space and/or the second data cache space by using the target cache adjusting manner, the following steps are performed:
and determining a sliding window adjusting area and a target adjusting cache space by adopting the cache dynamic adjusting mode, and adjusting the cache space of the target adjusting cache space based on the sliding window adjusting area, wherein the target adjusting cache space is at least one of the first data cache space and the second data cache space.
In one embodiment, the processor 110, when executing the determining the sliding window adjustment region, performs the following steps:
and determining a cache adjusting ratio based on the access user distribution information, and determining a sliding window adjusting area based on the cache adjusting ratio.
In an embodiment, when the processor 110 determines the sliding window adjustment region and the target adjustment cache space by using the dynamic cache adjustment manner, and performs cache space adjustment on the target adjustment cache space based on the sliding window adjustment region, the following steps are performed:
acquiring new and old user fluctuation data and/or cold and hot data fluctuation parameters, and determining a target adjustment cache space from the first data cache space and the second data cache space based on the new and old user fluctuation data and/or the cold and hot data fluctuation parameters;
determining a first cache adjustment proportion for a third data cache space;
selecting a first sliding window adjusting area from a third data cache space based on the first cache adjusting proportion, and associating the first sliding window adjusting area with the target adjusting cache space, wherein the third data cache space is a data cache space except the target adjusting cache space in the first data cache space and the second data cache space.
In an embodiment, when the processor 110 determines the sliding window adjustment region and the target adjustment cache space by using the dynamic cache adjustment manner, and performs cache space adjustment on the target adjustment cache space based on the sliding window adjustment region, the following steps are performed:
acquiring new and old user fluctuation data and/or cold and hot data fluctuation parameters, and determining a target adjustment cache space from the first data cache space and the second data cache space based on the new and old user fluctuation data and/or the cold and hot data fluctuation parameters;
obtaining a reserved cache space, and determining a second cache adjusting proportion aiming at the reserved cache space; the reserved cache space is a data cache space except the first data cache space and the second data cache space;
and selecting a second sliding window adjusting area from the reserved cache space based on the second cache adjusting proportion, and associating the second sliding window adjusting area with a target adjusting cache space.
In one embodiment, when executing the cache management method, the processor 110 further performs the following steps:
and adjusting a cache elimination mechanism of the first data cache space and/or the second data cache space based on the target cache adjusting mode.
In an embodiment, when performing the cache eviction mechanism adjustment on the first data cache space and/or the second data cache space based on the target cache adjustment manner, the processor 110 performs the following steps:
if the target cache adjusting mode is a cache dynamic adjusting mode, performing cache elimination processing on the first data cache space by adopting a least recently used cache mechanism, and performing cache elimination processing on the second data cache space by adopting a least frequently used cache mechanism.
In one embodiment, when executing the cache management method, the processor 110 further performs the following steps:
acquiring first cache data to be stored aiming at a first data cache space and/or a second data cache space;
performing target coding processing on the first cache data to obtain second cache coded data; the second cache coding data is cache coding data which does not contain key element data in the key value pair.
And carrying out cache storage on the second cache data.
In an embodiment, when the processor 110 performs the target encoding process on the first cache data to obtain the second cache data, the following steps are specifically performed:
determining first cache key information corresponding to the first cache data, wherein the first cache key information comprises key element data and value element data generated based on a key-value pair coding mode;
and determining a separator aiming at the key element data, and carrying out coding processing based on the value element data and the separator to obtain second cache data.
In one embodiment, when executing the cache management method, the processor 110 further performs the following steps:
responding to a data query request of a client, and acquiring second cache data;
performing target decoding processing on the second cache data to obtain third cache data, and sending the third cache data to a client; or sending the second cache data to a client so that the client performs target decoding processing on the second cache data to obtain third cache data;
and the third cache data is cache coded data containing key element data in key value pairs.
In an embodiment, when the processor 110 performs the target decoding processing on the second cache data to obtain third cache data, the following steps are specifically performed:
determining second cache key information corresponding to the second cache data, wherein the second cache key information comprises value element data and separators;
determining key element data based on the value element data and/or the separator, and performing encoding processing based on the value element data and the key element data to obtain third cache data.
In one or more embodiments of the present specification, the service platform dynamically adjusts the cache space of the first data cache space and/or the second data cache space based on the access user distribution information by monitoring the cache fluctuation states for hot data and cold data to obtain current access user distribution information. Based on the fluctuation of the cache amount of the cold and hot data which can be predicted to a certain extent by accessing the user distribution information, the corresponding cache space can be adjusted in advance, the cache hit rate when the data is requested is improved, and the processing pressure of the service platform is reduced.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above may be implemented by a computer program, which may be stored in a computer readable storage medium and executed by a computer to implement the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a read-only memory or a random access memory.
The above disclosure describes only the preferred embodiments of the present application and is certainly not intended to limit the scope of the claims of the present application; therefore, equivalent variations made according to the claims of the present application still fall within the scope covered by the present application.

Claims (20)

1. A cache management method is applied to a service platform and comprises the following steps:
monitoring a cache fluctuation state aiming at hot data and cold data, and acquiring access user distribution information based on the cache fluctuation state;
and based on the access user distribution information, carrying out cache space adjustment on a first data cache space and/or a second data cache space, wherein the first data cache space is a data storage space for caching the hot data, and the second data cache space is a data cache space for caching the cold data.
2. The method of claim 1, the monitoring a cache fluctuation state for hot and cold data, obtaining access user distribution information based on the cache fluctuation state, comprising:
monitoring a cache fluctuation ratio for hot data and cold data;
and if the cache fluctuation ratio is greater than or equal to the fluctuation ratio threshold, acquiring the distribution information of the access users.
3. The method of claim 1 or 2, the obtaining access user distribution information comprising:
acquiring first user access data aiming at a new user type and second user access data aiming at an old user type;
determining new and old user fluctuation data based on the first user access data and the second user access data, and taking the new and old user fluctuation data as access user distribution information;
the new user type and the old user type are user types divided by access users aiming at the service platform.
4. The method of claim 1, wherein the adjusting the cache space of the first data cache space and/or the second data cache space based on the access user distribution information comprises:
determining a target cache adjusting mode based on the access user distribution information;
and adjusting the cache space of the first data cache space and/or the second data cache space by adopting the target cache adjusting mode.
5. The method of claim 4, wherein the visiting user profile information is new and old user fluctuation data,
the determining a target cache adjusting mode based on the access user distribution information includes:
determining a user traffic fluctuation state for the service platform based on the new and old user fluctuation data;
if the user flow fluctuation state is a severe fluctuation state, determining that a target cache regulation mode is a cache dynamic regulation mode;
and if the user flow fluctuation state is a fluctuation stable state, determining that the target cache adjusting mode is a cache period adjusting mode.
6. The method of claim 5, the determining a user traffic fluctuation state for the service platform based on the old and new user fluctuation data, comprising:
the new and old user fluctuation data comprises user growth and proportion of new and old users;
and if the user growth is larger than an increment threshold and the proportion of the new and old users is larger than a user proportion threshold, determining that the user traffic fluctuation state for the service platform is the severe fluctuation state.
7. The method of claim 4, wherein the target cache adjustment mode is a dynamic cache adjustment mode,
the adjusting the cache space of the first data cache space and/or the second data cache space by adopting the target cache adjusting mode comprises the following steps:
and determining a sliding window adjusting area and a target adjusting cache space by adopting the cache dynamic adjusting mode, and adjusting the cache space of the target adjusting cache space based on the sliding window adjusting area, wherein the target adjusting cache space is at least one of the first data cache space and the second data cache space.
8. The method of claim 7, the determining a sliding window adjustment region, comprising:
and determining a cache adjusting ratio based on the access user distribution information, and determining a sliding window adjusting area based on the cache adjusting ratio.
9. The method according to claim 7, wherein the determining a sliding window adjustment region and a target adjustment cache space by using the cache dynamic adjustment manner, and performing cache space adjustment on the target adjustment cache space based on the sliding window adjustment region, includes:
acquiring new and old user fluctuation data and/or cold and hot data fluctuation parameters, and determining a target adjustment cache space from the first data cache space and the second data cache space based on the new and old user fluctuation data and/or the cold and hot data fluctuation parameters;
determining a first cache adjustment proportion for a third data cache space;
selecting a first sliding window adjusting area from a third data cache space based on the first cache adjusting proportion, and associating the first sliding window adjusting area with the target adjusting cache space, wherein the third data cache space is a data cache space except the target adjusting cache space in the first data cache space and the second data cache space.
10. The method according to claim 7, wherein the determining a sliding window adjustment region and a target adjustment cache space by using the cache dynamic adjustment manner, and performing cache space adjustment on the target adjustment cache space based on the sliding window adjustment region, includes:
acquiring new and old user fluctuation data and/or cold and hot data fluctuation parameters, and determining a target adjustment cache space from the first data cache space and the second data cache space based on the new and old user fluctuation data and/or the cold and hot data fluctuation parameters;
acquiring a reserved cache space, and determining a second cache adjusting proportion aiming at the reserved cache space; the reserved cache space is a data cache space except the first data cache space and the second data cache space;
and selecting a second sliding window adjusting area from the reserved cache space based on the second cache adjusting proportion, and associating the second sliding window adjusting area with a target adjusting cache space.
11. The method of claim 4, further comprising:
and adjusting a cache elimination mechanism of the first data cache space and/or the second data cache space based on the target cache adjusting mode.
12. The method of claim 11, wherein the adjusting the cache eviction mechanism for the first data cache space and/or the second data cache space based on the target cache adjustment manner comprises:
and if the target cache adjusting mode is a cache dynamic adjusting mode, performing cache elimination on the first data cache space by adopting a least recently used cache mechanism, and performing cache elimination on the second data cache space by adopting a least frequently used cache mechanism.
13. The method of claim 1, further comprising:
acquiring first cache data to be stored aiming at a first data cache space and/or a second data cache space;
performing target coding processing on the first cache data to obtain second cache coded data; the second cache coded data is cache coded data which does not contain key element data in the key value pair;
and carrying out cache storage on the second cache data.
14. The method of claim 13, wherein the performing target encoding processing on the first cache data to obtain second cache data comprises:
determining first cache key information corresponding to the first cache data, wherein the first cache key information comprises key element data and value element data generated based on a key-value pair coding mode;
and determining a separator aiming at the key element data, and carrying out coding processing based on the value element data and the separator to obtain second cache data.
15. The method of claim 13, further comprising:
acquiring the second cache data in response to a data query request from a client; and
performing target decoding processing on the second cache data to obtain third cache data and sending the third cache data to the client, or sending the second cache data to the client so that the client performs the target decoding processing on the second cache data to obtain the third cache data,
wherein the third cache data is encoded cache data containing the key element data of the key-value pairs.
16. The method of claim 15, wherein the performing target decoding processing on the second cache data to obtain the third cache data comprises:
determining second cache key information corresponding to the second cache data, wherein the second cache key information comprises the value element data and the separator; and
determining the key element data based on the value element data and/or the separator, and performing encoding processing based on the value element data and the key element data to obtain the third cache data.
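Decoding reverses the encoding: the separator splits out the value element data, and the key element data is restored from knowledge held outside the cache. The claim leaves open exactly how keys are determined from the values and/or separator; the sketch below assumes the simplest option, a field order agreed in advance (FIELD_ORDER is an assumption, matching the sorted order used by the hypothetical encoder above).

```python
SEP = "\x1f"                   # same separator as on the encoding side
FIELD_ORDER = ["name", "uid"]  # assumption: schema shared by encoder and decoder

def decode_with_keys(encoded: str) -> dict:
    """Rebuild the 'third cache data': key-value pairs with keys restored."""
    values = encoded.split(SEP)
    return dict(zip(FIELD_ORDER, values))

print(decode_with_keys("ann\x1f42"))  # -> {'name': 'ann', 'uid': '42'}
```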
17. A cache management apparatus, the apparatus comprising:
an information acquisition module configured to monitor the cache fluctuation states of hot data and cold data, and to acquire access user distribution information based on the cache fluctuation states; and
a data processing module configured to perform cache space adjustment on a first data cache space and/or a second data cache space based on the access user distribution information, wherein the first data cache space is a data cache space for caching the hot data, and the second data cache space is a data cache space for caching the cold data.
18. A computer storage medium storing a plurality of instructions adapted to be loaded by a processor and to perform the method steps according to any of claims 1 to 16.
19. A computer program product storing at least one instruction adapted to be loaded by a processor and to perform the method steps according to any of claims 1 to 16.
20. An electronic device, comprising: a processor and a memory; wherein the memory stores a computer program adapted to be loaded by the processor and to perform the method steps of any of claims 1 to 16.
CN202210907778.5A 2022-07-29 2022-07-29 Cache management method and device, storage medium and electronic equipment Pending CN115334158A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210907778.5A CN115334158A (en) 2022-07-29 2022-07-29 Cache management method and device, storage medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN115334158A (en) 2022-11-11

Family

ID=83919121

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210907778.5A Pending CN115334158A (en) 2022-07-29 2022-07-29 Cache management method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN115334158A (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108459821A (en) * 2017-02-21 2018-08-28 中兴通讯股份有限公司 A kind of method and device of data buffer storage
US20210133103A1 (en) * 2017-02-21 2021-05-06 Zte Corporation Data caching method and apparatus
CN107426302A (en) * 2017-06-26 2017-12-01 腾讯科技(深圳)有限公司 Access scheduling method, apparatus, system, terminal, server and storage medium
CN107562804A (en) * 2017-08-08 2018-01-09 上海数据交易中心有限公司 Data buffer service system and method, terminal
CN109344092A (en) * 2018-09-11 2019-02-15 天津易华录信息技术有限公司 A kind of method and system improving cold storing data reading speed
CN110442309A (en) * 2019-07-24 2019-11-12 广东紫晶信息存储技术股份有限公司 A kind of cold and hot method for interchanging data and system based on optical storage
CN113742131A (en) * 2020-05-29 2021-12-03 伊姆西Ip控股有限责任公司 Method, electronic device and computer program product for storage management
CN113688160A (en) * 2021-09-08 2021-11-23 北京沃东天骏信息技术有限公司 Data processing method, processing device, electronic device and storage medium

Similar Documents

Publication Publication Date Title
CN109684358B (en) Data query method and device
CN111240837B (en) Resource allocation method, device, terminal and storage medium
US10698559B2 (en) Method and apparatus for displaying content on same screen, and terminal device
US9712854B2 (en) Cost-aware cloud-based content delivery
US10331769B1 (en) Interaction based prioritized retrieval of embedded resources
US20100087179A1 (en) Device, system and method for providing distributed online services
US9798827B2 (en) Methods and devices for preloading webpages
US9374244B1 (en) Remote browsing session management
CN111447107B (en) Network state determining method and device, storage medium and electronic equipment
CN111124668B (en) Memory release method, memory release device, storage medium and terminal
WO2014001927A1 (en) Incremental preparation of videos for delivery
US9722851B1 (en) Optimized retrieval of network resources
Xinogalos et al. Recent advances delivered by HTML 5 in mobile cloud computing applications: a survey
WO2019047708A1 (en) Resource configuration method and related product
CN113117326A (en) Frame rate control method and device
CN110572815A (en) Network access method, device, storage medium and terminal
CN115334158A (en) Cache management method and device, storage medium and electronic equipment
CN115328725A (en) State monitoring method and device, storage medium and electronic equipment
CN107426114A (en) Resource allocation methods and system
CN111770510A (en) Network experience state determination method and device, storage medium and electronic equipment
US10693991B1 (en) Remote browsing session management
CN114764362A (en) Virtual resource obtaining method and device, electronic equipment and storage medium
CN111818509A (en) Resource conversion method, device and equipment
EP4060592A1 (en) Method and system for acquiring content, user terminal, and content server
CN112612487B (en) Application installation method, device, storage medium and terminal

Legal Events

Code Title
PB01 Publication
SE01 Entry into force of request for substantive examination