CN109739516B - Cloud cache operation method and system - Google Patents

Cloud cache operation method and system

Info

Publication number
CN109739516B
CN109739516B
Authority
CN
China
Prior art keywords
cache
data
instance
unified
information data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811643396.6A
Other languages
Chinese (zh)
Other versions
CN109739516A (en)
Inventor
刘威
冷迪
黄建华
陈瑞
吕志宁
庞宁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Power Supply Bureau Co Ltd
Shenzhen Comtop Information Technology Co Ltd
Original Assignee
Shenzhen Power Supply Bureau Co Ltd
Shenzhen Comtop Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Power Supply Bureau Co Ltd, Shenzhen Comtop Information Technology Co Ltd filed Critical Shenzhen Power Supply Bureau Co Ltd
Priority to CN201811643396.6A priority Critical patent/CN109739516B/en
Publication of CN109739516A publication Critical patent/CN109739516A/en
Application granted granted Critical
Publication of CN109739516B publication Critical patent/CN109739516B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention provides a cloud cache operation method, which comprises the following steps: collecting monitoring information data of a cache instance and state information data of the cache instance at a set frequency; providing a unified cache API interface package externally on the cloud platform server, and realizing multi-level caching of the monitoring information data and the state information data of the cache instance through the unified cache API interface package, wherein the cloud platform server is provided with a Redis database and the unified cache API interface package implements the interface of the cache API of the Redis database; defining a cache data elimination strategy, a cache space recovery strategy, a cache capacity control strategy and a cache persistence strategy, and managing these strategies. The invention can reduce the complexity and management cost of the distributed cache, enhance the stability and configurability of the distributed cache, reduce the influence of cluster scale and replication on performance, and improve the usability of the API.

Description

Cloud cache operation method and system
Technical Field
The present invention relates to the field of network technologies, and in particular, to a cloud cache operation method and system.
Background
With the continuous development of software system platform services, cloud computing needs to provide elastic and massive resources for services, and the demand for cached data storage keeps growing. The cached data in the traditional system exceeds a terabyte and is currently deployed centrally in the shared memory of a minicomputer; the data scale has reached the memory bottleneck of a single host, and subsequent capacity expansion is costly. Introducing distributed caching technology into the cloud platform solves the following key problems:
1. the data is deployed in the shared memory of a minicomputer, where the memory capacity is small and has reached its upper limit;
2. the real-time data expansion requirements of services keep increasing, demanding system elasticity and flexibility;
3. large-scale concurrent data I/O creates a performance bottleneck, with heavy database load, low transaction throughput and long system latency;
4. the complexity and management cost of the distributed cache need to be reduced;
5. the stability and configurability of the distributed cache need to be enhanced;
6. the impact of cluster size and replication on performance needs to be reduced;
7. the usability of the API (Application Programming Interface) and client development efficiency need to be improved.
Disclosure of Invention
In order to solve the above technical problems, the invention provides a cloud cache operation method and system, which reduce the complexity and management cost of the distributed cache, enhance the stability and configurability of the distributed cache, reduce the influence of cluster scale and replication on performance, and improve the usability of the API.
The invention provides an operation method of cloud cache, which comprises the following steps:
collecting monitoring information data of the cache instance and state information data of the cache instance at a set frequency;
providing a unified cache API interface package externally on a cloud platform server, and realizing multi-level caching of the monitoring information data of the cache instance and the state information data of the cache instance through the unified cache API interface package, wherein the cloud platform server is provided with a Redis database, the unified cache API interface package implements the interface of the cache API of the Redis database, and the unified cache API interface package also integrates third-party cache components and shields the differences between the third-party cache components;
defining a cache data elimination strategy, a cache space recovery strategy, a cache capacity control strategy and a cache persistence strategy, and managing the cache data elimination strategy, the cache space recovery strategy, the cache capacity control strategy and the cache persistence strategy.
Preferably, the monitoring information of the cache instance includes hit rate, capacity, number of reads, number of writes, and number of deleted objects of the cache instance.
Preferably, the multi-level caching of the monitoring information data of the cache instance and the state information data of the cache instance is realized through the unified cache API interface package, specifically:
realizing local caching and/or remote caching of the monitoring information data of the cache instance and the state information data of the cache instance through the unified cache API interface package.
Preferably, the method further comprises the following steps:
after the local cache and the remote cache of the data are realized through the unified cache API interface package, deleting the data corresponding to the local cache when the data of the remote cache are changed.
Preferably, the cache data elimination policy includes: supporting permanent validity of the cache data, supporting validity of the cache data within a set time period, and supporting elimination of the cache data after a preset idle time.
The invention provides an operation system of cloud cache, comprising:
the cache frame monitoring module is used for collecting monitoring information data of the cache instance and state information data of the cache instance according to the set frequency;
the cache framework core module is used for providing a unified cache API interface package externally on the cloud platform server, and realizing multi-level caching of the monitoring information data of the cache instance and the state information data of the cache instance through the unified cache API interface package, wherein the cloud platform server is provided with a Redis database, the unified cache API interface package implements the interface of the cache API of the Redis database, and the unified cache API interface package also integrates third-party cache components and shields the differences between the third-party cache components;
the cache framework policy configuration module is used for defining a cache data elimination policy, a cache space recovery policy, a cache capacity control policy and a cache persistence policy, and managing the cache data elimination policy, the cache space recovery policy, the cache capacity control policy and the cache persistence policy.
Preferably, the monitoring information of the cache instance includes hit rate, capacity, number of reads, number of writes, and number of deleted objects of the cache instance.
Preferably, the cache framework core module is further configured to realize local caching and/or remote caching of the monitoring information data of the cache instance and the state information data of the cache instance through the unified cache API interface package.
Preferably, the method further comprises:
and the cache data coordination module is used for deleting the data corresponding to the local cache when the data of the remote cache changes after the local cache and the remote cache of the data are realized through the unified cache API interface packet.
Preferably, the cache data elimination policy includes: supporting permanent validity of the cache data, supporting validity of the cache data within a set time period, and supporting elimination of the cache data after a preset idle time.
The implementation of the invention has the following beneficial effects: the method and the system provided by the invention can better use the cache, reduce the complexity and the management cost of the distributed cache, enhance the stability and the configurability of the distributed cache, reduce the influence of the scale and the replication of the cluster on the performance, improve the usability of the API and improve the development efficiency of the client.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of an operation method of cloud cache provided by the invention.
Fig. 2 is a schematic block diagram of an operating system of the cloud cache provided by the present invention.
FIG. 3 is a schematic diagram of the overall cache framework provided by the present invention.
Detailed Description
The invention provides a cloud cache operation method, as shown in fig. 1, which comprises the following steps:
The monitoring information data of the cache instance and the state information data of the cache instance are collected at a set frequency (for example, once every 30 seconds by default, with the frequency adjustable), so that the cache instance is monitored.
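For illustration, the periodic collection described above could be implemented with a scheduled task, as in the following Java sketch; the class name, the collect method and the sampled metrics are assumptions drawn from this paragraph rather than a prescribed implementation.

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class CacheInstanceMonitor {

    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

    // Starts collection at the set frequency; 30 seconds is the default mentioned above.
    public void start(String cacheInstanceName, long intervalSeconds) {
        scheduler.scheduleAtFixedRate(() -> collect(cacheInstanceName), 0, intervalSeconds, TimeUnit.SECONDS);
    }

    // Placeholder for sampling hit rate, capacity, reads, writes and deleted-object counts.
    private void collect(String cacheInstanceName) {
        // ... query the cache instance and store the monitoring and state information data ...
    }

    public void stop() {
        scheduler.shutdown();
    }
}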
A unified cache API (Application Programming Interface) interface package is provided externally on the cloud platform server, and multi-level caching of the monitoring information data of the cache instance and the state information data of the cache instance is realized through the unified cache API interface package, wherein the cloud platform server is provided with a Redis database, the unified cache API interface package implements the interface of the cache API of the Redis database (the interface of the cache API of the Redis database includes the MBean interface of JCache and extends it with a capacity method), and the unified cache API interface package also integrates third-party cache components and shields the differences between the third-party cache components. The unified cache API interface package is extended on the basis of the JCache interface, providing touch, batch and asynchronous execution interfaces on that basis; the Java client Jedis of Redis is used, and RedisTemplate in spring-data-redis is used to call and wrap Jedis, thereby simplifying the code of the adaptation layer.
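As a non-authoritative sketch of such a unified cache API interface package, the interface below extends the JCache (JSR-107) Cache interface with touch, batch and asynchronous operations; the method names are assumptions, and an adapter implementing it could delegate to RedisTemplate from spring-data-redis, which wraps Jedis as described above.

import java.util.Map;
import java.util.Set;
import java.util.concurrent.CompletableFuture;
import javax.cache.Cache;

public interface UnifiedCache<K, V> extends Cache<K, V> {

    // Refreshes the validity period of an entry without reading or rewriting its value.
    boolean touch(K key);

    // Batch read; JCache itself already provides getAll and putAll for bulk operations.
    Map<K, V> getBatch(Set<? extends K> keys);

    // Asynchronous read so callers do not block on the remote Redis round trip.
    CompletableFuture<V> getAsync(K key);
}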
Defining a cache data elimination strategy, a cache space recovery strategy, a cache capacity control strategy and a cache persistence strategy, and managing the cache data elimination strategy, the cache space recovery strategy, the cache capacity control strategy and the cache persistence strategy.
The above data caching based on the cache API package is supplied through a CachingProvider, which is generally obtained via the Caching class. Typically, the cache SPI implementation is provided in the form of a service and is configured in /META-INF/services/javax.cache.spi.CachingProvider.
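A minimal JSR-107 usage sketch of this provider lookup follows; the cache name "monitoring-data" and the sample key are illustrative.

import javax.cache.Cache;
import javax.cache.CacheManager;
import javax.cache.Caching;
import javax.cache.configuration.MutableConfiguration;
import javax.cache.spi.CachingProvider;

public class ProviderLookupExample {
    public static void main(String[] args) {
        // Resolves the provider declared in /META-INF/services/javax.cache.spi.CachingProvider.
        CachingProvider provider = Caching.getCachingProvider();
        CacheManager manager = provider.getCacheManager();

        MutableConfiguration<String, String> config = new MutableConfiguration<>();
        Cache<String, String> cache = manager.createCache("monitoring-data", config);

        cache.put("instance-1:hitRate", "0.98");
        System.out.println(cache.get("instance-1:hitRate"));
    }
}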
In the above multi-level cache, partitioned caching may be performed at each cache level as required.
When the above-mentioned cache instance is monitored, it can be monitored by partition, with statistical data collected per partition; in particular, the monitoring can be performed through JMX at runtime.
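By way of example, runtime JMX monitoring could read the statistics MBeans that JCache providers register when statistics are enabled; the ObjectName pattern below follows the common "javax.cache" domain convention and the attribute names mirror the CacheStatisticsMXBean getters, so both should be treated as assumptions that depend on the provider.

import java.lang.management.ManagementFactory;
import java.util.Set;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class JmxCacheStatsReader {
    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();

        // Wildcard query, since the exact key properties vary between cache providers.
        Set<ObjectName> names = server.queryNames(new ObjectName("javax.cache:type=CacheStatistics,*"), null);

        for (ObjectName name : names) {
            Object hits = server.getAttribute(name, "CacheHits");
            Object hitPercentage = server.getAttribute(name, "CacheHitPercentage");
            System.out.println(name + " hits=" + hits + " hitRate=" + hitPercentage + "%");
        }
    }
}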
Further, the monitoring information of the cache instance includes life cycle data of the objects of the cache instance, such as the hit rate, capacity, number of reads, number of writes, and number of deleted objects of the cache instance. Each cached data entry has a defined validity period; once the validity period is exceeded, the data entry can no longer be accessed, updated or deleted. The cache validity period can be set.
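The validity period can be configured through the standard JCache expiry policy, as in the hedged sketch below; the 30-minute duration and the key/value types are illustrative only.

import java.util.concurrent.TimeUnit;
import javax.cache.configuration.MutableConfiguration;
import javax.cache.expiry.CreatedExpiryPolicy;
import javax.cache.expiry.Duration;

public class ExpiryConfigExample {
    public static MutableConfiguration<String, Object> withValidityPeriod() {
        // Entries expire 30 minutes after creation and can then no longer be accessed.
        return new MutableConfiguration<String, Object>()
                .setExpiryPolicyFactory(CreatedExpiryPolicy.factoryOf(new Duration(TimeUnit.MINUTES, 30)))
                .setStatisticsEnabled(true); // statistics needed for the JMX monitoring above
    }
}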
Preferably, the method further comprises the following step: the cache data of a single object or of an entire cache instance can be cleared according to a user instruction.
Further, the multi-level caching of the monitoring information data of the cache instance and the state information data of the cache instance is realized through the unified cache API interface package, specifically:
realizing local caching and/or remote caching of the monitoring information data of the cache instance and the state information data of the cache instance through the unified cache API interface package.
Further, the operation method of the cloud cache further comprises the following steps:
if the consistency of the local cache and the remote cache is set: after the local cache and the remote cache of the data are realized through the unified cache API interface package, when the data of the remote cache are changed, for example, changed, deleted or expired, the data corresponding to the local cache are deleted. If the consistency of the local cache and the remote cache is not set, after the local cache and the remote cache of the data are realized through the unified cache API interface package, the data corresponding to the local cache cannot be deleted when the data of the remote cache are changed.
The caching and deletion of data must support a default consistency model, meaning that when concurrent cache modifications occur, the visibility of those modifications to threads accessing the cache is ensured. Under the default consistency model, most cache operations behave as if the cache held a lock for the corresponding key: when a cache operation acquires an exclusive read or write lock on a key, subsequent operations on that key are blocked until the lock is released.
The return value of some cache operations is the most recent value; when an entry is updated concurrently, this may be either the old or the new value, depending on which value the implementation returns.
Some operations perform an update only if the current state matches a given parameter; multiple threads invoking these methods complete their updates as if they shared a lock.
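The JCache API exposes such conditional operations directly, for example replace and putIfAbsent, as shown in this small illustrative snippet (the keys and values are made up):

import javax.cache.Cache;

public class ConditionalUpdateExample {
    static void update(Cache<String, String> cache) {
        // Succeeds only if the entry currently holds the expected old value.
        boolean replaced = cache.replace("instance-1:hitRate", "0.97", "0.98");
        // Writes only when no mapping exists yet for the key.
        boolean created = cache.putIfAbsent("instance-1:capacity", "1024");
        System.out.println("replaced=" + replaced + ", created=" + created);
    }
}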
The Java cache API makes extensive use of the Java generics defined by JSR-14, enabling the development of compile-time type-safe applications.
Nonetheless, compile-time type safety does not guarantee type correctness at runtime for applications using the cache. For some cache topologies, especially those that store or access cached objects across Java process boundaries, generic type information is erased by the Java runtime and cannot be obtained or transferred, which may make cache operations type-unsafe for the application. The application should therefore take care to ensure that the cache configuration uses the appropriate key and value types, enabling type checking where necessary.
The cache data elimination strategy includes: supporting permanent validity of the cache data, supporting validity of the cache data within a set time period, and supporting elimination of the cache data after a preset idle time. Here, the default idle time is 30 minutes.
When the cache capacity is exceeded, the cache space reclamation policy supports three algorithms: the LRU (Least Recently Used) algorithm, the LFU (Least Frequently Used) algorithm, which evicts data according to historical access frequency, and the FIFO (First In First Out) algorithm; the default is the LRU algorithm.
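As an illustration of the default LRU behaviour only (not the patent's implementation), a local store can piggyback on LinkedHashMap's access-order mode:

import java.util.LinkedHashMap;
import java.util.Map;

public class LruLocalStore<K, V> extends LinkedHashMap<K, V> {

    private final int maxEntries;

    public LruLocalStore(int maxEntries) {
        super(16, 0.75f, true); // accessOrder = true keeps the least recently used entry eldest
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxEntries; // reclaim space once the capacity is exceeded
    }
}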
Cache persistence is supported and is disabled by default; persistence is executed asynchronously; the storage mode supports both file and database modes.
Custom update and synchronization strategies between the local cache and the remote cache are also supported.
The method provided by the invention may further comprise the following step: controlling the upper size limit of the cache instance objects (dynamically adjustable), with no control applied by default. When the upper limit is exceeded, data is not cached and a warning log is recorded.
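A hedged sketch of this capacity control follows; the class name, the use of java.util.logging and the map-based store are assumptions, and a limit of 0 stands in for the default of no control:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.logging.Logger;

public class CapacityControlledCache<K, V> {

    private static final Logger LOG = Logger.getLogger(CapacityControlledCache.class.getName());

    private final Map<K, V> store = new ConcurrentHashMap<>();
    private volatile long maxObjects; // 0 means no limit, matching the default of no control

    public CapacityControlledCache(long maxObjects) {
        this.maxObjects = maxObjects;
    }

    // The upper limit is dynamically adjustable at runtime.
    public void setMaxObjects(long maxObjects) {
        this.maxObjects = maxObjects;
    }

    public void put(K key, V value) {
        if (maxObjects > 0 && store.size() >= maxObjects) {
            LOG.warning("Cache capacity " + maxObjects + " exceeded; entry for key " + key + " not cached");
            return; // data caching is not performed when the upper limit is exceeded
        }
        store.put(key, value);
    }
}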
The present invention also provides a cloud cache running system, as shown in fig. 2, which includes: the system comprises a cache frame monitoring module 1, a cache frame core module 2 and a cache frame policy configuration module 3.
The cache frame monitoring module 1 is used for collecting monitoring information data of a cache instance and state information data of the cache instance according to a set frequency so as to monitor the cache instance.
The cache framework core module 2 is configured to provide a unified cache API interface package externally on the cloud platform server, and realize multi-level caching of the monitoring information data of the cache instance and the state information data of the cache instance through the unified cache API interface package, where the cloud platform server is provided with a Redis database, the unified cache API interface package implements the interface of the cache API of the Redis database, and the unified cache API interface package also integrates the third-party cache components 4 and shields the differences between the third-party cache components 4.
The cache frame policy configuration module 3 is configured to define a cache data elimination policy, a cache space reclamation policy, a cache capacity control policy, and a cache persistence policy, and manage the cache data elimination policy, the cache space reclamation policy, the cache capacity control policy, and the cache persistence policy.
A schematic diagram of the structure of the cache framework is shown in fig. 3. The cache interface layer is mainly used for extending JCache and abstracting the configuration and monitoring interfaces. The cache adaptation layer is mainly used for adapting interfaces and for configuring and monitoring caches; it can be configured with a local cache, a remote cache, or both.
Further, the monitoring information of the cache instance includes hit rate, capacity, number of reads, number of writes, and number of deleted objects of the cache instance.
The cache framework core module 2 is further configured to realize local caching and/or remote caching of the monitoring information data of the cache instance and the state information data of the cache instance through the unified cache API interface package.
Further, the running system of the cloud cache further comprises: and a cache data coordination module.
The cache data coordination module is used for deleting the data corresponding to the local cache when the data of the remote cache changes after the local cache and the remote cache of the data are realized through the unified cache API interface package.
The cache data elimination strategy includes: supporting permanent validity of the cache data, supporting validity of the cache data within a set time period, and supporting elimination of the cache data after a preset idle time.
The method and the system provided by the invention can better use the cache, reduce the complexity and the management cost of the distributed cache, enhance the stability and the configurability of the distributed cache, reduce the influence of the scale and the replication of the cluster on the performance, improve the usability of the API and improve the development efficiency of the client.
The foregoing is a further detailed description of the invention in connection with the preferred embodiments, and it is not intended that the invention be limited to the specific embodiments described. It will be apparent to those skilled in the art that several simple deductions or substitutions may be made without departing from the spirit of the invention, and these should be considered to be within the scope of the invention.

Claims (8)

1. The operation method of the cloud cache is characterized by comprising the following steps of:
collecting monitoring information data of the cache instance according to the set frequency and state information data of the cache instance;
providing a unified cache API interface package on a cloud platform server externally, and carrying out multi-level cache on monitoring information data of cache instances and status information data of the cache instances through the unified cache API interface package, wherein the cloud platform server is provided with a Redis database, the unified cache API interface package integrates an interface of a cache API of the Redis database, and the unified cache API interface package also integrates a third party cache component and shields differences among the third party cache components; the monitoring information of the cache instance comprises hit rate, capacity, reading times, writing times and the number of deleted objects of the cache instance;
defining a cache data elimination strategy, a cache space recovery strategy, a cache capacity control strategy and a cache persistence strategy, and managing the cache data elimination strategy, the cache space recovery strategy, the cache capacity control strategy and the cache persistence strategy.
2. The cloud cache operation method according to claim 1, wherein the monitoring information data of the cache instance and the state information data of the cache instance are cached in multiple levels through a unified cache API packet, specifically:
and carrying out local caching and/or remote caching on the monitoring information data of the cache instance and the state information data of the cache instance through the unified cache API interface packet.
3. The method of operating a cloud cache according to claim 2, further comprising the steps of:
after the local cache and the remote cache of the data are carried out through the unified cache API interface package, deleting the data corresponding to the local cache when the data of the remote cache are changed.
4. The method for operating a cloud cache according to claim 1, wherein the cache data elimination policy includes: the method comprises the steps of supporting the persistent validity of the cache data, supporting the validity of the cache data in a set time period and supporting the elimination of the cache data in a preset idle time.
5. An operating system of a cloud cache, comprising:
the cache frame monitoring module is used for collecting monitoring information data of the cache instance and state information data of the cache instance according to the set frequency;
the cache framework core module is used for providing a unified cache API interface package on the cloud platform server outwards, and carrying out multi-level cache on monitoring information data of cache instances and state information data of the cache instances through the unified cache API interface package, wherein the cloud platform server is provided with a Redis database, the unified cache API interface package integrates the interfaces of the cache APIs of the Redis database, and the unified cache API interface package also integrates a third party cache component and shields the difference between the third party cache components; the monitoring information of the cache instance comprises hit rate, capacity, reading times, writing times and the number of deleted objects of the cache instance;
the cache framework policy configuration module is used for defining a cache data elimination policy, a cache space recovery policy, a cache capacity control policy and a cache persistence policy, and managing the cache data elimination policy, the cache space recovery policy, the cache capacity control policy and the cache persistence policy.
6. The cloud cache running system according to claim 5, wherein the cache framework core module is further configured to perform local cache and/or remote cache of monitoring information data of a cache instance and status information data of the cache instance through the unified cache API packet.
7. The cloud cached running system of claim 6, further comprising:
and the cache data coordination module is used for deleting the data corresponding to the local cache when the data of the remote cache changes after the local cache and the remote cache of the data are carried out through the unified cache API packet.
8. The cloud cache operating system of claim 6, wherein the cache data elimination policy comprises: the method comprises the steps of supporting the persistent validity of the cache data, supporting the validity of the cache data in a set time period and supporting the elimination of the cache data in a preset idle time.
CN201811643396.6A 2018-12-29 2018-12-29 Cloud cache operation method and system Active CN109739516B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811643396.6A CN109739516B (en) 2018-12-29 2018-12-29 Cloud cache operation method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811643396.6A CN109739516B (en) 2018-12-29 2018-12-29 Cloud cache operation method and system

Publications (2)

Publication Number Publication Date
CN109739516A CN109739516A (en) 2019-05-10
CN109739516B true CN109739516B (en) 2023-06-20

Family

ID=66362640

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811643396.6A Active CN109739516B (en) 2018-12-29 2018-12-29 Cloud cache operation method and system

Country Status (1)

Country Link
CN (1) CN109739516B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112269831A (en) * 2020-10-27 2021-01-26 广州助蜂网络科技有限公司 High-performance mass data synchronization method and device

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102227121A (en) * 2011-06-21 2011-10-26 中国科学院软件研究所 Distributed buffer memory strategy adaptive switching method based on machine learning and system thereof
CN104202424A (en) * 2014-09-19 2014-12-10 中国人民财产保险股份有限公司 Method for extending cache by software architecture

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8266392B2 (en) * 2007-08-31 2012-09-11 Red Hat, Inc. Cache access mechanism
CN104361030A (en) * 2014-10-24 2015-02-18 西安未来国际信息股份有限公司 Distributed cache architecture with task distribution function and cache method
CN105049530B (en) * 2015-08-24 2018-05-25 用友网络科技股份有限公司 A variety of distributed cache systems from adaptive device and method
US10013501B2 (en) * 2015-10-26 2018-07-03 Salesforce.Com, Inc. In-memory cache for web application data
US9990400B2 (en) * 2015-10-26 2018-06-05 Salesforce.Com, Inc. Builder program code for in-memory cache
CN107087012A (en) * 2016-02-15 2017-08-22 山东华平信息科技有限公司 Medical treatment & health prevention and control cloud platform and method based on mobile terminal
US10007607B2 (en) * 2016-05-31 2018-06-26 Salesforce.Com, Inc. Invalidation and refresh of multi-tier distributed caches
CN108183961A (en) * 2018-01-04 2018-06-19 中电福富信息科技有限公司 A kind of distributed caching method based on Redis
CN108334561A (en) * 2018-01-05 2018-07-27 深圳供电局有限公司 A kind of cross-site remote copy implementation method

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102227121A (en) * 2011-06-21 2011-10-26 中国科学院软件研究所 Distributed buffer memory strategy adaptive switching method based on machine learning and system thereof
CN104202424A (en) * 2014-09-19 2014-12-10 中国人民财产保险股份有限公司 Method for extending cache by software architecture

Also Published As

Publication number Publication date
CN109739516A (en) 2019-05-10

Similar Documents

Publication Publication Date Title
US10176057B2 (en) Multi-lock caches
US7516277B2 (en) Cache monitoring using shared memory
EP2478442B1 (en) Caching data between a database server and a storage system
EP1805630B1 (en) Cache eviction
EP3507694B1 (en) Message cache management for message queues
US7047387B2 (en) Block cache size management via virtual memory manager feedback
US9229869B1 (en) Multi-lock caches
US20110167239A1 (en) Methods and apparatuses for usage based allocation block size tuning
WO2020181810A1 (en) Data processing method and apparatus applied to multi-level caching in cluster
US8621143B2 (en) Elastic data techniques for managing cache storage using RAM and flash-based memory
US20160313920A1 (en) System and method for an accelerator cache and physical storage tier
CN113010479A (en) File management method, device and medium
CN104376096A (en) Method for asynchronous updating based on buffer area
CN109739516B (en) Cloud cache operation method and system
US8341368B2 (en) Automatic reallocation of structured external storage structures
US9213673B2 (en) Networked applications with client-caching of executable modules
US11269784B1 (en) System and methods for efficient caching in a distributed environment
WO2017127312A1 (en) Versioned records management using restart era
US7251660B2 (en) Providing mappings between logical time values and real time values in a multinode system
US11775527B2 (en) Storing derived summaries on persistent memory of a storage device
US10691615B2 (en) Client-side persistent caching framework
CN114896281A (en) Data processing method and system and electronic equipment
Adya et al. Fragment reconstruction: Providing global cache coherence in a transactional storage system
CN113742381B (en) Cache acquisition method, device and computer readable medium
CN117131088A (en) Menu permission query method and device based on multistage distributed cache

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant