CN113722363A - Cache public component and implementation, installation and operation method thereof - Google Patents

Cache public component and implementation, installation and operation method thereof

Info

Publication number
CN113722363A
Authority
CN
China
Prior art keywords
cache
data
interface
level
current
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110953485.6A
Other languages
Chinese (zh)
Other versions
CN113722363B (en
Inventor
Yan Wen (闫文)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Civil Aviation Southwest Kaiya Co ltd
Original Assignee
Chengdu Civil Aviation Southwest Kaiya Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Civil Aviation Southwest Kaiya Co ltd filed Critical Chengdu Civil Aviation Southwest Kaiya Co ltd
Priority to CN202110953485.6A priority Critical patent/CN113722363B/en
Publication of CN113722363A publication Critical patent/CN113722363A/en
Application granted granted Critical
Publication of CN113722363B publication Critical patent/CN113722363B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20: Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24: Querying
    • G06F16/245: Query processing
    • G06F16/2455: Query execution
    • G06F16/24552: Database cache management
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20: Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/21: Design, administration or maintenance of databases
    • G06F16/215: Improving data quality; Data cleansing, e.g. de-duplication, removing invalid entries or correcting typographical errors
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20: Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/26: Visual data mining; Browsing structured data

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Quality & Reliability (AREA)
  • Information Transfer Between Computers (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application provides a cache common component and methods for implementing, installing, and operating it. The implementation method comprises: S100, annotation declaration and parameter declaration format processing; S200, annotation parsing and configuration; S300, first-level cache encapsulation: the API of the distributed cache Redis is wrapped, and Spring dynamic configuration injects the application at load time so that third-party caches can be adapted; S400, second-level cache encapsulation: the local cache is wrapped in a map key-value structure, the map structure is partitioned within the encapsulation, and an automatic data-clearing mechanism is introduced that clears data when capacity is full or the cached data has expired; S500, visualization: Vue is used to implement a front-end visual interface component that presents the currently cached content as a tree list. With this scheme, cache access has no network request overhead and is fast, cache penetration is prevented, and the component has good universality and extensibility.

Description

Cache public component and implementation, installation and operation method thereof
Technical Field
The invention belongs to the technical field of computers, and in particular relates to a cache common component and methods for implementing, installing, and operating it.
Background
Although many open-source cache components are available, simply adding one to a project does not satisfy diversified requirements. For example, Redis is commonly integrated into a Spring environment to solve distributed data caching, but it cannot by itself eliminate the risk of an avalanche caused by expired key values, or by factors such as network failure or Redis server downtime. Especially under high concurrency, the main service and database connections can be saturated, responses become delayed, and ultimately the machine may even be dragged down by resource exhaustion.
Meanwhile, related caching technologies also exist for databases, database-connection middleware, and the like, but they fall short of expectations in use. For example, the MySQL database has a Query Cache, but that cache is cleared whenever a table is updated, so in a real environment it cannot deliver fast response and scaling for data whose access patterns change. MySQL is also limited by its own connection and network resources, and since the database sits at the lowest layer of the architecture's access chain, it is architecturally reasonable to cache above it.
Therefore, in addition to the distributed cache, a second-level local cache should be provided as support, to prevent cache penetration and to cover loss of the distributed cache's service capability. Even when a service can use a cache, diversified requirements may go unmet: for example, expiration time and automatic deletion cannot be configured per business requirement. Moreover, searching and clearing the cache has traditionally relied on third-party operation-and-maintenance tools, which makes emergency situations hard to handle; if the database and the cache become inconsistent, troubleshooting requires operations staff to log in to other operation-and-maintenance media and issue commands directly, which greatly increases operational difficulty.
Disclosure of Invention
In view of the above-mentioned deficiencies of the prior art, the present invention provides a cache common component and methods for implementing, installing, and operating it, in which cache access has no network request overhead and is fast, cache penetration is prevented, and universality and extensibility are improved.
In order to realize the purpose of the invention, the following scheme is adopted:
A cache common component implementation method comprises the following steps:
S100, annotation declaration and parameter declaration format processing:
a custom annotation @DistributeCache is created through Java's @interface mechanism; the annotation is declared at the interface function entry and enables the cache service by default; the annotation's configuration parameters are defined in the @interface class and comprise time parameters, control parameters, and a unique key;
the time parameters comprise a custom expiration time; if the user does not configure a custom expiration time, the default expiration time is used; the custom or default expiration time defines the expiration time in the first-level cache and in the second-level cache;
the control parameters include interface-level control parameters and global configuration parameters. An interface-level control parameter such as isOpen is declared at the interface, so that caching can be flexibly enabled or disabled for the current interface, which facilitates later testing and maintenance. The global configuration parameter globalIsOpen is set in the application's configuration file and is assigned after the program starts and loads; it has the highest priority, so when a global configuration exists the isOpen value at the interface does not take effect, and the global cache configuration opens or closes every interface in the current application that needs the cache function. In short, globalIsOpen is a setting in the application's main configuration file that serves as the highest-priority global switch for the cache of any interface function annotated with @DistributeCache in the current application;
the unique key is used to look up the corresponding cached value in the map-structured cache storage;
the parameter declaration format is segmented with ':' to match the key structure in Redis, and the background program likewise splits on ':';
S200, annotation parsing and configuration:
the aspect class is loaded automatically through Spring's @Component and declared as an aspect with @Aspect, so that every interface function currently carrying the cache annotation is cut into, and a pre-configured parsing interface in the common processing module is called to obtain the configuration parameters from the annotation declaration;
the common processing module obtains the interface's declared parameters and enters the parsing flow, in which the annotation parameters defined on the interface are obtained and parsed, and data is looked up in the first-level and second-level caches by the unique key; if data within its validity period is found in either cache, it is returned directly to the client that called the interface; otherwise the original interface's own data-lookup logic is executed, and before the result is returned, the storage interface of the first-level or second-level cache is called to store the data and set its validity period; validity is judged against the custom expiration time;
S300, first-level cache encapsulation: the API of the distributed cache Redis is wrapped, and Spring dynamic configuration injects the application at load time to adapt third-party caches;
S400, second-level cache encapsulation: the local cache is encapsulated using a map key-value structure; the map structure is partitioned within the encapsulation, and an automatic data-clearing mechanism, implemented with an LRU algorithm, is introduced to clear data automatically when capacity is full or the cached data has expired;
S500, visualization: Vue is used to implement a front-end visual interface component that presents the currently cached content in the form of a tree list.
Further, the automatic clearing of data when capacity is full or the data's cache time expires is implemented with an LRU algorithm, comprising the following steps:
caching data in the container in the form of a queue;
sorting the cached data in the queue by call time, with the data whose call time is closest to the current time placed at the head of the queue;
when capacity is full, automatically clearing cached data from the tail of the queue according to a preset proportion, the proportion being the share of the whole queue that must be cleared;
and directly deleting cached data whose cache time has expired.
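The queue-based eviction steps above can be sketched in Java. This is an illustrative approximation, not the patent's implementation: the class name, default sizes, and eviction ratio are assumptions, and a LinkedHashMap in access order stands in for the queue (most recently called entries at the "head", least recently called at the "tail").

```java
import java.util.Iterator;
import java.util.LinkedHashMap;

// Minimal LRU sketch: iteration order is least-recently-used first, so
// evicting "from the tail of the queue" means removing the first entries.
public class LruSketch<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;
    private final double evictRatio; // proportion of entries cleared when capacity is full

    public LruSketch(int capacity, double evictRatio) {
        super(16, 0.75f, true); // accessOrder=true: iteration order follows access recency
        this.capacity = capacity;
        this.evictRatio = evictRatio;
    }

    @Override
    public V put(K key, V value) {
        if (size() >= capacity && !containsKey(key)) {
            int toEvict = Math.max(1, (int) (capacity * evictRatio));
            Iterator<K> it = keySet().iterator(); // least recently used come first
            while (toEvict-- > 0 && it.hasNext()) {
                it.next();
                it.remove();
            }
        }
        return super.put(key, value);
    }
}
```

Expired-entry deletion is omitted here; it is shown together with the expiration-time check later in the description.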
A cache common component obtained by the cache common component implementation method comprises:
the first-level cache, which serves as the first-layer cache architecture and is implemented by wrapping the API of the distributed cache Redis; within the first-level cache, Spring dynamic configuration injects the application at load time, so that the first-level cache adapts to third-party caches;
the second-level cache, which encapsulates the local cache through a map key-value structure, the map structure being partitioned; the second-level cache has an automatic data-clearing function, implemented with an LRU algorithm, that clears data when capacity is full or the cached data has expired; the key in the map structure is the unique identifier of a cache entry, and its value is a cached object structure containing the specific data and the expiration time of the current cached object, used to judge on each access whether the object has expired;
the visualization component, which is implemented with Vue and displays the currently cached content as a tree list; its first-level menus are the distributed cache and the local cache, its second-level menu is the unique key of the current cache entry, and clicking a unique key displays the specific stored value and its validity period; the component's menu also provides query and clear functions for searching and clearing Redis and the local cache according to the entered query content.
An installation method for the cache common component comprises the following steps:
introducing the component dependency package in the pom file;
enabling aspect support with @EnableAspectJAutoProxy in the startup class;
and adding the configuration annotation at the entry of each interface that needs it.
An operation method of the cache common component comprises the following steps:
receiving an external request that reaches the application layer;
parsing the cache annotation of the specific response service called by the current external request, obtaining the query parameters received by the service together with the unique key, the cache expiration parameter, and the parameters indicating whether the first-level or second-level cache is enabled, all defined in the function annotation, and packaging them into a cache object;
and searching the first-level cache for the unique key of the current request:
if the unique key is found in the first-level cache and the current cache object is within its validity period, returning the current cache object;
if the unique key is not found in the first-level cache, searching the second-level cache:
if the unique key is found in the second-level cache and the current cache object is within its validity period, returning the current cache object;
if the unique key is found in neither cache, or it is found in the first-level or second-level cache but the cache object is past its validity period, the cached data is unavailable; the response service's database query logic is then executed, the resulting data is first stored into the first-level or second-level cache, and the data is finally returned to the requesting client.
After a cache object is found in the second-level cache by its unique key, the expiration time parameter in the object is compared with the current time to judge whether the cached data has expired; if it has, the cached data is invalid and cannot be returned directly to the external requester; the latest data must be queried from the database.
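The two-level lookup and back-fill flow described above can be sketched as follows. All class and method names here are illustrative assumptions, with a plain map standing in for Redis; the real component wraps the Redis API for the first level.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Illustrative two-level lookup: check L1, then L2, then fall back to the
// original query logic and back-fill both caches before returning.
public class TwoLevelLookupSketch {
    static class Entry {
        final Object value;
        final long expireAtMillis;
        Entry(Object value, long ttlMillis) {
            this.value = value;
            this.expireAtMillis = System.currentTimeMillis() + ttlMillis;
        }
        boolean valid() { return System.currentTimeMillis() < expireAtMillis; }
    }

    final Map<String, Entry> l1 = new ConcurrentHashMap<>(); // stands in for Redis
    final Map<String, Entry> l2 = new ConcurrentHashMap<>(); // local cache

    Object get(String key, Supplier<Object> originalQuery, long ttlMillis) {
        Entry e = l1.get(key);
        if (e != null && e.valid()) return e.value;   // first-level hit
        e = l2.get(key);
        if (e != null && e.valid()) return e.value;   // second-level hit
        Object fresh = originalQuery.get();           // original database query logic
        Entry entry = new Entry(fresh, ttlMillis);
        l1.put(key, entry);                           // back-fill before returning
        l2.put(key, entry);
        return fresh;
    }
}
```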
The invention has the following beneficial effects:
1. The first-level cache provides caching support by wrapping the powerful Redis, giving excellent performance, universality, and extensibility; the second-level cache lives in the application's own environment, so access is faster and has no network request overhead, it can continue providing service if Redis loses its service capability, and it prevents cache penetration.
2. The invention can be flexibly configured; it decouples caching from the service code, which eases future code maintenance.
3. Visual operation makes it convenient to handle everyday inconsistencies between the cache and actual storage caused by network problems or downtime, assisting observation and handling.
Drawings
Fig. 1 shows a flow chart of the implementation method of the cache common component according to an embodiment of the present application.
Fig. 2 shows a flow chart of the operation method of the cache common component according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of the present invention clearer, the embodiments are described in detail below with reference to the accompanying drawings; the embodiments described are only some, and not all, of the embodiments of the present invention.
One aspect of the present embodiment provides a method for implementing a cache common component, as shown in fig. 1, including the following steps:
S100, annotation declaration and parameter declaration format processing
The annotation parameter declaration: the custom annotation @DistributeCache is implemented directly with Java's @interface mechanism; the annotation is defined at the interface function entry and enables the cache service by default. The annotation's configuration parameters are defined in the annotation class: default expiration time, custom expiration time, and configuration items such as whether to enable the first-level and second-level caches. Time parameter definition: the subsequent annotation-parsing step reads these parameters one by one, and the system automatically falls back to the default cache time when no custom expiration time is set. The control parameter isOpen is declared at the interface, so that caching can be flexibly enabled or disabled for the current interface, which facilitates later testing and maintenance. globalIsOpen is set in the application's main configuration file; if configured, it has the highest priority and opens or closes the cache of every interface function annotated with @DistributeCache in the current application, so it can be understood as the global unified configuration. The unique key declaration matters because the cache is stored in a map structure, and looking up a cached value by its key requires the key to be unique.
The key code is explained below:
import java.lang.annotation.*;
import java.util.concurrent.TimeUnit;

@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
public @interface DistributeCache {               // custom annotation
    String key() default "";
    int distExpireTime() default 5;               // distributed cache default: 5 minutes
    int localExpireTime() default 10;             // local cache default time
    TimeUnit unit() default TimeUnit.MINUTES;     // time unit
    boolean isOpen() default true;                // open the distributed cache by default
    boolean globalIsOpen() default true;          // global cache configuration (acts on the entire service-layer interface)
    /*
     * The local cache guards against the distributed cache failing or being
     * penetrated; in principle its expiration time is longer than that of
     * the distributed cache.
     */
    boolean isLocalCache() default false;         // whether to open the local cache
    String description() default "";              // cache description
}
Parameter declaration format: segmentation is done with ':' to match the key structure in Redis, and the background program also splits on ':', which is very intuitive. The custom parameter-parsing function makes an explicit agreement on the format: a parameter may be a plain string, or an attribute of a request object written in the form "#{xxx.yyy}". For example, #{user.name} corresponds to the name attribute of the UserInfo parameter user.
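As a hypothetical illustration of the agreed "#{xxx.yyy}" format (the patent does not publish its parser; the class name, regular expression, and reflection approach here are assumptions), such placeholders could be resolved against the interface's named arguments like this:

```java
import java.lang.reflect.Field;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical resolver for "#{param.field}" placeholders in a key template,
// looking the field up by reflection on the named argument object.
public class KeyTemplateSketch {
    private static final Pattern PLACEHOLDER = Pattern.compile("#\\{(\\w+)\\.(\\w+)\\}");

    public static String resolve(String template, Map<String, Object> args) {
        Matcher m = PLACEHOLDER.matcher(template);
        StringBuilder sb = new StringBuilder();
        while (m.find()) {
            try {
                Object target = args.get(m.group(1));                       // e.g. "user"
                Field f = target.getClass().getDeclaredField(m.group(2));   // e.g. "name"
                f.setAccessible(true);
                m.appendReplacement(sb, String.valueOf(f.get(target)));
            } catch (ReflectiveOperationException ex) {
                throw new IllegalArgumentException("bad placeholder: " + m.group(), ex);
            }
        }
        m.appendTail(sb);
        return sb.toString();
    }
}
```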
S200 Annotation parsing configuration
After the annotation is declared and defined, annotation parsing loads the aspect class automatically through Spring's @Component; combined with @Aspect, the aspect class is declared, the interface functions currently carrying the cache annotation are cut into, and the parsing interface in the common module is called to obtain the parameters from the annotation declaration, i.e. the parameters defined in the annotation parameter declaration above.
Wherein, the common processing module: this module carries the business operation of the whole service function from start to finish, so the core common processing interfaces and the cache business logic are all encapsulated here, such as the implementation of the parsing interface, the main interface of the storage logic, and the interface for second-level cache operations. On the core path, after the interface's declared parameters are obtained and parsing begins, the parsing interface is called to resolve the parameters, and the core cache-processing interface then looks up data in the first-level and second-level caches by the unique key. If data within its validity period is found in either cache, it is returned directly to the client that called the interface; otherwise the original interface's own data-lookup logic is executed, and before the result is returned, the storage interface of the caches is called to store the data and set its validity period.
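The patent's cut-in uses Spring AOP; as a dependency-free sketch of the same interception idea, a JDK dynamic proxy can read the cache annotation on an interface method and consult a cache before delegating to the original logic. Every name below is an assumption, and the simple annotation stands in for @DistributeCache.

```java
import java.lang.annotation.*;
import java.lang.reflect.*;
import java.util.HashMap;
import java.util.Map;

// Dependency-free stand-in for the Spring aspect: a dynamic proxy that checks
// for a cache annotation on the called method before delegating to the target.
public class AspectSketch {
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    public @interface SimpleCache { String key(); }

    public interface UserService {
        @SimpleCache(key = "user:lookup")
        String lookup(String id);
    }

    static final Map<String, Object> cache = new HashMap<>();

    @SuppressWarnings("unchecked")
    public static <T> T cached(Class<T> iface, T target) {
        return (T) Proxy.newProxyInstance(iface.getClassLoader(), new Class<?>[]{iface},
            (proxy, method, args) -> {
                SimpleCache ann = method.getAnnotation(SimpleCache.class);
                if (ann == null) return method.invoke(target, args); // no annotation: pass through
                String key = ann.key() + ":" + args[0];              // parsed annotation -> unique key
                return cache.computeIfAbsent(key, k -> {
                    try { return method.invoke(target, args); }      // original lookup logic
                    catch (Exception e) { throw new RuntimeException(e); }
                });
            });
    }
}
```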
S300, first-level cache encapsulation
The internal implementation adapts to the third-party cache and serves as the first-layer cache architecture. The API of the existing distributed cache Redis is wrapped and adapted, and the Spring dynamic-configuration principle injects the application at load time; any other third-party cache must implement the internally provided adaptation interface so that its configuration loading and related call operations are exposed to the common processing module for use.
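The internally provided adaptation interface is not published in the patent; an assumed minimal shape of such an interface, with a trivial in-memory implementation standing in for the Redis wrapper, might look like:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Assumed shape of the adaptation interface a third-party cache would
// implement to be loaded and exposed to the common processing module.
public interface CacheAdapter {
    void put(String key, String value, long ttlSeconds);
    String get(String key);
    void remove(String key);

    // Trivial in-memory implementation used here only for illustration.
    class InMemoryAdapter implements CacheAdapter {
        private final Map<String, String> store = new ConcurrentHashMap<>();
        public void put(String key, String value, long ttlSeconds) {
            store.put(key, value); // TTL intentionally ignored in this sketch
        }
        public String get(String key) { return store.get(key); }
        public void remove(String key) { store.remove(key); }
    }
}
```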
S400, second-level cache encapsulation
The second-level cache encapsulates the local cache with a map key-value structure, and the map structure must provide the following characteristics: 1. thread safety, so lock control must be guaranteed when the same data is read and written; 2. high concurrent read-write capability, so the map structure needs a corresponding partition function to reduce read-write concentration in a single area; 3. identification and automatic clearing of expired data. Because memory resources are limited, data must be cleared when capacity is full or the cache time expires, and clearing requires a suitable condition or algorithm. The widely applicable LRU algorithm is chosen: a queue supports the eviction policy once the container reaches its storage limit; recently called cache data is continually moved to the head of the queue, while rarely called data settles at the tail, so once the container is full, tail elements can be evicted in a certain proportion. The key in the map data structure is the unique identifier of a cache entry, and the value is a cached object structure containing the specific data and the expiration time of the current cached object; on every program access, the validity of the data is judged: if it has expired it is deleted, otherwise the cached data is returned directly.
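The partition requirement (point 2 above) can be sketched as a segmented map in which each key hashes onto a segment with its own lock, so concurrent reads and writes are not concentrated in a single area. The segment count and class name are assumptions for illustration:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of a partitioned local-cache map: keys hash onto fixed segments,
// each guarded by its own lock, reducing contention under concurrency.
public class SegmentedMapSketch<K, V> {
    private final Map<K, V>[] segments;
    private final Object[] locks;

    @SuppressWarnings("unchecked")
    public SegmentedMapSketch(int segmentCount) {
        segments = (Map<K, V>[]) new HashMap[segmentCount];
        locks = new Object[segmentCount];
        for (int i = 0; i < segmentCount; i++) {
            segments[i] = new HashMap<>();
            locks[i] = new Object();
        }
    }

    private int segmentFor(Object key) {
        return (key.hashCode() & 0x7fffffff) % segments.length; // non-negative index
    }

    public void put(K key, V value) {
        int i = segmentFor(key);
        synchronized (locks[i]) { segments[i].put(key, value); }
    }

    public V get(K key) {
        int i = segmentFor(key);
        synchronized (locks[i]) { return segments[i].get(key); }
    }
}
```

In practice java.util.concurrent.ConcurrentHashMap provides this kind of striped concurrency out of the box; the explicit segments are shown only to make the partition idea visible.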
S500, visual implementation
Vue is used in the plug-in to implement a front-end visual interface component that presents the currently cached content as a tree list. The first-level menus are the distributed cache and the local cache, the second-level menu is the unique key of the current cache entry, and clicking a unique key displays the specific stored value and its validity period. The query and clear functions in the menu allow Redis and the local cache to be searched and cleared.
In another aspect of the embodiments of the present application, a cache common component is provided, obtained by the cache common component implementation method above, and comprising a first-level cache, a second-level cache, and a visualization component.
The first-level cache serves as the first-layer cache architecture and is implemented by wrapping the API of the distributed cache Redis; within the first-level cache, Spring dynamic configuration injects the application at load time, so that the first-level cache adapts to third-party caches.
The second-level cache encapsulates the local cache through a map key-value structure, and the map structure is partitioned; the second-level cache has an automatic data-clearing function, implemented with an LRU algorithm, that clears data when capacity is full or the cached data has expired; the key in the map structure is the unique identifier of a cache entry, and its value is a cached object structure containing the specific data and the expiration time of the current cached object, used to judge on each access whether the object has expired.
The visualization component is implemented with Vue and displays the currently cached content as a tree list; its first-level menus are the distributed cache and the local cache, its second-level menu is the unique key of the current cache entry, and clicking a unique key displays the specific stored value and its validity period; the component's menu also provides query and clear functions for searching and clearing Redis and the local cache according to the entered query content.
In another aspect of the embodiments of the present application, an installation method is provided, which is used for installing the cache common component.
Installation and use of the component
1. Introduce the component dependency package in the pom file (project code example 1 follows):
<dependency>
<groupId>com.xnky.soft.utils</groupId>
<artifactId>cache</artifactId>
<version>${cache.version}</version>
</dependency>
2. Enable aspect support by opening @EnableAspectJAutoProxy in the startup class (project code example 2):
@EnableAspectJAutoProxy
@SpringBootApplication
public class OprationApplication {
public static void main(String[] args) {
SpringApplication.run(OprationApplication.class, args);
}
}
3. Add the configuration annotation at the entry of each interface that needs configuration (cache annotation at the interface function entry, usage example):
@DistributeCache(key="agent:monitor:test:"+"#{user.name}:#{user.id}",isLocalCache = true,distExpireTime = 5,localExpireTime = 10,unit = TimeUnit.MINUTES)
@PostMapping("hello2")
public String hello2(UserInfo user) {
return String.format("Hello %s!", user.getName());
}。
In another aspect of the embodiment of the present application, an operation method of the cache common component is provided; as shown in fig. 2, the operation flow of the component, from top to bottom, comprises:
S1: an externally initiated access request reaches the current application layer.
S2: after the application layer receives the request, if the current service interface function carries the cache annotation declaration, the cache component's method cut-in is executed: before the method runs, the annotation at its head is parsed to obtain the parameters and unique key of the current request and the corresponding configured expiration parameters, which are packaged into a context parameter object; processing then proceeds to S3.
S3: the application layer checks the first-level cache for the current unique key; if it is found and within its validity period, the cached object is returned directly. Otherwise step S4 is executed.
S4: if the first-level cache misses, the second-level local cache is searched; if the data is found and within its validity period, the cached data is returned directly. Otherwise, go to S5.
S5: if the data misses in both the first-level and second-level cache devices, or the cached data has expired, the query logic of the original method is executed; after the original application logic obtains the data, the data is stored into the first-level and second-level caches before being returned, and is then finally returned to the client request.
In the embodiment of the present application, the local cache is flexibly configurable: it can be configured at the annotation entry, or turned on or off globally in the application's yml file, with the global configuration taking priority over the configuration at the interface entry. The container size is configurable, with a default of 500 elements. The local cache also starts a separate thread to perform timed cleaning of expired container data and to execute the LRU eviction policy when capacity is full.
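The separate cleanup thread can be sketched with a scheduled executor that periodically removes expired entries from the local cache; the interval, names, and entry structure here are assumptions, not the patent's code:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch of the timed-cleaning thread: a scheduled task sweeps the local
// cache and removes entries whose expiration time has passed.
public class CleanupSketch {
    static class Entry {
        final Object value;
        final long expireAtMillis;
        Entry(Object value, long ttlMillis) {
            this.value = value;
            this.expireAtMillis = System.currentTimeMillis() + ttlMillis;
        }
    }

    final Map<String, Entry> store = new ConcurrentHashMap<>();
    private final ScheduledExecutorService cleaner =
            Executors.newSingleThreadScheduledExecutor();

    public void start(long periodSeconds) {
        cleaner.scheduleAtFixedRate(this::sweep, periodSeconds, periodSeconds, TimeUnit.SECONDS);
    }

    void sweep() { // remove all expired entries
        long now = System.currentTimeMillis();
        store.entrySet().removeIf(e -> e.getValue().expireAtMillis <= now);
    }
}
```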
The embodiment of the application provides the support of the interface visualization tool, and the behavior condition of the local memory can not be detected without additionally connecting a third-party operation and maintenance tool.
The embodiment of the application can be flexibly configured for different scenarios: either a single cache level or both levels can be enabled. The uniform annotation-based configuration standard aids code readability.
The foregoing is merely a preferred embodiment of this invention and is not intended to be exhaustive or to limit the invention to the precise form disclosed. It will be understood by those skilled in the art that various changes may be made and equivalents may be substituted without departing from the scope of the invention.

Claims (7)

1. A cache common component implementation method is characterized by comprising the following steps:
S100, annotation declaration and parameter declaration format processing:
defining a custom distributed cache annotation @DistributeCache through Java's @interface mechanism, declaring the annotation for use at the interface function entry with the cache service enabled by default, and defining the configuration parameters of the current annotation in the @interface class, wherein the configuration parameters comprise time parameters, control parameters and a unique key;
the time parameters comprise a self-defined expiration time; if the user does not configure a self-defined time, a default expiration time is used; the self-defined or default expiration time defines the expiration time in both the first-level cache and the second-level cache;
the control parameters comprise interface-level switch parameters and global configuration parameters; the interface-level switch parameter is an open function declared at the interface, used to enable or disable caching for the current interface; the global control parameter, configured in the application configuration file and including a globalsopen parameter, enables or disables caching for every interface function annotated with @DistributeCache in the current application, and serves as the highest-priority global cache configuration;
the unique key is used for searching a corresponding cache value in the map structure storage of the cache;
the parameter declaration format is segmented by ':' in order to adapt to the key structure in Redis, and the background program likewise splits on ':';
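A minimal sketch of such an annotation declaration via Java's `@interface`. The parameter names here (`expireSeconds`, `open`, `key`) are illustrative assumptions, since the claim only names the annotation `@DistributeCache` and the categories of its parameters:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Custom distributed-cache annotation, declared at an interface function entry.
@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
@interface DistributeCache {
    long expireSeconds() default 300;   // time parameter: falls back to a default TTL
    boolean open() default true;        // interface-level switch: caching on by default
    String key() default "";            // unique key, segments separated by ':' for Redis
}

public class AnnotationDemo {
    @DistributeCache(expireSeconds = 60, key = "flight:info")
    static String queryFlightInfo() { return "CA1234"; }

    public static void main(String[] args) throws Exception {
        // The aspect would read these values reflectively at method cut-in time.
        DistributeCache dc = AnnotationDemo.class
                .getDeclaredMethod("queryFlightInfo")
                .getAnnotation(DistributeCache.class);
        System.out.println(dc.key() + " " + dc.expireSeconds() + " " + dc.open());
    }
}
```

RUNTIME retention is what allows the Spring aspect of step S200 to read the configuration reflectively when the annotated method is invoked.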
S200, annotation analysis configuration:
automatic loading of the aspect class is realized through Spring's @Component, and the class is declared as an aspect in combination with @Aspect, so as to cut into interface functions that carry the cache annotation; a pre-configured parsing interface in a common processing module is called to obtain the configuration parameters in the annotation declaration;
the common processing module is used for obtaining the interface declaration parameters and entering a parsing process; in the parsing process, the defined annotation parameters of the interface are obtained and parsed, and data is searched in the first-level cache and the second-level cache respectively according to the unique key; if data within its validity period is found in the first-level or second-level cache, the data is returned directly to the client calling the interface; otherwise the original interface's own data lookup logic is executed, and before the result is returned, the storage interface of the first-level or second-level cache is called to store the data into the first-level or second-level cache and set the cache validity period; the validity period is judged according to the self-defined expiration time;
S300, first-level cache encapsulation: encapsulating the API of the distributed cache Redis, and using Spring dynamic configuration to inject it into the application at load time, so as to adapt to third-party caches;
S400, second-level cache encapsulation: using a map key-value structure to encapsulate the local cache, partitioning the map structure within the encapsulation, and introducing an automatic data-clearing mechanism implemented with an LRU algorithm, which automatically clears data when the capacity is full or the data's cache time expires;
S500, visualization implementation: Vue is used to implement a front-end visualization interface component that presents the currently cached content in the form of a tree list.
2. The cache common component implementation method of claim 1, wherein the automatic clearing of data upon capacity fullness or expiration of data caching time is implemented using an LRU algorithm, comprising the steps of:
performing data caching in the container in the form of a queue;
sorting the cached data in the queue by call time, and placing the cached data whose call time is closest to the current time at the head of the queue;
when the capacity is full, automatically clearing cache data from the tail of the queue according to a preset proportion, wherein the proportion refers to the share of the whole queue occupied by the cache data to be cleared;
and directly deleting cached data whose cache time has expired.
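The queue behavior described in claim 2 can be sketched with a `LinkedHashMap` in access order, whose iteration runs from least to most recently used (so evicting from the iterator front corresponds to clearing the claim's queue tail). Capacity, ratio, and class names here are illustrative assumptions:

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

public class LruDemo {
    // accessOrder=true makes iteration order run from least to most recently used.
    private final LinkedHashMap<String, String> queue = new LinkedHashMap<>(16, 0.75f, true);
    private final int capacity;
    private final double evictRatio; // preset proportion of the queue cleared when full

    LruDemo(int capacity, double evictRatio) {
        this.capacity = capacity;
        this.evictRatio = evictRatio;
    }

    void put(String key, String value) {
        if (queue.size() >= capacity) {
            // Clear entries from the least-recently-used end by the preset proportion.
            int toEvict = Math.max(1, (int) (capacity * evictRatio));
            Iterator<Map.Entry<String, String>> it = queue.entrySet().iterator();
            for (int i = 0; i < toEvict && it.hasNext(); i++) {
                it.next();
                it.remove();
            }
        }
        queue.put(key, value);
    }

    String get(String key) { return queue.get(key); } // an access refreshes recency

    public static void main(String[] args) {
        LruDemo lru = new LruDemo(3, 0.34); // capacity 3, evict about a third when full
        lru.put("a", "1");
        lru.put("b", "2");
        lru.put("c", "3");
        lru.get("a");           // touch "a" so "b" becomes least recently used
        lru.put("d", "4");      // cache is full: "b" is evicted
        System.out.println(lru.get("b") + " " + lru.get("a") + " " + lru.get("d"));
    }
}
```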
3. A cache common component obtained by the cache common component implementation method according to any one of claims 1 to 2, the cache common component comprising:
the first-level cache serves as the first cache layer, is realized by encapsulating the API (application programming interface) of the distributed cache Redis, and uses Spring dynamic configuration to be injected into the application at load time, so that the first-level cache adapts to third-party caches;
the second-level cache encapsulates the local cache through a map key-value structure, and the map structure is provided with partitions; the second-level cache has an automatic data-clearing function, implemented with an LRU algorithm, for automatically clearing data when the capacity is full or the data's cache time expires; the key in the map structure is the unique identifier of the cached item, and its value is a cached object structure comprising the specific data of the current cached object and its expiration time, used to judge on each access whether the current cached object has expired;
the visualization component is realized using Vue and displays the currently cached content in the form of a tree list; its first-level menus are the distributed cache and the local cache respectively, its second-level menu is the unique key of the current cache, and clicking on a unique key displays the specific stored value and its validity period; the menu of the visualization component provides query and clearing functions, used to search and clear Redis and the local cache according to the input query content.
4. A method for installing a cache common component according to claim 3, comprising the steps of:
introducing a component dependency package in the pom file;
enabling aspect support via @EnableAspectJAutoProxy in the startup class;
and adding configuration notes at the entrance of the interface needing configuration.
5. A method of operating a cache common component according to claim 3, comprising the steps of:
receiving an external request reaching an application layer;
analyzing the cache annotation of the specific response service called by the current external request, acquiring the query parameters received by the current response service together with the unique key parameter, the cache expiration parameter, and the first-level/second-level cache enable parameters defined in the function annotation, and packaging them into a cache object;
and searching whether the unique key of the current request exists in the first-level cache:
if the unique key is found in the first-level cache and the current cache object is in the valid period, returning the current cache object;
if the unique key is not found in the first-level cache, searching in a second-level cache:
if the unique key is found in the secondary cache and the current cache object is in the valid period, returning the current cache object;
if the unique key is not found in the second-level cache, or the unique key is found in the first-level/second-level cache but the current cache object is outside its validity period, executing the database query processing logic of the response service; after the database query logic of the response service obtains the data, first storing the data into the first-level or second-level cache, and finally returning the data to the requesting client.
6. An electronic device, comprising: at least one processor and a memory; wherein the memory stores computer-executable instructions; the computer-executable instructions stored in the memory are executed by the at least one processor, so that the at least one processor executes the cache common component implementation method according to claim 1 or 2, or executes the installation method according to claim 4, or executes the operation method according to claim 5.
7. A computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, controls an apparatus on which the storage medium is located to perform the cache common component implementation method according to claim 1 or 2, or to perform the installation method according to claim 4, or to perform the operation method according to claim 5.
CN202110953485.6A 2021-08-19 2021-08-19 Cache public assembly and implementation, installation and operation methods thereof Active CN113722363B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110953485.6A CN113722363B (en) 2021-08-19 2021-08-19 Cache public assembly and implementation, installation and operation methods thereof

Publications (2)

Publication Number Publication Date
CN113722363A true CN113722363A (en) 2021-11-30
CN113722363B CN113722363B (en) 2023-09-12

Family

ID=78676810

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110953485.6A Active CN113722363B (en) 2021-08-19 2021-08-19 Cache public assembly and implementation, installation and operation methods thereof

Country Status (1)

Country Link
CN (1) CN113722363B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117851456A (en) * 2024-01-05 2024-04-09 迪爱斯信息技术股份有限公司 Method, system and server for sharing data in cluster

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080301135A1 (en) * 2007-05-29 2008-12-04 Bea Systems, Inc. Event processing query language using pattern matching
CN108540556A (en) * 2018-04-13 2018-09-14 南京新贝金服科技有限公司 A kind of fining Session clusters shared system and method based on cache
CN110413543A (en) * 2019-06-17 2019-11-05 中国科学院信息工程研究所 A kind of API gateway guarantee service high availability method and system based on fusing and L2 cache
CN111596922A (en) * 2020-05-15 2020-08-28 山东汇贸电子口岸有限公司 Method for realizing custom cache annotation based on redis
CN112115074A (en) * 2020-09-02 2020-12-22 紫光云(南京)数字技术有限公司 Method for realizing data resident memory by using automatic loading mechanism
CN112507067A (en) * 2020-11-30 2021-03-16 厦门海西医药交易中心有限公司 Cache plug-in annotating device and annotation method

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
HONGYU.G: "Implementing a local cache in Java, with cache expiry deletion and LRU cache eviction", 《HTTPS://BLOG.CSDN.NET/KL_DREAMING/ARTICLE/DETAILS/108458610》, pages 1 - 4 *
KUN MA ET AL.: "Segment access-aware dynamic semantic cache in cloud computing environment", 《JOURNAL OF PARALLEL AND DISTRIBUTED COMPUTING》, pages 42 - 51 *
GUAN HAISHENG: "Design of data caching for fast data access", 《China Masters' Theses Full-text Database, Information Science and Technology》, pages 138 - 87 *
ZHENG SONG: "Design and implementation of a memory-based distributed column-oriented database cache management system", 《China Masters' Theses Full-text Database, Information Science and Technology》, pages 138 - 1090 *

Also Published As

Publication number Publication date
CN113722363B (en) 2023-09-12

Similar Documents

Publication Publication Date Title
US7165101B2 (en) Transparent optimization of network traffic in distributed systems
US8667471B2 (en) Method and system for customizing profiling sessions
US7831771B2 (en) System and method for managing cachable entities
US8336033B2 (en) Method and system for generating a hierarchical tree representing stack traces
US8601469B2 (en) Method and system for customizing allocation statistics
US7904493B2 (en) Method and system for object age detection in garbage collection heaps
US8020149B2 (en) System and method for mitigating repeated crashes of an application resulting from supplemental code
US8280908B2 (en) Merging file system directories
US8316120B2 (en) Applicability detection using third party target state
US8156507B2 (en) User mode file system serialization and reliability
US9329969B2 (en) Method and system of associating a runtime event with a component
WO2006128062A2 (en) Database caching of queries and stored procedures using database provided facilities for dependency analysis and detected database updates for invalidation
CN106354851A (en) Data-caching method and device
CN110109958A (en) Method for caching and processing, device, equipment and computer readable storage medium
CN110096334A (en) Method for caching and processing, device, equipment and computer readable storage medium
CN113722363A (en) Cache public component and implementation, installation and operation method thereof
US8272001B2 (en) Management of resources based on association properties of association objects
US8312062B1 (en) Automatic resource leak detection
CN109165078A (en) A kind of virtual distributed server and its access method
CN111240728A (en) Application program updating method, device, equipment and storage medium
US7613710B2 (en) Suspending a result set and continuing from a suspended result set
CN113031964B (en) Big data application management method, device, equipment and storage medium
US20240160412A1 (en) Non-intrusive build time injection tool for accelerating launching of cloud applications
US11714662B2 (en) Technique for reporting nested linking among applications in mainframe computing environment
CN117150984A (en) Cache checking method, checking system and cache checker

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant