CN107977165B - Data cache optimization method and device and computer equipment - Google Patents


Info

Publication number
CN107977165B
CN107977165B (application CN201711174686.6A)
Authority
CN
China
Prior art keywords
cache
data
level
caching
service
Prior art date
Legal status
Active
Application number
CN201711174686.6A
Other languages
Chinese (zh)
Other versions
CN107977165A (en)
Inventor
纪文龙
Current Assignee
Yonyou Fintech Information Technology Co ltd
Original Assignee
Yonyou Fintech Information Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Yonyou Fintech Information Technology Co ltd filed Critical Yonyou Fintech Information Technology Co ltd
Priority to CN201711174686.6A priority Critical patent/CN107977165B/en
Publication of CN107977165A publication Critical patent/CN107977165A/en
Application granted granted Critical
Publication of CN107977165B publication Critical patent/CN107977165B/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0629Configuration or reconfiguration of storage systems
    • G06F3/0655Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F3/0656Data buffering arrangements

Abstract

The invention provides a data cache optimization method, a data cache optimization device, and computer equipment. The data cache optimization method comprises the following steps: integrating a first-level cache and a second-level cache to obtain a cache service, and publishing the cache service externally as an interface; creating a cache configuration file for the cache service; and configuring the cache data according to the cache configuration file, and caching the configured cache data. According to the technical scheme of the invention, data caching for a variety of scenarios can be realized simply by calling a unified API (Application Programming Interface) and setting the cached data according to the cache configuration file.

Description

Data cache optimization method and device and computer equipment
Technical Field
The invention relates to the technical field of computers, in particular to a data cache optimization method, a data cache optimization device, computer equipment and a computer readable storage medium.
Background
At present, cache operations mainly take two forms:
first, using a map object built into the program, such as the HashMap of the JDK (Java Development Kit);
second, using cache software such as Redis or Memcached.
The first approach easily causes memory overflow when the amount of data to be cached is large or large objects are stored; the cache software of the second approach is usually effective for only one scenario, and each product emphasizes particular aspects such as data volume or update-synchronization strategy.
Therefore, how to provide a caching mechanism that comprehensively considers the cache requirements of different scenarios and is simple, flexible, and efficient to configure has become an urgent technical problem to be solved.
Disclosure of Invention
The present invention is directed to solving at least one of the problems of the prior art or the related art.
To this end, an aspect of the present invention is to provide a data cache optimization method.
Another aspect of the present invention is to provide a data cache optimization apparatus.
Yet another aspect of the invention is directed to a computer device.
Yet another aspect of the present invention is to provide a computer-readable storage medium.
In view of the above, an aspect of the present invention provides a data cache optimization method, including: integrating a first-level cache and a second-level cache to obtain a cache service, and publishing the cache service externally as an interface; creating a cache configuration file for the cache service; and configuring the cache data according to the cache configuration file, and caching the configured cache data.
According to the data cache optimization method, the two caches are integrated in code; preferably, the first-level cache can be set as the default cache. A unified API (Application Programming Interface) is abstracted over the two different caches, and, according to the respective characteristics of the two caches, multiple scenarios are supported, such as OLAP (On-Line Analytical Processing) and OLTP (On-Line Transaction Processing) scenarios, providing strong support for caching data, improving query efficiency, cluster login, and so on. Meanwhile, a cache configuration file is created; the created configuration file supports multiple forms such as XML, TXT, and interface-oriented programming, which is very flexible. Therefore, an application can realize data caching for a variety of scenarios simply by calling the unified API and setting the cached data according to the cache configuration file.
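As a minimal sketch of the unified API described above, the following uses plain Maps as stand-ins for the two integrated caches; the interface name, class names, and the region-prefix routing rule are illustrative assumptions, not the patent's actual code.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical unified cache API abstracted over two caches.
interface CacheService {
    void put(String region, String key, Object value);
    Object get(String region, String key);
}

class TwoLevelCacheService implements CacheService {
    // First-level (default, in-heap) cache: stands in for ehcache.
    private final Map<String, Object> firstLevel = new ConcurrentHashMap<>();
    // Second-level cache: stands in for a Redis connection.
    private final Map<String, Object> secondLevel = new ConcurrentHashMap<>();

    // Regions flagged as big-data/large-object (OLAP-style) go to the first
    // level; everything else (OLTP-style data) goes to the second level.
    private boolean isBigData(String region) {
        return region.startsWith("olap.");
    }

    @Override
    public void put(String region, String key, Object value) {
        (isBigData(region) ? firstLevel : secondLevel).put(region + ":" + key, value);
    }

    @Override
    public Object get(String region, String key) {
        return (isBigData(region) ? firstLevel : secondLevel).get(region + ":" + key);
    }
}
```

The calling application only ever sees the `CacheService` interface; which underlying cache actually holds the data is decided by configuration, not by the caller.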
In addition, the data cache optimization method according to the present invention may further have the following additional technical features:
in the foregoing technical solution, preferably, the data cache optimization method further includes: displaying a connection attribute configuration interface of the second-level cache, and receiving an input setting command of the connection attribute; and determining the connection attribute of the second-level cache according to the setting command so as to store the cache data into or take the cache data out of the corresponding second-level cache.
In this solution, the first-level cache is used to cache large data volumes or large objects; it is generally the default cache and needs no connection-attribute configuration. For cache data in the OLTP scenario, the user can set the connection attributes of the second-level cache (e.g., Redis), such as the Redis host, port, password, maximum number of connections, maximum number of idle connections, minimum number of idle connections, and maximum wait in milliseconds, through a configuration interface provided by the system, so that the system can automatically obtain the specifics of the Redis cache and thereby automatically store and read cache data. This configuration mode is simple and flexible.
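A minimal sketch of the connection attributes listed above, held as a plain settings object; the field names and default values mirror common Redis connection-pool settings and are illustrative assumptions, not the patent's configuration schema.

```java
// Hypothetical holder for the second-level-cache connection attributes.
class RedisConnectionConfig {
    String host = "localhost";
    int port = 6379;
    String password = null;  // null means no AUTH required
    int maxTotal = 8;        // maximum number of connections in the pool
    int maxIdle = 8;         // maximum number of idle connections
    int minIdle = 0;         // minimum number of idle connections
    long maxWaitMillis = -1; // max wait for a free connection; -1 blocks

    // A connection URI the cache service could derive from the settings.
    String toUri() {
        return "redis://" + host + ":" + port;
    }
}
```

The configuration interface of FIG. 6 would populate such an object, after which the cache service connects without further user involvement.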
In any of the above technical solutions, preferably, the first-level cache is used for caching large object data; the second level cache is used for caching at least any one or a combination of the following: intermediate calculation results, application configuration, Session data.
In this solution, the first-level cache is used to cache large object data (i.e., the OLAP scenario); preferably, the first-level cache can be set as the default cache. The second-level cache is used to cache common information such as application configuration; intermediate calculation results, since in some application scenarios temporary results need to be kept, and placing them in the second-level cache can improve calculation efficiency; and Session data, such as user login information, which, placed in the second-level cache, enables user cluster login and quick retrieval of logged-in-user information at any time. This technical solution meets the cache requirements of different application scenarios: only different configurations are needed for different scenarios, which is very convenient, and it provides strong support for caching data, improving query efficiency, cluster login, and so on.
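The cluster-login idea above can be sketched as follows: session data lives in the shared second-level cache, so any application node can look a user up. The shared Map stands in for Redis, and the class and key names are illustrative assumptions.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical session store backed by the shared second-level cache.
class SessionStore {
    private final Map<String, String> secondLevel; // shared across all nodes

    SessionStore(Map<String, String> secondLevel) {
        this.secondLevel = secondLevel;
    }

    void login(String sessionId, String userInfo) {
        secondLevel.put("session:" + sessionId, userInfo);
    }

    String currentUser(String sessionId) {
        return secondLevel.get("session:" + sessionId);
    }
}
```

Because both nodes read the same backing cache, a login performed on one node is immediately visible on every other node, which is what makes cluster login work.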
In any of the above technical solutions, preferably, the first-level cache is ehcache; the secondary cache is redis.
In this solution, two caches (ehcache and redis) are integrated: ehcache is used to cache large objects (i.e., the OLAP scenario), and redis is used to cache configurations, intermediate calculation results, Session-level result sets, and the like (i.e., the OLTP scenario). This meets the caching requirements of different scenarios, and the caching mechanism can be introduced with only simple configuration.
In any of the above technical solutions, preferably, the cache configuration file includes: the name of the cached data, the idle time before expiry, the eviction policy, whether the cache is a big-data cache, whether cluster synchronization is enabled, and the maximum number of entries the cache service holds in memory.
In this solution, the cache configuration file specifies: the name of the cached data, generally the class name of the cached object; the idle time before a cache entry expires, beyond which an unused entry can be reclaimed; the cache eviction policy, with FIFO, LRU, and LFU supported (selectable) and LFU generally the default; whether it is a big-data cache and whether cluster synchronization is enabled; and the maximum number of entries that can be held in the in-memory cache. Thus the cache service can be configured simply for different cached objects, realizing data caching.
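Since the text names XML as one supported form of the configuration file, a hypothetical XML rendering of the fields above might look as follows; the element and attribute names are illustrative assumptions, not a schema defined by the patent.

```xml
<cacheConfig>
  <!-- One entry per cached object; name is generally the object's class name. -->
  <cache name="com.example.UserSession"
         timeToIdleSeconds="1800"
         evictionPolicy="LFU"
         bigDataCache="false"
         clusterSync="true"
         maxEntriesInMemory="10000"/>
  <!-- A large-object (OLAP-style) region kept in the first-level cache. -->
  <cache name="com.example.AnalysisResult"
         timeToIdleSeconds="3600"
         evictionPolicy="LRU"
         bigDataCache="true"
         clusterSync="false"
         maxEntriesInMemory="100"/>
</cacheConfig>
```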
In another aspect of the present invention, a data caching optimization apparatus is provided, including: the cache service unit is used for integrating the first-level cache and the second-level cache to obtain cache service and externally releasing the cache service in an interface mode; the first configuration unit is used for creating a cache configuration file for the cache service; and the processing unit is used for configuring the cache data according to the cache configuration file and caching the configured cache data.
According to the data cache optimization device, the two caches are integrated in code; preferably, the first-level cache can be set as the default cache. A unified API (Application Programming Interface) is abstracted over the two different caches, and, according to the respective characteristics of the two caches, multiple scenarios are supported, such as OLAP (On-Line Analytical Processing) and OLTP (On-Line Transaction Processing) scenarios, providing strong support for caching data, improving query efficiency, cluster login, and so on. Meanwhile, a cache configuration file is created; the created configuration file supports multiple forms such as XML, TXT, and interface-oriented programming, which is very flexible. Therefore, an application can realize data caching for a variety of scenarios simply by calling the unified API and setting the cached data according to the cache configuration file.
In the foregoing technical solution, preferably, the data cache optimization device further includes: the second configuration unit is used for displaying a connection attribute configuration interface of the second-level cache and receiving an input setting command of the connection attribute; and the processing unit is also used for determining the connection attribute of the second-level cache according to the setting command so as to store the cache data into or take the cache data out of the corresponding second-level cache.
In this solution, the first-level cache is used to cache large data volumes or large objects; it is generally the default cache and needs no connection-attribute configuration. For cache data in the OLTP scenario, the user can set the connection attributes of the second-level cache (e.g., Redis), such as the Redis host, port, password, maximum number of connections, maximum number of idle connections, minimum number of idle connections, and maximum wait in milliseconds, through a configuration interface provided by the system, so that the system can automatically obtain the specifics of the Redis cache and thereby automatically store and read cache data. This configuration mode is simple and flexible.
In any of the above technical solutions, preferably, the first-level cache is used for caching large object data; the second level cache is used for caching at least any one or a combination of the following: intermediate calculation results, application configuration, Session data.
In this solution, the first-level cache is used to cache large object data (i.e., the OLAP scenario); preferably, the first-level cache can be set as the default cache. The second-level cache is used to cache common information such as application configuration; intermediate calculation results, since in some application scenarios temporary results need to be kept, and placing them in the second-level cache can improve calculation efficiency; and Session data, such as user login information, which, placed in the second-level cache, enables user cluster login and quick retrieval of logged-in-user information at any time. This technical solution meets the cache requirements of different application scenarios: only different configurations are needed for different scenarios, which is very convenient, and it provides strong support for caching data, improving query efficiency, cluster login, and so on.
In any of the above technical solutions, preferably, the first-level cache is ehcache; the secondary cache is redis.
In this solution, two caches (ehcache and redis) are integrated: ehcache is used to cache large objects (i.e., the OLAP scenario), and redis is used to cache configurations, intermediate calculation results, Session-level result sets, and the like (i.e., the OLTP scenario). This meets the caching requirements of different scenarios, and the caching mechanism can be introduced with only simple configuration.
In any of the above technical solutions, preferably, the cache configuration file includes: the name of the cached data, the idle time before expiry, the eviction policy, whether the cache is a big-data cache, whether cluster synchronization is enabled, and the maximum number of entries the cache service holds in memory.
In this solution, the cache configuration file specifies: the name of the cached data, generally the class name of the cached object; the idle time before a cache entry expires, beyond which an unused entry can be reclaimed; the cache eviction policy, with FIFO, LRU, and LFU supported (selectable) and LFU generally the default; whether it is a big-data cache and whether cluster synchronization is enabled; and the maximum number of entries that can be held in the in-memory cache. Thus the cache service can be configured simply for different cached objects, realizing data caching.
In a further aspect of the invention, a computer device is proposed, which comprises a memory, a processor and a computer program stored on the memory and executable on the processor, the processor being adapted to perform the steps of the method according to any of the above-mentioned aspects.
According to the computer device of the present invention, the processor included therein is configured to execute the steps of the data cache optimization method in any of the above technical solutions, so that the computer device can achieve all the beneficial effects of the data cache optimization method, and details are not described herein again.
In a further aspect of the invention, a computer-readable storage medium is proposed, on which a computer program is stored; when executed by a processor, the computer program carries out the steps of the method according to any of the above-mentioned technical solutions.
According to the computer-readable storage medium of the present invention, when being executed by a processor, the computer program stored thereon implements the steps of the data cache optimization method in any of the above technical solutions, so that the computer-readable storage medium can implement all the beneficial effects of the data cache optimization method, and details are not described herein again.
Additional aspects and advantages of the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.
Drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 shows a flow diagram of a data cache optimization method according to one embodiment of the invention;
FIG. 2 is a flow diagram illustrating a method of data cache optimization according to another embodiment of the invention;
FIG. 3 shows a schematic block diagram of a data cache optimization apparatus according to an embodiment of the present invention;
FIG. 4 shows a schematic block diagram of a data cache optimization apparatus according to another embodiment of the present invention;
FIG. 5 is a schematic diagram of a data cache optimization apparatus according to an embodiment of the present invention;
FIG. 6 illustrates a schematic diagram of a connection property configuration interface for Redis, in accordance with a specific embodiment of the present invention;
FIG. 7 shows a schematic diagram of a computer device according to an embodiment of the invention.
Detailed Description
In order that the above objects, features and advantages of the present invention can be more clearly understood, a more particular description of the invention will be rendered by reference to the appended drawings. It should be noted that the embodiments and features of the embodiments of the present application may be combined with each other without conflict.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, however, the present invention may be practiced in other ways than those specifically described herein, and therefore the scope of the present invention is not limited by the specific embodiments disclosed below.
Fig. 1 is a schematic flow chart of a data cache optimization method according to an embodiment of the present invention. The data cache optimization method comprises the following steps:
step 102, integrating the first-level cache and the second-level cache to obtain cache service, and externally issuing the cache service in an interface mode;
step 104, creating a cache configuration file for the cache service;
step 106, configuring the cache data according to the cache configuration file, and caching the configured cache data.
According to the data cache optimization method provided by the invention, the two caches are integrated in code; preferably, the first-level cache can be set as the default cache. A unified API (Application Programming Interface) is abstracted over the two different caches, and, according to the respective characteristics of the two caches, multiple scenarios are supported, such as OLAP (On-Line Analytical Processing) and OLTP (On-Line Transaction Processing) scenarios, providing strong support for caching data, improving query efficiency, cluster login, and so on. Meanwhile, a cache configuration file is created; the created configuration file supports multiple forms such as XML, TXT, and interface-oriented programming, which is very flexible. Therefore, an application can realize data caching for a variety of scenarios simply by calling the unified API and setting the cached data according to the cache configuration file.
Fig. 2 is a schematic flow chart of a data cache optimization method according to another embodiment of the present invention. The data cache optimization method comprises the following steps:
step 202, integrating the first-level cache and the second-level cache to obtain cache service, and externally issuing the cache service in an interface mode;
step 204, creating a cache configuration file for the cache service to configure cache data;
step 206, displaying a connection attribute configuration interface of the second-level cache, and receiving an input setting command of the connection attribute;
step 208, determining the connection attribute of the second-level cache according to the setting command;
and step 210, configuring the cache data according to the cache configuration file, and storing the configured cache data into a corresponding second-level cache or taking the configured cache data out of the second-level cache.
In this embodiment, the first-level cache is used to cache large data volumes or large objects; it is generally the default cache and needs no connection-attribute configuration. For cache data in the OLTP scenario, the user can set the connection attributes of the second-level cache (e.g., Redis), such as the Redis host, port, password, maximum number of connections, maximum number of idle connections, minimum number of idle connections, and maximum wait in milliseconds, through a configuration interface provided by the system, so that the system can automatically obtain the specifics of the Redis cache and thereby automatically store and read cache data. This configuration mode is simple and flexible.
In any of the above embodiments, preferably, the first-level cache is configured to cache large object data; the second level cache is used for caching at least any one or a combination of the following: intermediate calculation results, application configuration, Session data.
In this embodiment, the first-level cache is used to cache large object data (i.e., the OLAP scenario); preferably, the first-level cache can be set as the default cache. The second-level cache is used to cache common information such as application configuration; intermediate calculation results, since in some application scenarios temporary results need to be kept, and placing them in the second-level cache can improve calculation efficiency; and Session data, such as user login information, which, placed in the second-level cache, enables user cluster login and quick retrieval of logged-in-user information at any time. This meets the cache requirements of different application scenarios: only different configurations are needed for different scenarios, which is very convenient, and it provides strong support for caching data, improving query efficiency, cluster login, and so on.
In any of the above embodiments, preferably, the first-level cache is ehcache; the secondary cache is redis.
In this embodiment, two caches (ehcache and redis) are integrated: ehcache is used to cache large objects (i.e., the OLAP scenario), and redis is used to cache configurations, intermediate calculation results, Session-level result sets, and the like (i.e., the OLTP scenario). This meets the caching requirements of different scenarios, and the caching mechanism can be introduced with only simple configuration.
In any of the above embodiments, preferably, the cache configuration file includes: the name of the cached data, the idle time before expiry, the eviction policy, whether the cache is a big-data cache, whether cluster synchronization is enabled, and the maximum number of entries the cache service holds in memory.
In this embodiment, the cache configuration file specifies: the name of the cached data, generally the class name of the cached object; the idle time before a cache entry expires, beyond which an unused entry can be reclaimed; the cache eviction policy, with FIFO, LRU, and LFU supported (selectable) and LFU generally the default; whether it is a big-data cache and whether cluster synchronization is enabled; and the maximum number of entries that can be held in the in-memory cache. Thus the cache service can be configured simply for different cached objects, realizing data caching.
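One of the selectable eviction policies above, LRU, can be sketched with a size-bounded region built on LinkedHashMap's access-order mode. This is an illustrative stand-in, not the patent's implementation, and it covers only LRU; the LFU default the text describes would need a frequency counter instead.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical in-memory cache region with an LRU eviction policy and a
// maxEntriesInMemory bound, as named in the configuration file above.
class LruRegion<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntriesInMemory;

    LruRegion(int maxEntriesInMemory) {
        super(16, 0.75f, true); // accessOrder = true gives LRU iteration order
        this.maxEntriesInMemory = maxEntriesInMemory;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Called after each put; returning true evicts the least-recently-used
        // entry, keeping the region within its configured bound.
        return size() > maxEntriesInMemory;
    }
}
```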
FIG. 3 is a schematic block diagram of a data cache optimization apparatus according to an embodiment of the present invention. The data cache optimization apparatus 300 includes:
the cache service unit 302 is configured to integrate the first-level cache and the second-level cache to obtain a cache service, and issue the cache service to the outside in an interface manner;
a first configuration unit 304, configured to create a cache configuration file for the cache service;
the processing unit 306 is configured to configure the cache data according to the cache configuration file, and cache the configured cache data.
The data cache optimization device 300 provided by the invention integrates the two caches in code; preferably, the first-level cache can be set as the default cache. A unified API (Application Programming Interface) is abstracted over the two different caches, and, according to the respective characteristics of the two caches, multiple scenarios are supported, such as OLAP (On-Line Analytical Processing) and OLTP (On-Line Transaction Processing) scenarios, providing strong support for caching data, improving query efficiency, cluster login, and so on. Meanwhile, a cache configuration file is created; the created configuration file supports multiple forms such as XML, TXT, and interface-oriented programming, which is very flexible. Therefore, an application can realize data caching for a variety of scenarios simply by calling the unified API and setting the cached data according to the cache configuration file.
Fig. 4 is a schematic block diagram of a data cache optimization device according to another embodiment of the present invention. The data cache optimization apparatus 400 includes:
a cache service unit 402, configured to integrate the first-level cache and the second-level cache to obtain a cache service, and issue the cache service to the outside in an interface manner;
a first configuration unit 404, configured to create a cache configuration file for the cache service;
the processing unit 406 is configured to configure the cache data according to the cache configuration file, and cache the configured cache data;
the data cache optimization apparatus 400 further includes:
a second configuration unit 408, configured to display a connection attribute configuration interface of the second-level cache, and receive an input setting command of the connection attribute;
the processing unit 406 is further configured to determine a connection attribute of the second level cache according to the setting command, so as to store or retrieve the cache data into or from the corresponding second level cache.
In this embodiment, the first-level cache is used to cache large data volumes or large objects; it is generally the default cache and needs no connection-attribute configuration. For cache data in the OLTP scenario, the user can set the connection attributes of the second-level cache (e.g., Redis), such as the Redis host, port, password, maximum number of connections, maximum number of idle connections, minimum number of idle connections, and maximum wait in milliseconds, through a configuration interface provided by the system, so that the system can automatically obtain the specifics of the Redis cache and thereby automatically store and read cache data. This configuration mode is simple and flexible.
In any of the above embodiments, preferably, the first-level cache is configured to cache large object data; the second level cache is used for caching at least any one or a combination of the following: intermediate calculation results, application configuration, Session data.
In this embodiment, the first-level cache is used to cache large object data (i.e., the OLAP scenario); preferably, the first-level cache can be set as the default cache. The second-level cache is used to cache common information such as application configuration; intermediate calculation results, since in some application scenarios temporary results need to be kept, and placing them in the second-level cache can improve calculation efficiency; and Session data, such as user login information, which, placed in the second-level cache, enables user cluster login and quick retrieval of logged-in-user information at any time. This meets the cache requirements of different application scenarios: only different configurations are needed for different scenarios, which is very convenient, and it provides strong support for caching data, improving query efficiency, cluster login, and so on.
In any of the above embodiments, preferably, the first-level cache is ehcache; the secondary cache is redis.
In this embodiment, two caches (ehcache and redis) are integrated: ehcache is used to cache large objects (i.e., the OLAP scenario), and redis is used to cache configurations, intermediate calculation results, Session-level result sets, and the like (i.e., the OLTP scenario). This meets the caching requirements of different scenarios, and the caching mechanism can be introduced with only simple configuration.
In any of the above embodiments, preferably, the cache configuration file includes: the name of the cache data, the idle time before invalidation, the unloading (eviction) strategy, whether the cache is a big-data cache, whether cluster synchronization is enabled, and the maximum number of entries the cache service keeps in memory.
In this embodiment, the cache configuration file includes: the name of the cache data, generally the class name of the cached object; the idle time before a cache entry expires, beyond which an unused entry can be reclaimed; the unloading (eviction) strategy, supporting FIFO, LRU, and LFU (optional), generally defaulting to LFU; whether it is a big-data cache and whether cluster synchronization is enabled; and the maximum number of entries that can be stored in the in-memory cache. The cache service can thus be configured simply for different cache objects, realizing the caching of data.
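As a minimal sketch of one of the unloading strategies named above, the snippet below implements an LRU policy with `java.util.LinkedHashMap` in access-order mode; the class name `LruCache` and the capacity of 3 are illustrative only and not part of the patent.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: an LRU unloading strategy via LinkedHashMap's access-order mode.
// The eldest (least recently used) entry is evicted once capacity is exceeded.
class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntriesInMemory;

    LruCache(int maxEntriesInMemory) {
        super(16, 0.75f, true); // true = iterate in access order
        this.maxEntriesInMemory = maxEntriesInMemory;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxEntriesInMemory;
    }

    public static void main(String[] args) {
        LruCache<String, String> cache = new LruCache<>(3);
        cache.put("a", "1");
        cache.put("b", "2");
        cache.put("c", "3");
        cache.get("a");      // touch "a", so "b" becomes least recently used
        cache.put("d", "4"); // exceeds capacity: evicts "b"
        System.out.println(cache.containsKey("b")); // prints false
        System.out.println(cache.containsKey("a")); // prints true
    }
}
```

FIFO would drop the access-order flag (insertion order), while LFU needs a frequency counter per entry rather than this map's ordering alone.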
This specific embodiment provides a caching mechanism suitable for different scenarios. By integrating two caches (ehcache and redis), ehcache is used to cache large objects (the OLAP scenario), and redis is used to cache configurations, intermediate calculation results, Session-level result sets, and the like (the OLTP scenario). The specific design principle is shown in fig. 5, wherein:
POJO: the object type is stored in the DefaultCache, and any object realizing serialization, such as analyzed object data in a data analysis scene, can be cached through code packaging.
Intermediate calculation results: stored in the Redis cache. In some application scenarios, temporary results need to be held briefly; placing them in the cache improves calculation efficiency.
File/configuration: storing the information into a Redis cache, and storing common information such as application configuration and the like into a cache.
Session level result set: and (4) related information of the Session level is stored in the Rediscache, such as user login information, so that cluster login of the user is realized, and information of a login user can be quickly acquired at any time.
Caching service: the cache service is externally issued in an interface mode, the IRedisCache is issued as the cache service, the ICache is used as the cache interface, and the DefaultCache (Ehcache) and the Redis Cache are used as the specific implementation of the cache.
The cache creation mode supports xml, which configures a data cache and includes, for example: the name of the cache; the idle time before the cache expires, beyond which an unused entry can be reclaimed; the maximum number of entries in memory, optional with a default of 5000; the unloading (eviction) strategy, supporting FIFO, LRU, and LFU, with LRU as the default; whether it is a big-data cache; and whether cluster synchronization is enabled, defaulting to false.
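A sketch of such an xml cache definition, using Ehcache-2.x-style attribute names as an assumption (the patent does not give the exact schema, and `analysisResultCache` is a hypothetical cache name):

```xml
<!-- Hypothetical cache definition; attribute names follow Ehcache 2.x conventions -->
<cache name="analysisResultCache"
       maxElementsInMemory="5000"
       timeToIdleSeconds="600"
       memoryStoreEvictionPolicy="LRU"/>
```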
Specifically, as shown in fig. 6, the connection attribute configuration interface lets the system acquire the specific Redis cache information set by the user, such as the Redis host, port, password, maximum number of connections, maximum number of idle connections, minimum number of idle connections, and maximum wait time in milliseconds, thereby realizing storage and reading of cached data. This configuration mode is simple and flexible.
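The connection attributes listed above could be captured in a simple properties form and loaded as sketched below; the key names (`redis.host`, `redis.maxWaitMillis`, etc.) and the `RedisConnAttrs` holder are assumptions for illustration, not the patent's actual configuration keys.

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.Properties;

// Sketch: loading the second-level cache's connection attributes
// from a properties-style configuration entered through the UI.
class RedisConnAttrs {
    String host;
    int port;
    String password;
    int maxTotal;       // maximum number of connections
    int maxIdle;        // maximum number of idle connections
    int minIdle;        // minimum number of idle connections
    long maxWaitMillis; // maximum wait time in milliseconds

    static RedisConnAttrs load(String text) throws IOException {
        Properties p = new Properties();
        p.load(new StringReader(text));
        RedisConnAttrs a = new RedisConnAttrs();
        a.host = p.getProperty("redis.host", "localhost");
        a.port = Integer.parseInt(p.getProperty("redis.port", "6379"));
        a.password = p.getProperty("redis.password", "");
        a.maxTotal = Integer.parseInt(p.getProperty("redis.maxTotal", "8"));
        a.maxIdle = Integer.parseInt(p.getProperty("redis.maxIdle", "8"));
        a.minIdle = Integer.parseInt(p.getProperty("redis.minIdle", "0"));
        a.maxWaitMillis = Long.parseLong(p.getProperty("redis.maxWaitMillis", "1000"));
        return a;
    }

    public static void main(String[] args) throws IOException {
        RedisConnAttrs a = load("redis.host=10.0.0.5\nredis.port=6380\n");
        System.out.println(a.host + ":" + a.port); // prints 10.0.0.5:6380
    }
}
```

Once loaded, these attributes would typically be handed to a Redis connection pool so the system can store and fetch cache entries automatically, as the embodiment describes.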
In this embodiment, an efficient cache mechanism that comprehensively considers the caching requirements of different scenarios is provided, based on ehcache and redis. The two caches are integrated in code: ehcache caches large objects (the OLAP scenario), and redis caches configurations, intermediate calculation results, Session-level result sets, and the like (the OLTP scenario). A configuration interface is provided, so the caching mechanism can be introduced simply by configuring for each application scenario. On this basis, the mechanism can be introduced into products such as DI (data integration), high-level analysis, and BQ, providing strong support for caching data, improving query efficiency, cluster login, and the like.
FIG. 7 is a schematic diagram of a computer device according to one embodiment of the invention. The computer device 1 comprises a memory 12, a processor 14 and a computer program stored on the memory 12 and executable on the processor 14, wherein the processor is configured to perform the steps of the method according to any of the above embodiments.
In the computer device 1 provided by the present invention, the processor 14 is configured to execute the steps of the data cache optimization method in any of the above embodiments, so the computer device achieves all the beneficial effects of that method; details are not repeated here.
In a further aspect of the invention, a computer-readable storage medium is proposed, on which a computer program is stored, which computer program, when being executed by a processor, realizes the steps of the method according to any one of the preceding embodiments.
The computer program stored on the computer-readable storage medium provided by the present invention, when executed by a processor, implements the steps of the data cache optimization method in any of the above embodiments, so the storage medium achieves all the beneficial effects of that method; details are not repeated here.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A data cache optimization method is characterized by comprising the following steps:
integrating the first-level cache and the second-level cache to obtain cache service, and externally releasing the cache service in an interface mode;
creating a cache configuration file for the cache service;
configuring cache data according to the cache configuration file, and caching the configured cache data;
displaying a connection attribute configuration interface of the second-level cache, and receiving an input setting command of the connection attribute;
and determining the connection attribute of the second-level cache according to the setting command so as to store the cache data into or take the cache data out of the corresponding second-level cache.
2. The data cache optimization method of claim 1,
the first-level cache is used for caching large object data;
the second level cache is used for caching at least any one of the following or the combination of the following: intermediate calculation results, application configuration, Session data.
3. The data cache optimization method of claim 2,
the first-level cache is ehcache;
the secondary cache is redis.
4. The data cache optimization method of any one of claims 1 to 3,
the cache configuration file comprises: and configuring the name of the cache data, the idle time before invalidation, an unloading strategy, whether the cache is a big data cache, whether the cluster is synchronous or not, and the maximum amount of the cache in the memory of the cache service.
5. A data cache optimization apparatus, comprising:
the cache service unit is used for integrating the first-level cache and the second-level cache to obtain cache service and externally releasing the cache service in an interface mode;
the first configuration unit is used for creating a cache configuration file for the cache service;
the processing unit is used for configuring cache data according to the cache configuration file and caching the configured cache data;
the second configuration unit is used for displaying a connection attribute configuration interface of the second-level cache and receiving an input setting command of the connection attribute;
the processing unit is further configured to determine a connection attribute of the second level cache according to the setting command, so as to store or retrieve the cache data to or from the corresponding second level cache.
6. The data cache optimization device of claim 5,
the first-level cache is used for caching large object data;
the second level cache is used for caching at least any one of the following or the combination of the following: intermediate calculation results, application configuration, Session data.
7. The data cache optimization device of claim 6,
the first-level cache is ehcache;
the secondary cache is redis.
8. The data cache optimization device of any one of claims 5 to 7,
the cache configuration file comprises: and configuring the name of the cache data, the idle time before invalidation, an unloading strategy, whether the cache is a big data cache, whether the cluster is synchronous or not, and the maximum amount of the cache in the memory of the cache service.
9. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor is adapted to perform the steps of the method according to any of claims 1 to 4.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 4.
CN201711174686.6A 2017-11-22 2017-11-22 Data cache optimization method and device and computer equipment Active CN107977165B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711174686.6A CN107977165B (en) 2017-11-22 2017-11-22 Data cache optimization method and device and computer equipment

Publications (2)

Publication Number Publication Date
CN107977165A CN107977165A (en) 2018-05-01
CN107977165B true CN107977165B (en) 2021-01-08

Family

ID=62011065

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711174686.6A Active CN107977165B (en) 2017-11-22 2017-11-22 Data cache optimization method and device and computer equipment

Country Status (1)

Country Link
CN (1) CN107977165B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109271139A (en) * 2018-09-11 2019-01-25 北京北信源软件股份有限公司 A kind of method of standardization management and device based on caching middleware
CN110825705A (en) * 2019-11-22 2020-02-21 广东浪潮大数据研究有限公司 Data set caching method and related device
CN112948336B (en) * 2021-03-30 2023-01-03 联想凌拓科技有限公司 Data acceleration method, cache unit, electronic device and storage medium
CN113205666B (en) * 2021-05-06 2022-06-17 广东鹰视能效科技有限公司 Early warning method

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103077125A (en) * 2012-12-13 2013-05-01 北京锐安科技有限公司 Self-adaption self-organizing tower type caching method for efficiently utilizing storage space
CN104519088A (en) * 2013-09-27 2015-04-15 方正宽带网络服务股份有限公司 Buffer memory system realization method and buffer memory system
CN105049530A (en) * 2015-08-24 2015-11-11 用友网络科技股份有限公司 Adaption device and method for plurality of distributed cache systems
CN106021414A (en) * 2016-05-13 2016-10-12 中国建设银行股份有限公司 Method and system for accessing multilevel cache parameter information
CN106886371A (en) * 2017-02-15 2017-06-23 中国保险信息技术管理有限责任公司 caching data processing method and device
CN107102896A (en) * 2016-02-23 2017-08-29 阿里巴巴集团控股有限公司 A kind of operating method of multi-level buffer, device and electronic equipment

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9390010B2 (en) * 2012-12-14 2016-07-12 Intel Corporation Cache management

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP02 Change in the address of a patent holder

Address after: 100094 room 101-c18, 4th floor, building 3, yard 9, Yongfeng Road, Haidian District, Beijing

Patentee after: YONYOU FINTECH INFORMATION TECHNOLOGY Co.,Ltd.

Address before: 100094 Room 101, building 8, yard 68, Beiqing Road, Haidian District, Beijing

Patentee before: YONYOU FINTECH INFORMATION TECHNOLOGY Co.,Ltd.