CN107977165A - Data buffer storage optimization method, device and computer equipment - Google Patents
- Publication number: CN107977165A (application CN201711174686.6A)
- Authority
- CN
- China
- Prior art keywords
- data
- cache
- cached
- buffer storage
- caching
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0629—Configuration or reconfiguration of storage systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0655—Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
- G06F3/0656—Data buffering arrangements
Abstract
The present invention provides a data cache optimization method, device, and computer equipment. The data cache optimization method includes: integrating a level-1 cache and a level-2 cache to obtain a cache service, and publishing the cache service externally through an interface; creating a cache configuration file for the cache service; and configuring the data to be cached according to the cache configuration file, and caching the configured data. With the technical solution of the present invention, an application only needs to call a unified API interface and supply a cache configuration file for the data to be cached in order to implement data caching for multiple scenarios simply.
Description
Technical field
The present invention relates to the field of computer technology, and in particular to a data cache optimization method, a data cache optimization device, a computer device, and a computer-readable storage medium.
Background art
At present, caching is mainly performed in two ways:
First, using the map objects provided by the programming environment, such as the HashMap of the JDK (Java Development Kit);
Second, using cache software, such as Redis or Memcached.
With the first way, when the amount of data to be cached is large or large objects are stored, memory overflow easily occurs; the cache software of the second way is often effective only for a single kind of scenario — some emphasize data volume, while others emphasize update synchronization policies.
Therefore, how to provide a caching mechanism that comprehensively considers the caching requirements of different scenarios and is simple, flexible, and efficient to configure has become an urgent technical problem to be solved.
Summary of the invention
The present invention aims to solve at least one of the technical problems existing in the prior art or the related art.
To this end, one aspect of the present invention is to propose a data cache optimization method.
Another aspect of the present invention is to propose a data cache optimization device.
Yet another aspect of the present invention is to propose a computer device.
Still another aspect of the present invention is to propose a computer-readable storage medium.
In view of this, according to one aspect of the present invention, a data cache optimization method is proposed, including: integrating a level-1 cache and a level-2 cache to obtain a cache service, and publishing the cache service externally through an interface; creating a cache configuration file for the cache service; and configuring the data to be cached according to the cache configuration file, and caching the configured data.
According to the data cache optimization method of the present invention, the two kinds of caches are integrated by coding. Preferably, the level-1 cache may be configured as the default cache. A unified API interface is abstracted over the two different caches, and, according to the respective characteristics of the two caches, support for multiple scenarios is realized, such as OLAP (On-Line Analytical Processing) scenarios and OLTP (On-Line Transaction Processing) scenarios, providing strong support for caching data, improving query efficiency, cluster login, and so on. At the same time, a cache configuration file is created; the created configuration file supports diverse forms such as XML, TXT, and interface-oriented programming, and is therefore very flexible. Thus, an application only needs to call the unified API interface and supply a cache configuration file for the data to be cached in order to implement data caching for multiple scenarios simply.
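As an illustrative sketch of the unified API interface over two cache levels described here — all class and method names below (ICache, DefaultCache, RedisCache, CacheService) are assumptions modeled on the text, not the patent's actual code, and the Redis side is stubbed with an in-process map:

```java
import java.util.HashMap;
import java.util.Map;

// One interface abstracted over both cache levels.
interface ICache {
    void put(String key, Object value);
    Object get(String key);
}

// Level-1 (default) cache: in-process map, standing in for ehcache.
class DefaultCache implements ICache {
    private final Map<String, Object> store = new HashMap<>();
    public void put(String key, Object value) { store.put(key, value); }
    public Object get(String key) { return store.get(key); }
}

// Level-2 cache: here just another map, standing in for a Redis client.
class RedisCache implements ICache {
    private final Map<String, Object> store = new HashMap<>();
    public void put(String key, Object value) { store.put(key, value); }
    public Object get(String key) { return store.get(key); }
}

// The externally published cache service: one API, two backends,
// routed by whether the entry is large-object (OLAP-style) data.
public class CacheService {
    private final ICache level1 = new DefaultCache();
    private final ICache level2 = new RedisCache();

    public void put(String key, Object value, boolean largeObject) {
        (largeObject ? level1 : level2).put(key, value);
    }

    public Object get(String key) {
        Object v = level1.get(key);
        return v != null ? v : level2.get(key);
    }
}
```

The application sees only `CacheService`; which backend holds a given entry is decided by configuration, which is the point of the unified interface.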
In addition, the data cache optimization method according to the present invention may further have the following additional technical features:
In the above technical solution, preferably, the data cache optimization method further includes: displaying a connection attribute configuration interface of the level-2 cache, and receiving an input setting command for the connection attributes; and determining the connection attributes of the level-2 cache according to the setting command, so as to store the data to be cached into, or read it from, the corresponding level-2 cache.
In this technical solution, the level-1 cache is used to cache data of relatively large volume or large object data; it is generally the default cache and requires no connection attribute configuration. For the data to be cached in OLTP scenarios, the user can configure the connection attributes of the level-2 cache (for example, Redis) through the configuration interface provided by the system, such as the Redis host, port, password, maximum number of connections, maximum number of idle connections, minimum number of idle connections, and maximum wait time in milliseconds. The system automatically obtains the specific information of the Redis cache, so that cached data can be stored and read automatically. The configuration is simple and flexible.
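A minimal sketch of the level-2 connection attributes collected by such a configuration interface; the property keys follow the attributes named in the text (host, port, password, maximum connections, maximum/minimum idle connections, maximum wait in milliseconds), but the class name, key names, and default values are assumptions for illustration:

```java
import java.util.Properties;

// Holds the Redis connection attributes entered through the configuration
// interface; unspecified attributes fall back to illustrative defaults.
public class RedisConnectionConfig {
    final String host;
    final int port;
    final String password;
    final int maxTotal;       // maximum number of connections
    final int maxIdle;        // maximum number of idle connections
    final int minIdle;        // minimum number of idle connections
    final long maxWaitMillis; // maximum wait time in milliseconds

    RedisConnectionConfig(Properties p) {
        host = p.getProperty("redis.host", "127.0.0.1");
        port = Integer.parseInt(p.getProperty("redis.port", "6379"));
        password = p.getProperty("redis.password", "");
        maxTotal = Integer.parseInt(p.getProperty("redis.maxTotal", "8"));
        maxIdle = Integer.parseInt(p.getProperty("redis.maxIdle", "8"));
        minIdle = Integer.parseInt(p.getProperty("redis.minIdle", "0"));
        maxWaitMillis = Long.parseLong(p.getProperty("redis.maxWaitMillis", "1000"));
    }
}
```

A real implementation would pass these values to a Redis connection pool; here the class only shows how the interface input maps onto named attributes.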
In any of the above technical solutions, preferably, the level-1 cache is used to cache large object data, and the level-2 cache is used to cache at least any one of the following, or a combination thereof: intermediate calculation results, application configurations, and Session data.
In this technical solution, large object data (i.e., OLAP scenarios) is cached in the level-1 cache; preferably, the level-1 cache may be set as the default cache. The level-2 cache is used to cache general information such as application configurations; intermediate calculation results, since in some application scenarios temporary results need temporary storage, and putting them in the level-2 cache improves calculation efficiency; and Session data, such as user login information, putting which in the level-2 cache enables user cluster login and quick retrieval of the login user's information at any time. The technical solution of the present invention thus meets the caching requirements of different application scenarios: only different configurations need to be made for different scenarios, which is very convenient, and strong support is provided for caching data, improving query efficiency, cluster login, and so on.
In any of the above technical solutions, preferably, the level-1 cache is ehcache and the level-2 cache is Redis.
In this technical solution, by integrating the two kinds of caches (ehcache and Redis), ehcache is used to cache large objects (i.e., OLAP scenarios), and Redis is used to cache configurations, intermediate calculation results, Session-level result sets, and the like (i.e., OLTP scenarios), thereby meeting the caching requirements of different scenarios; only simple configuration is needed to introduce the caching mechanism easily.
In any of the above technical solutions, preferably, the cache configuration file includes: the name of the data to be cached, the idle time before expiry, the eviction strategy, whether it is a big-data cache, whether to synchronize across the cluster, and the maximum number of entries cached in the memory of the cache service.
In this technical solution, the cache configuration file configures: the name of the data to be cached, usually the class name of the cached object; the idle time before cache expiry, a cache entry unused for longer than this time being eligible for reclamation; the eviction strategy of the cache, with support for FIFO, LRU, and LFU (optional), LFU generally being the default; whether it is a big-data cache and whether to synchronize across the cluster; and the maximum number of entries that can be held in the in-memory cache. Thus, based on the different cached objects, the cache service can be configured simply to realize the caching of data.
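Since the text says the configuration file supports XML, one entry might look like the following sketch. The first four attribute names mirror ehcache's conventions (`name`, `timeToIdleSeconds`, `memoryStoreEvictionPolicy`, `maxElementsInMemory`), while `bigData` and `clusterSync` are assumed names for the invention's extra fields, not attributes of any real ehcache schema:

```xml
<cache name="com.example.model.UserProfile"
       timeToIdleSeconds="300"
       memoryStoreEvictionPolicy="LFU"
       maxElementsInMemory="5000"
       bigData="false"
       clusterSync="false"/>
```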
According to another aspect of the present invention, a data cache optimization device is proposed, including: a cache service unit, configured to integrate a level-1 cache and a level-2 cache to obtain a cache service, and to publish the cache service externally through an interface; a first configuration unit, configured to create a cache configuration file for the cache service; and a processing unit, configured to configure the data to be cached according to the cache configuration file, and to cache the configured data.
According to the data cache optimization device of the present invention, the two kinds of caches are integrated by coding. Preferably, the level-1 cache may be configured as the default cache. A unified API interface is abstracted over the two different caches, and, according to the respective characteristics of the two caches, support for multiple scenarios (such as OLAP scenarios and OLTP scenarios) is realized, providing strong support for caching data, improving query efficiency, cluster login, and so on. At the same time, a cache configuration file is created; the created configuration file supports diverse forms such as XML, TXT, and interface-oriented programming, and is therefore very flexible. Thus, an application only needs to call the unified API interface and supply a cache configuration file for the data to be cached in order to implement data caching for multiple scenarios simply.
In the above technical solution, preferably, the data cache optimization device further includes: a second configuration unit, configured to display a connection attribute configuration interface of the level-2 cache, and to receive an input setting command for the connection attributes. The processing unit is further configured to determine the connection attributes of the level-2 cache according to the setting command, and to store the data to be cached into, or read it from, the corresponding level-2 cache.
In this technical solution, the level-1 cache is used to cache data of relatively large volume or large object data; it is generally the default cache and requires no connection attribute configuration. For the data to be cached in OLTP scenarios, the user can configure the connection attributes of the level-2 cache (for example, Redis) through the configuration interface provided by the system, such as the Redis host, port, password, maximum number of connections, maximum number of idle connections, minimum number of idle connections, and maximum wait time in milliseconds. The system automatically obtains the specific information of the Redis cache, so that cached data can be stored and read automatically. The configuration is simple and flexible.
In any of the above technical solutions, preferably, the level-1 cache is used to cache large object data, and the level-2 cache is used to cache at least any one of the following, or a combination thereof: intermediate calculation results, application configurations, and Session data.
In this technical solution, large object data (i.e., OLAP scenarios) is cached in the level-1 cache; preferably, the level-1 cache may be set as the default cache. The level-2 cache is used to cache general information such as application configurations; intermediate calculation results, since in some application scenarios temporary results need temporary storage, and putting them in the level-2 cache improves calculation efficiency; and Session data, such as user login information, putting which in the level-2 cache enables user cluster login and quick retrieval of the login user's information at any time. The technical solution of the present invention thus meets the caching requirements of different application scenarios: only different configurations need to be made for different scenarios, which is very convenient, and strong support is provided for caching data, improving query efficiency, cluster login, and so on.
In any of the above technical solutions, preferably, the level-1 cache is ehcache and the level-2 cache is Redis.
In this technical solution, by integrating the two kinds of caches (ehcache and Redis), ehcache is used to cache large objects (i.e., OLAP scenarios), and Redis is used to cache configurations, intermediate calculation results, Session-level result sets, and the like (i.e., OLTP scenarios), thereby meeting the caching requirements of different scenarios; only simple configuration is needed to introduce the caching mechanism easily.
In any of the above technical solutions, preferably, the cache configuration file includes: the name of the data to be cached, the idle time before expiry, the eviction strategy, whether it is a big-data cache, whether to synchronize across the cluster, and the maximum number of entries cached in the memory of the cache service.
In this technical solution, the cache configuration file configures: the name of the data to be cached, usually the class name of the cached object; the idle time before cache expiry, a cache entry unused for longer than this time being eligible for reclamation; the eviction strategy of the cache, with support for FIFO, LRU, and LFU (optional), LFU generally being the default; whether it is a big-data cache and whether to synchronize across the cluster; and the maximum number of entries that can be held in the in-memory cache. Thus, based on the different cached objects, the cache service can be configured simply to realize the caching of data.
According to yet another aspect of the present invention, a computer device is proposed, including a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor being configured to perform the steps of the method in any of the above technical solutions.
The computer device according to the present invention includes a processor configured to perform the steps of the data cache optimization method in any of the above technical solutions, so the computer device achieves all the beneficial effects of the data cache optimization method, which are not repeated here.
According to still another aspect of the present invention, a computer-readable storage medium is proposed, on which a computer program is stored; when executed by a processor, the computer program implements the steps of the method in any of the above technical solutions.
The computer-readable storage medium according to the present invention stores a computer program that, when executed by a processor, implements the steps of the data cache optimization method in any of the above technical solutions, so the computer-readable storage medium achieves all the beneficial effects of the data cache optimization method, which are not repeated here.
Additional aspects and advantages of the present invention will become apparent in the following description, or will be learned through practice of the present invention.
Brief description of the drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily understood from the following description of the embodiments in conjunction with the accompanying drawings, in which:
Fig. 1 shows the flow diagram of data buffer storage optimization method according to an embodiment of the invention;
Fig. 2 shows the flow diagram of data buffer storage optimization method according to another embodiment of the invention;
Fig. 3 shows the schematic block diagram of data buffer storage optimization device according to an embodiment of the invention;
Fig. 4 shows the schematic block diagram of data buffer storage optimization device according to another embodiment of the invention;
Fig. 5 shows a schematic diagram of the principle of the data cache optimization device according to a specific embodiment of the present invention;
Fig. 6 shows the schematic diagram of the connection attribute configuration interface of the Redis of a specific embodiment according to the present invention;
Fig. 7 shows the schematic diagram of computer equipment according to an embodiment of the invention.
Detailed description of the embodiments
In order that the objects, features, and advantages of the present invention can be understood more clearly, the present invention is further described in detail below with reference to the accompanying drawings and the specific embodiments. It should be noted that, where no conflict arises, the embodiments of the present application and the features in the embodiments may be combined with each other.
Many specific details are set forth in the following description to facilitate a thorough understanding of the present invention; however, the present invention may also be implemented in other ways different from those described here. Therefore, the protection scope of the present invention is not limited by the specific embodiments disclosed below.
As shown in Fig. 1, which is a schematic flow diagram of the data cache optimization method according to an embodiment of the present invention, the data cache optimization method includes:
Step 102: integrate a level-1 cache and a level-2 cache to obtain a cache service, and publish the cache service externally through an interface;
Step 104: create a cache configuration file for the cache service;
Step 106: configure the data to be cached according to the cache configuration file, and cache the configured data.
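One concrete piece of step 106 — enforcing a configured maximum number of in-memory entries with LRU eviction, one of the eviction strategies the description names — can be sketched on top of `java.util.LinkedHashMap`; the class name and capacity below are illustrative assumptions, not the patent's code:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Level-1-style in-memory cache bounded to a configured maximum number of
// entries; when full, the least recently used entry is evicted.
public class BoundedLruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public BoundedLruCache(int maxEntries) {
        super(16, 0.75f, true); // accessOrder = true gives LRU iteration order
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Called after each insertion; returning true evicts the eldest entry.
        return size() > maxEntries;
    }
}
```

An FIFO variant would simply pass `accessOrder = false`; LFU needs explicit frequency counters and is not expressible through `LinkedHashMap` alone.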
In the data cache optimization method provided by the present invention, the two kinds of caches are integrated by coding. Preferably, the level-1 cache may be configured as the default cache. A unified API interface is abstracted over the two different caches, and, according to the respective characteristics of the two caches, support for multiple scenarios (such as OLAP scenarios and OLTP scenarios) is realized, providing strong support for caching data, improving query efficiency, cluster login, and so on. At the same time, a cache configuration file is created; the created configuration file supports diverse forms such as XML, TXT, and interface-oriented programming, and is therefore very flexible. Thus, an application only needs to call the unified API interface and supply a cache configuration file for the data to be cached in order to implement data caching for multiple scenarios simply.
As shown in Fig. 2, which is a schematic flow diagram of the data cache optimization method according to another embodiment of the present invention, the data cache optimization method includes:
Step 202: integrate a level-1 cache and a level-2 cache to obtain a cache service, and publish the cache service externally through an interface;
Step 204: create a cache configuration file for the cache service, so as to configure the data to be cached;
Step 206: display a connection attribute configuration interface of the level-2 cache, and receive an input setting command for the connection attributes;
Step 208: determine the connection attributes of the level-2 cache according to the setting command;
Step 210: configure the data to be cached according to the cache configuration file, and store the configured data into, or read it from, the corresponding level-2 cache.
In this embodiment, the level-1 cache is used to cache data of relatively large volume or large object data; it is generally the default cache and requires no connection attribute configuration. For the data to be cached in OLTP scenarios, the user can configure the connection attributes of the level-2 cache (for example, Redis) through the configuration interface provided by the system, such as the Redis host, port, password, maximum number of connections, maximum number of idle connections, minimum number of idle connections, and maximum wait time in milliseconds. The system automatically obtains the specific information of the Redis cache, so that cached data can be stored and read automatically. The configuration is simple and flexible.
In any of the above embodiments, preferably, the level-1 cache is used to cache large object data, and the level-2 cache is used to cache at least any one of the following, or a combination thereof: intermediate calculation results, application configurations, and Session data.
In this embodiment, large object data (i.e., OLAP scenarios) is cached in the level-1 cache; preferably, the level-1 cache may be set as the default cache. The level-2 cache is used to cache general information such as application configurations; intermediate calculation results, since in some application scenarios temporary results need temporary storage, and putting them in the level-2 cache improves calculation efficiency; and Session data, such as user login information, putting which in the level-2 cache enables user cluster login and quick retrieval of the login user's information at any time. The technical solution of the present invention thus meets the caching requirements of different application scenarios: only different configurations need to be made for different scenarios, which is very convenient, and strong support is provided for caching data, improving query efficiency, cluster login, and so on.
In any of the above embodiments, preferably, the level-1 cache is ehcache and the level-2 cache is Redis.
In this embodiment, by integrating the two kinds of caches (ehcache and Redis), ehcache is used to cache large objects (i.e., OLAP scenarios), and Redis is used to cache configurations, intermediate calculation results, Session-level result sets, and the like (i.e., OLTP scenarios), thereby meeting the caching requirements of different scenarios; only simple configuration is needed to introduce the caching mechanism easily.
In any of the above embodiments, preferably, the cache configuration file includes: the name of the data to be cached, the idle time before expiry, the eviction strategy, whether it is a big-data cache, whether to synchronize across the cluster, and the maximum number of entries cached in the memory of the cache service.
In this embodiment, the cache configuration file configures: the name of the data to be cached, usually the class name of the cached object; the idle time before cache expiry, a cache entry unused for longer than this time being eligible for reclamation; the eviction strategy of the cache, with support for FIFO, LRU, and LFU (optional), LFU generally being the default; whether it is a big-data cache and whether to synchronize across the cluster; and the maximum number of entries that can be held in the in-memory cache. Thus, based on the different cached objects, the cache service can be configured simply to realize the caching of data.
As shown in Fig. 3, which is a schematic block diagram of the data cache optimization device according to an embodiment of the present invention, the data cache optimization device 300 includes:
a cache service unit 302, configured to integrate a level-1 cache and a level-2 cache to obtain a cache service, and to publish the cache service externally through an interface;
a first configuration unit 304, configured to create a cache configuration file for the cache service; and
a processing unit 306, configured to configure the data to be cached according to the cache configuration file, and to cache the configured data.
In the data cache optimization device 300 provided by the present invention, the two kinds of caches are integrated by coding. Preferably, the level-1 cache may be configured as the default cache. A unified API interface is abstracted over the two different caches, and, according to the respective characteristics of the two caches, support for multiple scenarios (such as OLAP scenarios and OLTP scenarios) is realized, providing strong support for caching data, improving query efficiency, cluster login, and so on. At the same time, a cache configuration file is created; the created configuration file supports diverse forms such as XML, TXT, and interface-oriented programming, and is therefore very flexible. Thus, an application only needs to call the unified API interface and supply a cache configuration file for the data to be cached in order to implement data caching for multiple scenarios simply.
As shown in Fig. 4, which is a schematic block diagram of the data cache optimization device according to another embodiment of the present invention, the data cache optimization device 400 includes:
a cache service unit 402, configured to integrate a level-1 cache and a level-2 cache to obtain a cache service, and to publish the cache service externally through an interface;
a first configuration unit 404, configured to create a cache configuration file for the cache service; and
a processing unit 406, configured to configure the data to be cached according to the cache configuration file, and to cache the configured data.
The data cache optimization device 400 further includes:
a second configuration unit 408, configured to display a connection attribute configuration interface of the level-2 cache, and to receive an input setting command for the connection attributes.
The processing unit 406 is further configured to determine the connection attributes of the level-2 cache according to the setting command, and to store the data to be cached into, or read it from, the corresponding level-2 cache.
In this embodiment, the level-1 cache is used to cache data of relatively large volume or large object data; it is generally the default cache and requires no connection attribute configuration. For the data to be cached in OLTP scenarios, the user can configure the connection attributes of the level-2 cache (for example, Redis) through the configuration interface provided by the system, such as the Redis host, port, password, maximum number of connections, maximum number of idle connections, minimum number of idle connections, and maximum wait time in milliseconds. The system automatically obtains the specific information of the Redis cache, so that cached data can be stored and read automatically. The configuration is simple and flexible.
In any of the above embodiments, preferably, the level-1 cache is used to cache large object data, and the level-2 cache is used to cache at least any one of the following, or a combination thereof: intermediate calculation results, application configurations, and Session data.
In this embodiment, large object data (i.e., OLAP scenarios) is cached in the level-1 cache; preferably, the level-1 cache may be set as the default cache. The level-2 cache is used to cache general information such as application configurations; intermediate calculation results, since in some application scenarios temporary results need temporary storage, and putting them in the level-2 cache improves calculation efficiency; and Session data, such as user login information, putting which in the level-2 cache enables user cluster login and quick retrieval of the login user's information at any time. The technical solution of the present invention thus meets the caching requirements of different application scenarios: only different configurations need to be made for different scenarios, which is very convenient, and strong support is provided for caching data, improving query efficiency, cluster login, and so on.
In any of the above embodiments, preferably, the level-1 cache is ehcache and the level-2 cache is Redis.
In this embodiment, by integrating the two kinds of caches (ehcache and Redis), ehcache is used to cache large objects (i.e., OLAP scenarios), and Redis is used to cache configurations, intermediate calculation results, Session-level result sets, and the like (i.e., OLTP scenarios), thereby meeting the caching requirements of different scenarios; only simple configuration is needed to introduce the caching mechanism easily.
In any of the above embodiments, preferably, the cache configuration file includes: the name of the data to be cached, the idle time before expiry, the eviction strategy, whether it is a big-data cache, whether to synchronize across the cluster, and the maximum number of entries cached in the memory of the cache service.
In this embodiment, the cache configuration file configures: the name of the data to be cached, usually the class name of the cached object; the idle time before cache expiry, a cache entry unused for longer than this time being eligible for reclamation; the eviction strategy of the cache, with support for FIFO, LRU, and LFU (optional), LFU generally being the default; whether it is a big-data cache and whether to synchronize across the cluster; and the maximum number of entries that can be held in the in-memory cache. Thus, based on the different cached objects, the cache service can be configured simply to realize the caching of data.
A specific embodiment provides a caching mechanism adapted to different scenarios. By integrating the two kinds of caches (ehcache and Redis), ehcache is used to cache large objects (OLAP scenarios), while Redis is used to cache configurations, intermediate calculation results, Session-level result sets, and the like (OLTP scenarios). The specific design principle is shown in Fig. 5, in which:
POJO: object data, stored in the DefaultCache; through code wrapping, any object that implements serialization can be cached, such as the object data analyzed in data analysis scenarios.
Intermediate calculation results: stored in the RedisCache; in some application scenarios, temporary results need temporary storage, and putting them in the cache improves calculation efficiency.
Archives/configurations: stored in the RedisCache; general information such as application configurations is stored in the cache.
Session-level result sets: Session-level information, such as user login information, is saved in the RedisCache, enabling user cluster login and quick retrieval of the login user's information at any time.
Buffer service:Buffer service is externally issued by interface mode, IRedisCache is issued as buffer service,
ICache is as the specific implementation of cache interface, DefaultCache (Ehcache) and RedisCache as caching.
Cache creation supports XML, that is, a configuration for each data cache, including for example: the name of the cache; the idle time before the cache expires (an entry unused for longer than this time may be reclaimed); the maximum number of entries cached in memory (optional, defaulting to 5000); the cache eviction policy, supporting FIFO, LRU, and LFU (optional, defaulting to LRU); whether it is a big-data cache; and whether it is synchronized across the cluster (defaulting to false).
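A cache definition of the kind described might look like the following XML fragment. This is an illustrative sketch only: the element and attribute names are assumptions mirroring the fields listed above (and Ehcache-style configuration), not the patent's actual schema.

```xml
<!-- Hypothetical cache definition; attribute names are illustrative. -->
<cache name="sessionResultSet"
       timeToIdleSeconds="300"
       maxEntriesInMemory="5000"
       evictionPolicy="LRU"
       bigData="false"
       syncCluster="false"/>
```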
A connection-attribute configuration interface for Redis is created, as shown in Figure 6. Through this interface, the specific information of the user-configured Redis cache can be obtained, such as the Redis host, port, password, maximum number of connections, maximum number of idle connections, minimum number of idle connections, and maximum wait time in milliseconds, so that cached data can be stored and read. The configuration is simple and flexible.
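The connection attributes listed above can be gathered into a small settings object before a connection pool is opened. This is an illustrative sketch: the class and field names are assumptions, and in practice such values would map onto a Redis client's pool configuration (for example a Jedis pool).

```java
// Hypothetical holder for the Redis connection attributes read from the
// configuration interface of Figure 6; names and defaults are assumptions.
class RedisConnectionSettings {
    String host = "localhost";
    int port = 6379;
    String password = null;          // null means no authentication
    int maxTotalConnections = 8;     // maximum number of connections
    int maxIdleConnections = 8;      // maximum number of idle connections
    int minIdleConnections = 0;      // minimum number of idle connections
    long maxWaitMillis = 2000L;      // maximum wait time in milliseconds

    // Sanity-check the settings before opening a pool with them.
    boolean isValid() {
        return host != null && !host.isEmpty()
                && port > 0 && port <= 65535
                && minIdleConnections >= 0
                && maxIdleConnections >= minIdleConnections
                && maxTotalConnections >= maxIdleConnections
                && maxWaitMillis >= 0;
    }
}
```

Validating the user's input once, at configuration time, keeps pool-creation failures out of the cache read/write path.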
This embodiment thus provides, on the basis of ehcache and Redis, an efficient caching mechanism that comprehensively considers the cache capacity and efficiency requirements of different scenarios. The two caches are integrated in code (ehcache caches large objects for OLAP scenarios; Redis caches configurations, intermediate calculation results, session-level result sets, and the like for OLTP scenarios), and a configuration interface is provided, so that the mechanism can be introduced with different configurations for different application scenarios. On this basis, products such as DI (data integration), advanced analytics, and BQ can adopt the mechanism, which provides strong support for data caching, improved retrieval efficiency, cluster login, and so on.
Figure 7 is a schematic diagram of a computer device according to an embodiment of the present invention. The computer device 1 includes a memory 12, a processor 14, and a computer program stored on the memory 12 and runnable on the processor 14, the processor being configured to perform the steps of the method of any of the above embodiments.
In the computer device 1 provided by the present invention, the processor 14 is configured to perform the steps of the data cache optimization method of any of the above embodiments, so the computer device can realize all the beneficial effects of that method; details are not repeated here.
In another aspect, the present invention proposes a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements the steps of the method of any of the above embodiments.
In the computer-readable storage medium provided by the present invention, the stored computer program, when executed by a processor, implements the steps of the data cache optimization method of any of the above embodiments, so the storage medium can realize all the beneficial effects of that method; details are not repeated here.
The foregoing are only preferred embodiments of the present invention and are not intended to limit it; for those skilled in the art, the invention may be modified and varied in various ways. Any modification, equivalent substitution, improvement, and the like made within the spirit and principles of the invention shall fall within its scope of protection.
Claims (12)
- 1. A data cache optimization method, characterized by comprising: integrating a level-1 cache and a level-2 cache to obtain a cache service, and publishing the cache service externally through an interface; creating a cache configuration file for the cache service; and configuring the cached data according to the cache configuration file, and caching the configured cached data.
- 2. The data cache optimization method according to claim 1, characterized in that the method further comprises: displaying a connection-attribute configuration interface of the level-2 cache, and receiving an input setting command for the connection attributes; and determining the connection attributes of the level-2 cache according to the setting command, and storing the cached data to, or retrieving it from, the corresponding level-2 cache.
- 3. The data cache optimization method according to claim 2, characterized in that the level-1 cache is used to cache large-object data, and the level-2 cache is used to cache at least any one, or a combination, of the following: intermediate calculation results, application configuration, and session data.
- 4. The data cache optimization method according to claim 3, characterized in that the level-1 cache is ehcache and the level-2 cache is redis.
- 5. The data cache optimization method according to any one of claims 1 to 4, characterized in that the cache configuration file includes: the name of the cached data, the idle time before expiry, the eviction policy, whether it is a big-data cache, whether it is synchronized across the cluster, and the maximum number of entries cached in the memory of the cache service.
- 6. A data cache optimization device, characterized by comprising: a cache-service unit for integrating a level-1 cache and a level-2 cache to obtain a cache service, and for publishing the cache service externally through an interface; a first configuration unit for creating a cache configuration file for the cache service; and a processing unit for configuring the cached data according to the cache configuration file, and for caching the configured cached data.
- 7. The data cache optimization device according to claim 6, characterized in that the device further comprises: a second configuration unit for displaying a connection-attribute configuration interface of the level-2 cache, and for receiving an input setting command for the connection attributes; the processing unit being further configured to determine the connection attributes of the level-2 cache according to the setting command, and to store the cached data to, or retrieve it from, the corresponding level-2 cache.
- 8. The data cache optimization device according to claim 7, characterized in that the level-1 cache is used to cache large-object data, and the level-2 cache is used to cache at least any one, or a combination, of the following: intermediate calculation results, application configuration, and session data.
- 9. The data cache optimization device according to claim 8, characterized in that the level-1 cache is ehcache and the level-2 cache is redis.
- 10. The data cache optimization device according to any one of claims 6 to 9, characterized in that the cache configuration file includes: the name of the cached data, the idle time before expiry, the eviction policy, whether it is a big-data cache, whether it is synchronized across the cluster, and the maximum number of entries cached in the memory of the cache service.
- 11. A computer device, comprising a memory, a processor, and a computer program stored on the memory and runnable on the processor, characterized in that the processor is configured to perform the steps of the method according to any one of claims 1 to 5.
- 12. A computer-readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711174686.6A CN107977165B (en) | 2017-11-22 | 2017-11-22 | Data cache optimization method and device and computer equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107977165A true CN107977165A (en) | 2018-05-01 |
CN107977165B CN107977165B (en) | 2021-01-08 |
Family
ID=62011065
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711174686.6A Active CN107977165B (en) | 2017-11-22 | 2017-11-22 | Data cache optimization method and device and computer equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107977165B (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109271139A (en) * | 2018-09-11 | 2019-01-25 | 北京北信源软件股份有限公司 | A kind of method of standardization management and device based on caching middleware |
CN110825705A (en) * | 2019-11-22 | 2020-02-21 | 广东浪潮大数据研究有限公司 | Data set caching method and related device |
CN112948336A (en) * | 2021-03-30 | 2021-06-11 | 联想凌拓科技有限公司 | Data acceleration method, cache unit, electronic device and storage medium |
CN113205666A (en) * | 2021-05-06 | 2021-08-03 | 广东鹰视能效科技有限公司 | Early warning method |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103077125A (en) * | 2012-12-13 | 2013-05-01 | 北京锐安科技有限公司 | Self-adaption self-organizing tower type caching method for efficiently utilizing storage space |
US20140173221A1 (en) * | 2012-12-14 | 2014-06-19 | Ahmad Samih | Cache management |
CN104519088A (en) * | 2013-09-27 | 2015-04-15 | 方正宽带网络服务股份有限公司 | Buffer memory system realization method and buffer memory system |
CN105049530A (en) * | 2015-08-24 | 2015-11-11 | 用友网络科技股份有限公司 | Adaption device and method for plurality of distributed cache systems |
CN106021414A (en) * | 2016-05-13 | 2016-10-12 | 中国建设银行股份有限公司 | Method and system for accessing multilevel cache parameter information |
CN106886371A (en) * | 2017-02-15 | 2017-06-23 | 中国保险信息技术管理有限责任公司 | caching data processing method and device |
CN107102896A (en) * | 2016-02-23 | 2017-08-29 | 阿里巴巴集团控股有限公司 | A kind of operating method of multi-level buffer, device and electronic equipment |
Also Published As
Publication number | Publication date |
---|---|
CN107977165B (en) | 2021-01-08 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |
| CP02 | Change in the address of a patent holder | Address after: 100094 room 101-c18, 4th floor, building 3, yard 9, Yongfeng Road, Haidian District, Beijing; Patentee after: YONYOU FINTECH INFORMATION TECHNOLOGY Co.,Ltd. Address before: 100094 Room 101, building 8, yard 68, Beiqing Road, Haidian District, Beijing; Patentee before: YONYOU FINTECH INFORMATION TECHNOLOGY Co.,Ltd. |