CN109325054A - Data processing method, system and storage medium based on caching - Google Patents
- Publication number
- CN109325054A CN109325054A CN201810813070.7A CN201810813070A CN109325054A CN 109325054 A CN109325054 A CN 109325054A CN 201810813070 A CN201810813070 A CN 201810813070A CN 109325054 A CN109325054 A CN 109325054A
- Authority
- CN
- China
- Prior art keywords
- data
- cache database
- database
- cached
- cache
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The present invention provides a cache-based data processing method, system, and storage medium. The method comprises: when a user accesses the system for the first time, storing the data to be stored in a cache database as cached data; when the user accesses the system again, querying the cache database for the user's cached data; if the cache database holds the user's cached data, judging whether the cached data in the cache database is consistent with the master data in the primary database; if consistent, using the cached data directly; if inconsistent, re-caching the master data in the cache database. By applying caching technology, the present invention buffers frequently accessed data in a lightweight cache database, which improves access efficiency and reduces the number of accesses to the large-scale primary database, thereby improving the overall performance and data availability of cache-based data processing.
Description
Technical field
The present invention relates to the field of data processing, and in particular to a cache-based data processing method, system, and storage medium.
Background art
With the development of information technology, enterprises increasingly depend on information systems for management, and the data of each business application is mainly stored in databases. As business grows, data volumes become ever larger and data access frequencies ever higher, so database load keeps rising and enterprises place ever higher performance demands on data access.
Common database performance tuning approaches include adjusting data structures, optimizing SQL statements, and reallocating hardware resources such as memory and disks. For some large enterprises, however, general tuning alone can no longer satisfy the needs of business growth, while simply adding hardware raises the construction cost of the enterprise information system.
Meanwhile, for key businesses in industries and sectors vital to the national economy, such as government, telecommunications, finance, energy, and defense, critical data storage must be highly available to avoid the losses caused by data interruptions, and database high availability has become central to enterprise informatization. Caching technology is widely applied in the storage and processing of computer systems, for example memory serving as a cache for disk, or the L2 cache of a CPU. Its characteristic is that cache capacity is generally much smaller than that of the primary storage device, but its processing efficiency is higher: the system first moves data from the primary storage device into the cache and then reads and processes it there, achieving fast response. At present, however, there is no related technology that applies this kind of cache processing to databases.
Summary of the invention
To solve the above technical problems, the present invention provides a cache-based data processing method, system, and storage medium, addressing the problems of low caching efficiency and poor data availability that arise as the volume of stored data grows.
According to a first aspect of the embodiments of the present invention, a cache-based data processing method is provided, the method comprising:
when a user accesses the system for the first time, storing the data to be stored in a cache database as cached data;
when the user accesses the system again, querying the cache database for the user's cached data;
if the cache database holds the user's cached data, judging whether the cached data in the cache database is consistent with the master data in the primary database;
if consistent, using the cached data directly; if inconsistent, re-caching the master data in the cache database.
According to a second aspect of the embodiments of the present invention, a cache-based data processing system is provided, the system comprising:
a cache database, configured to store the data to be stored in the cache database as cached data when a user accesses the system for the first time; and
a primary database, configured to store master data;
wherein, when the user accesses the system again, the cache database is queried for the user's cached data; if the cache database holds the user's cached data, whether the cached data in the cache database is consistent with the master data in the primary database is judged; if consistent, the cached data is used directly; if inconsistent, the master data is re-cached in the cache database.
According to a third aspect of the embodiments of the present invention, a computer-readable storage medium is provided, the computer storage medium comprising a computer program, wherein the computer program, when executed by one or more computers, causes the one or more computers to perform operations comprising the steps included in any of the cache-based data processing methods described above.
Implementing the cache-based data processing method, system, and storage medium provided by the embodiments of the present invention has the following advantages: business processing performance is markedly improved, database load is reduced, the processing capacity of the database under large-scale high concurrency is guaranteed, and system availability is improved.
Brief description of the drawings
Fig. 1 is a flowchart of a cache-based data processing method according to an embodiment of the present invention;
Fig. 2 is a flowchart of another cache-based data processing method according to an embodiment of the present invention;
Fig. 3 is a schematic structural diagram of a cache-based data processing system 1 according to an embodiment of the present invention;
Fig. 4 is a schematic structural diagram of the cache database 100 in the system 1 of the embodiment of the present invention;
Fig. 5 is a schematic structural diagram of the primary database 200 in the system 1 of the embodiment of the present invention.
Detailed description of embodiments
To make the objectives, technical solutions, and advantages of the embodiments of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings.
Fig. 1 is a flowchart of a cache-based data processing method according to an embodiment of the present invention. Referring to Fig. 1, the method comprises:
Step S1: when a user accesses the system for the first time, storing the data to be stored in a cache database as cached data;
Step S2: when the user accesses the system again, querying the cache database for the user's cached data;
Step S3: if the cache database holds the user's cached data, judging whether the cached data in the cache database is consistent with the master data in the primary database;
Step S4: if consistent, using the cached data directly; if inconsistent, re-caching the master data in the cache database.
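The flow of Steps S1-S4 can be sketched as follows. This is a minimal, hypothetical illustration rather than the patented implementation: the primary and cache databases are modeled as in-memory dictionaries, and the consistency check of Step S3 is assumed to compare characteristic values (digests), as described in the embodiments below; names such as `CachedStore` are invented for the sketch.

```python
import hashlib

def feature(value: str) -> str:
    """Characteristic value of a record (here: a SHA-256 digest)."""
    return hashlib.sha256(value.encode("utf-8")).hexdigest()

class CachedStore:
    """Toy model of a primary database plus a lightweight cache database."""

    def __init__(self):
        self.primary = {}   # master data
        self.cache = {}     # cached data: key -> (value, characteristic value)

    def first_access(self, key, value):
        # Step S1: on first access, store the data and cache it.
        self.primary[key] = value
        self.cache[key] = (value, feature(value))

    def access(self, key):
        # Step S2: query the cache database for the user's cached data.
        entry = self.cache.get(key)
        master = self.primary[key]
        if entry is not None:
            value, feat = entry
            # Step S3: compare characteristic values against the master data.
            if feat == feature(master):
                return value          # Step S4: consistent -> use the cache directly
        # Step S4: inconsistent (or missing) -> re-cache the master data.
        self.cache[key] = (master, feature(master))
        return master

store = CachedStore()
store.first_access("user:1", "alice")
assert store.access("user:1") == "alice"      # served from the cache
store.primary["user:1"] = "alice-v2"          # master data changes
assert store.access("user:1") == "alice-v2"   # cache refreshed from the primary
```

The sketch deliberately keeps the primary as the source of truth: the cache is only trusted while its stored characteristic value still matches the master copy.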
In an embodiment of the present invention, judging whether the cached data in the cache database is consistent with the data in the primary database comprises: comparing a characteristic value of the cached data with a characteristic value of the master data.
In an embodiment of the present invention, the method further comprises: recording the access time of data in the cache database, and periodically deleting data that has not been accessed for a long time.
In an embodiment of the present invention, the method further comprises: if the data to be stored is not in the cache database, first storing it in the primary database and then caching it into the cache database.
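This store path (primary database first, then the cache) can be sketched with plain dictionaries; the function name and dict-backed stores are illustrative assumptions, not the patented code:

```python
def store(key, value, primary: dict, cache: dict):
    """Write path: data absent from the cache database is stored in the
    primary database first, then cached (a minimal sketch)."""
    if key not in cache:
        primary[key] = value   # first store in the primary database as master data
        cache[key] = value     # then cache it into the cache database
    return cache[key]

primary, cache = {}, {}
store("order:7", "pending", primary, cache)
assert primary["order:7"] == "pending" and cache["order:7"] == "pending"
```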
By applying caching technology, the present invention buffers frequently accessed data in a lightweight cache database, improving access efficiency and reducing the number of accesses to the large-scale primary database, thereby improving the overall performance and data availability of cache-based data processing. A lightweight cache database is established, and the data placed in it is data that changes infrequently, that transactions must read repeatedly, or that is costly in resources and performance to read from its source.
Fig. 2 is a flowchart of another cache-based data processing method according to an embodiment of the present invention. Referring to Fig. 2, the method comprises the following steps: first, after the user accesses the system, the cache database is queried for the corresponding cached data; if it is absent, the data is stored in, or read from, the primary database directly. Next, the cached data is checked for consistency with the master data: if consistent, the cache database is accessed; if inconsistent, the data is first stored in or read from the primary database and then cached into the cache database. Finally, the query result obtained from the cache database is output.
In an embodiment of the present invention, data used by every page of a business flow, such as user data, is placed in the cache database on the first access, so that subsequent pages only need to access the cache. When the same user accesses the system again, the cache database is accessed first; if the data already exists in the cache database, whether the cached data is consistent with the data in the primary database is judged: if consistent, the cached data is used directly; if inconsistent, the data is cached again. Consistency of the cached data is guaranteed by means of characteristic values: a characteristic value is computed over the master copy of the data and stored in the cache database. When the data changes, its characteristic value also changes, and the access must then re-read the data from the primary database and cache it anew.
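A characteristic value in this sense can be any digest that changes whenever the underlying data changes; a cryptographic hash is one common choice, assumed here purely for illustration:

```python
import hashlib

def characteristic_value(record: bytes) -> str:
    # Any digest that changes whenever the record changes will do;
    # SHA-256 is used here only as an example.
    return hashlib.sha256(record).hexdigest()

master = b'{"user": "alice", "plan": "basic"}'
cached_feature = characteristic_value(master)   # stored in the cache database

# Unchanged master data: features match, the cached copy may be used directly.
assert characteristic_value(master) == cached_feature

# Changed master data: the feature differs, so the cache must be refreshed.
master = b'{"user": "alice", "plan": "pro"}'
assert characteristic_value(master) != cached_feature
```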
In an embodiment of the present invention, the cache database records the access time of data and periodically cleans out data that has not been accessed for a long time, guaranteeing database access efficiency. The access-time threshold used for cleaning must be weighed according to the circumstances: if the threshold is too large, the volume of cached data grows and the goal of a faster response time is not achieved; if it is too small, frequent re-caching and the cleaning of large volumes of data consume extra resources and instead degrade database performance.
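Access-time-based cleanup can be sketched as below. The class name and the in-memory dict are assumptions for illustration; `max_idle_seconds` plays the role of the threshold the text says must be tuned per deployment:

```python
import time

class AccessTimedCache:
    """Cache that records each entry's last access time (illustrative only)."""

    def __init__(self, max_idle_seconds: float):
        self.max_idle = max_idle_seconds   # cleanup threshold to be tuned
        self.entries = {}                  # key -> (value, last access time)

    def get(self, key):
        entry = self.entries.get(key)
        if entry is None:
            return None
        value, _ = entry
        self.entries[key] = (value, time.monotonic())  # record the access time
        return value

    def put(self, key, value):
        self.entries[key] = (value, time.monotonic())

    def clean(self):
        # Periodic cleanup: delete entries idle longer than the threshold.
        now = time.monotonic()
        stale = [k for k, (_, t) in self.entries.items() if now - t > self.max_idle]
        for k in stale:
            del self.entries[k]
        return stale

cache = AccessTimedCache(max_idle_seconds=0.05)
cache.put("hot", 1)
cache.put("cold", 2)
time.sleep(0.1)
cache.get("hot")                    # refreshes "hot"'s access time
assert cache.clean() == ["cold"]    # only the long-unaccessed entry is deleted
assert "hot" in cache.entries
```

In a real deployment `clean()` would run on a timer, and the threshold would be chosen to balance cache size against the cost of re-caching, exactly the trade-off described above.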
In an embodiment of the present invention, the table structures of the primary database and the cache database are identical. When the primary database fails, the data copy in the cache database ensures that most data accesses are unaffected; when the cache database fails, transactions can switch directly to reading data from the primary database. In this way, when the database service stops unexpectedly, the system can still support users normally for a certain period of time, improving system availability.
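Because both stores hold the same table structure, either can serve reads when the other is down. A minimal sketch of the failover read path, with invented names (`FailoverReader`) and a callable standing in for the primary database connection:

```python
class FailoverReader:
    """Reads from the primary database, falling back to the cache database
    copy when the primary is unavailable (names are illustrative)."""

    def __init__(self, primary, cache):
        self.primary = primary   # callable: key -> value, may raise on failure
        self.cache = cache       # dict acting as the cache database copy

    def read(self, key):
        try:
            value = self.primary(key)
            self.cache[key] = value   # keep the backup copy current
            return value
        except ConnectionError:
            # Primary database down: serve from the cache database copy.
            return self.cache[key]

def healthy_primary(key):
    return {"user:1": "alice"}[key]

def failed_primary(key):
    raise ConnectionError("primary database unavailable")

cache_copy = {}
reader = FailoverReader(healthy_primary, cache_copy)
assert reader.read("user:1") == "alice"   # normal path, cache copy refreshed

reader.primary = failed_primary
assert reader.read("user:1") == "alice"   # primary down, served from the cache
```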
Fig. 3 is a schematic structural diagram of a cache-based data processing system 1 according to an embodiment of the present invention. Referring to Fig. 3, the system comprises:
a cache database 100, configured to store the data to be stored in the cache database as cached data when a user accesses the system for the first time; and
a primary database 200, configured to store master data;
wherein, when the user accesses the system again, the cache database 100 is queried for the user's cached data; if the cache database 100 holds the user's cached data, whether the cached data in the cache database 100 is consistent with the master data in the primary database 200 is judged; if consistent, the cached data is used directly; if inconsistent, the master data is re-cached in the cache database 100.
Fig. 4 is a schematic structural diagram of the cache database 100 in the system 1 of the embodiment of the present invention. Referring to Fig. 4, the cache database 100 comprises:
a query module 110, configured for the user to query the cached data in the cache database;
a storage module 120, configured for the user to store the cached data in the cache database; and
a judgment module 130, configured to judge whether the cached data in the cache database is consistent with the data in the primary database by comparing a characteristic value of the cached data with a characteristic value of the master data.
In an embodiment of the present invention, the cache database may further comprise:
a cleaning module, configured to record the access time of data in the cache database and periodically delete data that has not been accessed for a long time.
Fig. 5 is a schematic structural diagram of the primary database 200 in the system 1 of the embodiment of the present invention. Referring to Fig. 5, the primary database 200 comprises:
a storage module 210, configured to first store the data to be stored in the primary database as master data if it is not in the cache database; and
a dump module 220, configured to then cache the master data into the cache database as cached data.
By applying caching technology, the present invention buffers frequently accessed data in a lightweight cache database, improving access efficiency and reducing the number of accesses to the large-scale primary database, thereby improving performance. At the same time, the cache database is also a backup of the primary database: when the primary database fails, the cache database ensures that most data accesses are unaffected, improving system availability.
In addition, the present invention also provides a computer-readable storage medium, the computer storage medium comprising a computer program, wherein the computer program, when executed by one or more computers, causes the one or more computers to perform operations comprising the steps included in the cache-based data processing method described above, which are not repeated here.
From the above description of the embodiments, those skilled in the art can clearly understand that the present invention may be implemented by software combined with a hardware platform. Based on this understanding, all or part of the contribution that the technical solution of the present invention makes over the background art may be embodied in the form of a software product. The software product may be stored in a storage medium, such as ROM/RAM, a magnetic disk, or an optical disc, and comprises instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute the methods described in the embodiments, or certain parts of the embodiments, of the present invention.
The above disclosure is merely a preferred embodiment of the present invention and certainly cannot be taken to limit the scope of protection of the present invention; equivalent variations of the above embodiments made according to the teachings of the claims of the present invention therefore still fall within the scope covered by the claims.
Claims (9)
1. A cache-based data processing method, characterized in that the method comprises:
when a user accesses the system for the first time, storing the data to be stored in a cache database as cached data;
when the user accesses the system again, querying the cache database for the user's cached data;
if the cache database holds the user's cached data, judging whether the cached data in the cache database is consistent with the master data in the primary database;
if consistent, using the cached data directly; if inconsistent, re-caching the master data in the cache database.
2. The method of claim 1, characterized in that judging whether the cached data in the cache database is consistent with the data in the primary database comprises:
comparing a characteristic value of the cached data with a characteristic value of the master data to judge whether the cached data in the cache database is consistent with the data in the primary database.
3. The method of claim 1, characterized in that the method further comprises:
recording the access time of data in the cache database, and periodically deleting data that has not been accessed for a long time.
4. The method of claim 1, characterized in that the method further comprises:
if the data to be stored is not in the cache database, first storing it in the primary database and then caching it into the cache database.
5. A cache-based data processing system, characterized in that the system comprises:
a cache database, configured to store the data to be stored in the cache database as cached data when a user accesses the system for the first time; and
a primary database, configured to store master data;
wherein, when the user accesses the system again, the cache database is queried for the user's cached data; if the cache database holds the user's cached data, whether the cached data in the cache database is consistent with the master data in the primary database is judged; if consistent, the cached data is used directly; if inconsistent, the master data is re-cached in the cache database.
6. The system of claim 5, characterized in that the cache database comprises:
a query module, configured for the user to query the cached data in the cache database;
a storage module, configured for the user to store the cached data in the cache database; and
a judgment module, configured to judge whether the cached data in the cache database is consistent with the data in the primary database by comparing a characteristic value of the cached data with a characteristic value of the master data.
7. The system of claim 5, characterized in that the cache database further comprises:
a cleaning module, configured to record the access time of data in the cache database and periodically delete data that has not been accessed for a long time.
8. The system of claim 5, characterized in that the primary database comprises:
a storage module, configured to first store the data to be stored in the primary database as master data if it is not in the cache database; and
a dump module, configured to then cache the master data into the cache database as cached data.
9. A computer-readable storage medium, the computer storage medium comprising a computer program, characterized in that the computer program, when executed by one or more computers, causes the one or more computers to perform operations comprising the steps included in the cache-based data processing method of any one of claims 1-4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810813070.7A CN109325054A (en) | 2018-07-23 | 2018-07-23 | Data processing method, system and storage medium based on caching |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109325054A true CN109325054A (en) | 2019-02-12 |
Family
ID=65264074
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810813070.7A Pending CN109325054A (en) | 2018-07-23 | 2018-07-23 | Data processing method, system and storage medium based on caching |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109325054A (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101090401A (en) * | 2007-05-25 | 2007-12-19 | 金蝶软件(中国)有限公司 | Data buffer store method and system at duster environment |
CN101937467A (en) * | 2010-09-17 | 2011-01-05 | 北京开心人信息技术有限公司 | High-efficiency caching method and system of server |
CN103246696A (en) * | 2013-03-21 | 2013-08-14 | 宁波公众信息产业有限公司 | High-concurrency database access method and method applied to multi-server system |
CN103617131A (en) * | 2013-11-26 | 2014-03-05 | 曙光信息产业股份有限公司 | Data caching achieving method |
CN104133783A (en) * | 2014-07-11 | 2014-11-05 | 北京京东尚科信息技术有限公司 | Method and device for processing distributed cache data |
WO2018040167A1 (en) * | 2016-08-31 | 2018-03-08 | 广州市乐商软件科技有限公司 | Data caching method and apparatus |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
RJ01 | Rejection of invention patent application after publication | Application publication date: 20190212 |