CN102479213A - Data buffering method and device - Google Patents

Data buffering method and device

Info

Publication number
CN102479213A
CN102479213A (application number CN201010565473.8; granted publication CN102479213B)
Authority
CN
China
Prior art keywords
cache entry
keyword
hard disk
module
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2010105654738A
Other languages
Chinese (zh)
Other versions
CN102479213B (en)
Inventor
姜来 (Jiang Lai)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Peking University Founder Group Co Ltd
Beijing Founder Electronics Co Ltd
Original Assignee
Peking University Founder Group Co Ltd
Beijing Founder Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Peking University Founder Group Co Ltd, Beijing Founder Electronics Co Ltd filed Critical Peking University Founder Group Co Ltd
Priority to CN201010565473.8A priority Critical patent/CN102479213B/en
Publication of CN102479213A publication Critical patent/CN102479213A/en
Application granted granted Critical
Publication of CN102479213B publication Critical patent/CN102479213B/en
Expired - Fee Related
Anticipated expiration


Abstract

The invention provides a data buffering method comprising the following steps: constructing a cache entry, where the cache entry comprises the buffered data and a key that uniquely identifies the cache entry; storing the cache entry in a buffer module; and an application program reading the cache entry from the buffer module by looking up its key. The invention further provides a data buffering device comprising a construction module, a buffer module, and a retrieval module: the construction module constructs the cache entry, which comprises the buffered data and the key uniquely identifying the cache entry; the buffer module stores and retrieves cache entries; and the retrieval module enables an application program to read a cache entry from the buffer module by looking up its key. The method and device improve the data access speed of a computer.

Description

Data buffering method and device
Technical field
The present invention relates to the field of computer information technology, and in particular to a data buffering method and device.
Background technology
During operation, an application program sometimes needs to read large amounts of file data from disk frequently, which consumes considerable time. Buffering technology can significantly improve the response speed of such an application.
Fig. 1 is a schematic diagram of data access without a buffer module. Without a buffer module, each data request allocates a block of memory (falling back to the operating system's default disk buffering when memory runs out) and returns the data; after application module 1 has used the data, the memory is released. When application module 1 requests the data again, the whole procedure is repeated. This severely degrades the response speed of the application, consumes a large amount of memory, and easily leads to memory shortage.
Fig. 2 is a schematic diagram of data access with a buffer module. With a buffer module, application module 2 caches the data needed by application module 1 in the buffer module; when application module 1 needs the data, it simply fetches it from the buffer module. No matter how many times application module 1 requests the same piece of data, that data occupies only a single block of memory.
As the computing power of computers keeps improving, the capacity of today's buffer modules also keeps growing. The prior art lacks a good management method, so accessing a buffered data item in the buffer module consumes considerable time.
Summary of the invention
The present invention aims to provide a data buffering method and device that optimize the performance of a buffer device.
In an embodiment of the present invention, a data buffering method is provided, comprising: constructing a cache entry, the cache entry comprising buffered data and a key that uniquely identifies it; storing the cache entry in a buffer module; and an application program reading the cache entry from the buffer module by looking up the key.
In an embodiment of the present invention, a data buffering device is provided, comprising: a construction module for constructing a cache entry, the cache entry comprising buffered data and a key that uniquely identifies it; a buffer module for storing and retrieving cache entries; and a retrieval module through which an application program reads a cache entry from the buffer module by looking up its key.
Because the data buffering method and device of the embodiments of the invention manage the data in the buffer module by key, they overcome the problem that accessing a buffered data item in existing buffer systems consumes considerable time, and improve the speed at which computer applications access data.
Description of drawings
The accompanying drawings described here provide a further understanding of the present invention and constitute a part of the application. The illustrative embodiments of the present invention and their descriptions explain the invention and do not unduly limit it. In the drawings:
Fig. 1 is a schematic diagram of data access without a buffer module;
Fig. 2 is a schematic diagram of data access with a buffer module;
Fig. 3 is a flowchart of the data buffering method according to an embodiment of the invention;
Fig. 4 is a schematic diagram of a compound buffer system according to the related art;
Fig. 5 is a flowchart of storing a cache entry in the buffer module according to a preferred embodiment of the invention;
Fig. 6 is a flowchart of reading a cache entry from the buffer module according to a preferred embodiment of the invention;
Fig. 7 is a schematic diagram of the data buffering device according to an embodiment of the invention.
Embodiment
The present invention is described in detail below with reference to the accompanying drawings and in combination with embodiments.
Fig. 3 is a flowchart of the data buffering method according to an embodiment of the invention, comprising:
Step S10: construct a cache entry comprising buffered data and a key that uniquely identifies the cache entry;
Step S20: store the cache entry in a buffer module;
Step S30: an application program reads the cache entry from the buffer module by looking up the key.
Because this data buffering method manages the data in the buffer module by key, it overcomes the problem that accessing a buffered data item in existing buffer systems consumes considerable time, and improves the speed at which computer applications access data.
Preferably, constructing the key comprises: concatenating the attributes of the buffered data into a character string; and applying a unique mapping operation to the character string to obtain the key. This preferred embodiment uses a unique mapping operation, for example a hash algorithm, to map the attributes of the data to a key that uniquely identifies the cache entry, which makes it possible to retrieve the cache entry in the buffer module by its key.
Preferably, the attributes include the last modification time of the buffered data. The attributes may also include the name, type, and so on of the buffered data. Including the last modification time ensures that when the application program retrieves a cache entry by this key, it retrieves up-to-date data.
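The key construction above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the attribute names and the choice of SHA-256 as the unique mapping are assumptions, since the embodiment only requires some unique mapping such as a hash algorithm.

```python
import hashlib

def build_key(name: str, file_type: str, last_modified: float) -> str:
    """Concatenate the data's attributes into a character string, then
    apply a unique mapping (here: SHA-256) to obtain the cache key."""
    attribute_string = f"{name}|{file_type}|{last_modified}"
    return hashlib.sha256(attribute_string.encode("utf-8")).hexdigest()

# The same attributes always map to the same key...
k1 = build_key("report.doc", "doc", 1290600000.0)
# ...while a different last-modified time yields a different key, so a
# modified file is cached under a new key and stale data is not returned.
k2 = build_key("report.doc", "doc", 1290600001.0)
```

Because the last modification time participates in the string, any update to the underlying data automatically produces a fresh key, which is the property the preferred embodiment relies on.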
Preferably, the method further comprises setting priorities on the keys, where the priority of the key of a newly constructed cache entry, and of the cache entry currently being read, is set to the highest, and the keys of all other cache entries are each lowered by one level. Setting priorities on the keys allows the buffer module to implement various priority-based management policies for cache entries. If the priority is set according to the creation order, access frequency, or buffering time of the data, the buffer module can store and retrieve cache entries accordingly, which improves the hit rate of application accesses and further improves the speed of the computer.
Preferably, lowering the keys of the other cache entries by one level comprises: organizing the keys of all cache entries into a queue in which keys are arranged from highest priority to lowest; and inserting the key of a newly constructed cache entry, or of a cache entry that has just been read, at the head of the queue. This data structure is simple and easy to implement. For example, the queue can be managed with a list: inserting at the head adds a new first element holding the key, and every other key naturally drops one level. The queue can also be implemented with a linked storage structure.
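The head-insertion queue above can be sketched as follows. The class and method names are assumptions made for illustration; the behavior is effectively a move-to-front (least-recently-used) order, which is one way to realize the described priority scheme.

```python
from collections import deque

class PriorityQueueOfKeys:
    """Keys ordered from highest priority (head) to lowest (tail).
    Inserting a key at the head implicitly lowers every other key by
    one level, as the preferred embodiment describes."""

    def __init__(self) -> None:
        self._queue: deque[str] = deque()

    def promote(self, key: str) -> None:
        """Move a newly constructed or just-read key to the head."""
        try:
            self._queue.remove(key)  # drop the old position, if any
        except ValueError:
            pass
        self._queue.appendleft(key)

    def lowest(self) -> str:
        """Key of the lowest-priority cache entry (the queue tail)."""
        return self._queue[-1]

    def remove(self, key: str) -> None:
        """Delete a key outright, e.g. when its entry is evicted."""
        self._queue.remove(key)
```

A `deque` is used so head insertion is O(1); a plain list or a linked structure, as the text notes, would work equally well.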
In the prior art, memory is expensive and its capacity is usually limited. Most current operating systems provide hard disk buffering, which can relieve memory shortage. Fig. 4 is a schematic diagram of a compound buffer system according to the related art, mainly comprising an application module, a memory buffer module, and a hard disk buffer module. The system uses the memory buffer module to hold frequently used data and the hard disk buffer module to hold less frequently used data. Such a buffer system has a larger capacity than memory buffering alone, and a higher average read speed for cache entries than hard disk buffering alone.
The preferred embodiments of the present invention can be implemented in such a two-level buffer system. Preferably, the buffer module comprises a memory buffer and a hard disk buffer, and step S20 comprises: storing a newly constructed cache entry in the memory buffer; if the usage of the memory buffer reaches a first threshold, copying the lowest-priority cache entry in the memory buffer to the hard disk buffer (the entry also remains in the memory buffer); and if the usage of the memory buffer reaches a second threshold, transferring the lowest-priority cache entry from the memory buffer to the hard disk buffer (the entry is deleted from the memory buffer); where the first and second thresholds are set in advance according to the capacity of the memory buffer, and the second threshold is greater than the first. The first threshold can be a critical capacity and the second an overflow capacity. Storing newly constructed cache entries in the memory buffer and moving lower-priority entries out of it keeps the highest-priority entries in memory, which improves the hit rate of data accesses.
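Step S20 can be sketched as follows under stated assumptions that are not in the patent: the two buffers are plain dictionaries keyed by cache key, the priority queue is a list with the highest-priority key at the head, and the thresholds count entries rather than bytes.

```python
def lowest_in_memory(queue: list, memory: dict):
    """Lowest-priority key (nearest the queue tail) still in memory."""
    for k in reversed(queue):
        if k in memory:
            return k
    return None

def store_entry(key, data, memory: dict, disk: dict, queue: list,
                critical: int, overflow: int) -> None:
    """Store a newly constructed cache entry per step S20."""
    if len(memory) >= overflow:        # second threshold: transfer
        victim = lowest_in_memory(queue, memory)
        disk[victim] = memory.pop(victim)   # deleted from memory
    elif len(memory) >= critical:      # first threshold: copy only
        victim = lowest_in_memory(queue, memory)
        disk[victim] = memory[victim]       # entry remains in memory
    memory[key] = data
    if key in queue:
        queue.remove(key)
    queue.insert(0, key)               # new entry: highest priority
```

The copy-at-critical / transfer-at-overflow distinction means an entry approaching eviction is already on disk before memory actually overflows, so the eventual transfer is cheap.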
Fig. 5 is a flowchart of storing a cache entry in the buffer module according to a preferred embodiment of the invention, comprising the following steps:
Step 502: construct a cache entry containing the buffered data;
Step 504: construct a key from the information of the buffered data; keys correspond one-to-one to cache entries;
Step 506: before depositing the cache entry in the memory buffer module, judge whether the memory buffer has reached the overflow capacity;
Step 508: if so, begin clearing the memory buffer, that is, write the cache entry whose key has the lowest priority to the hard disk buffer and then remove that entry from the memory buffer;
Step 510: if the overflow capacity has not been reached, further judge whether the memory buffer has reached the critical capacity;
Step 512: if so, copy the lowest-priority cache entry in the memory buffer to the hard disk buffer;
Step 514: deposit the cache entry in the memory buffer.
Preferably, step S20 further comprises: if a cache entry in the hard disk buffer is read, copying the read entry into memory and setting its priority to the highest; and if the usage of the hard disk buffer reaches a third threshold, deleting the lowest-priority cache entry from the hard disk buffer and deleting its key from the priority queue; where the third threshold is set in advance according to the capacity of the hard disk buffer.
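The hard-disk side of step S20 can be sketched under the same assumed shapes as before (dictionary buffers, a head-is-highest key list, and a third threshold counted in entries); none of these shapes are dictated by the patent.

```python
def read_from_disk(key, memory: dict, disk: dict, queue: list,
                   disk_limit: int):
    """Read an entry from the hard disk buffer, promoting it to memory
    and evicting the lowest-priority disk entry at the third threshold."""
    data = disk[key]
    memory[key] = data            # copy the read entry into memory
    queue.remove(key)
    queue.insert(0, key)          # its priority becomes the highest
    if len(disk) >= disk_limit:   # third threshold reached:
        # delete the lowest-priority disk entry and drop its key
        victim = next(k for k in reversed(queue)
                      if k in disk and k != key)
        del disk[victim]
        queue.remove(victim)
    return data
```

The just-read key is excluded from eviction because it has just been promoted to the highest priority.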
Fig. 6 is a flowchart of reading a cache entry from the buffer module according to a preferred embodiment of the invention, comprising the following steps:
Step 602: first construct the key from the information of the data to be read;
Step 604: look for a cache entry with this key in the memory buffer; if found, jump to step 612;
Step 606: if not found, look for a cache entry with this key in the hard disk buffer;
Step 608: if still not found, the buffer system has no cache entry corresponding to the key;
Step 610: if found, read the cache entry into the memory buffer;
Step 612: move the cache entry to the first position in the priority queue;
Step 614: return the cache entry to the application program.
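The read path of Fig. 6 can be sketched as follows, again under the assumed shapes of dictionary buffers and a head-is-highest key list (an illustration, not the patent's implementation):

```python
def lookup(key, memory: dict, disk: dict, queue: list):
    """Read path of Fig. 6: memory first, then disk, then miss."""
    if key in memory:        # step 604: hit in the memory buffer
        entry = memory[key]
    elif key in disk:        # steps 606/610: hit in the disk buffer,
        entry = disk[key]    # read the entry into the memory buffer
        memory[key] = entry
    else:                    # step 608: no entry for this key
        return None
    queue.remove(key)        # step 612: move the key to the first
    queue.insert(0, key)     # position in the priority queue
    return entry             # step 614: return to the application
```

Note that both hit paths converge on step 612, so any read, from memory or disk, raises the entry's priority to the highest.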
With the above preferred embodiments, frequently used data is kept in the memory buffer while less commonly used data is kept in the hard disk buffer. The memory buffer and the hard disk buffer thus complement each other's strengths, further improving the performance of the computer.
Fig. 7 is a schematic diagram of the data buffering device according to an embodiment of the invention, comprising:
a construction module 10 for constructing a cache entry, the cache entry comprising buffered data and a key that uniquely identifies it;
a buffer module 20 for storing and retrieving cache entries;
a retrieval module 30 through which an application program reads a cache entry from the buffer module by looking up its key.
This data buffering device improves the data access speed of the computer.
Preferably, the construction module 10 comprises: an attribute module for concatenating the attributes of the buffered data into a character string, the attributes including the last modification time of the buffered data; and an operation module for applying a unique mapping operation to the character string to obtain the key. This makes it possible to retrieve a cache entry in the buffer module by its key. The attributes used to build the string may also include the name, type, and so on of the data.
Preferably, the data buffering device further comprises a priority module for setting priorities on the keys: the keys of all cache entries are organized into a queue arranged from highest priority to lowest, and the key of a newly constructed cache entry, or of a cache entry that has just been read, is inserted at the head of the queue. As a result, the priority of the key of the newly constructed entry and of the currently read entry is the highest, and the keys of all other cache entries are each lowered by one level. This further improves the speed of the computer.
Preferably, the buffer module 20 comprises a memory buffer and a hard disk buffer. A newly constructed cache entry is stored in the memory buffer; if the usage of the memory buffer reaches the first threshold, the lowest-priority cache entry in the memory buffer is copied to the hard disk buffer; if the usage of the memory buffer reaches the second threshold, the lowest-priority cache entry in the memory buffer is transferred to the hard disk buffer; if a cache entry in the hard disk buffer is read, the read entry is copied into memory; and if the usage of the hard disk buffer reaches the third threshold, the lowest-priority cache entry in the hard disk buffer is deleted and its key is deleted from the key priority queue. The first and second thresholds are set in advance according to the capacity of the memory buffer, with the second threshold greater than the first, and the third threshold is set in advance according to the capacity of the hard disk buffer. The memory buffer and the hard disk buffer thus complement each other's strengths, further improving the performance of the computer.
As can be seen from the above description, the present invention improves the performance of the computer.
Obviously, those skilled in the art will appreciate that each of the above modules or steps of the present invention can be realized with a general-purpose computing device; they can be concentrated on a single computing device or distributed over a network formed by a plurality of computing devices. Alternatively, they can be realized as program code executable by a computing device, stored in a storage device and executed by the computing device; or they can be made into individual integrated circuit modules, or a plurality of the modules or steps can be made into a single integrated circuit module. Thus the present invention is not restricted to any specific combination of hardware and software.
The above are merely preferred embodiments of the present invention and do not limit it; for those skilled in the art, the present invention may have various changes and variations. Any modification, equivalent replacement, improvement, and so on made within the spirit and principle of the present invention shall be included within its scope of protection.

Claims (11)

1. A data buffering method, characterized by comprising:
constructing a cache entry, the cache entry comprising buffered data and a key uniquely identifying said cache entry;
storing said cache entry in a buffer module; and
an application program reading said cache entry from said buffer module by looking up said key.
2. The method according to claim 1, characterized in that constructing said key comprises:
concatenating the attributes of said buffered data into a character string; and
applying a unique mapping operation to said character string to obtain said key.
3. The method according to claim 2, characterized in that said attributes comprise the last modification time of said buffered data.
4. The method according to claim 1, characterized by further comprising: setting priorities on said keys, wherein the priority of the key of a newly constructed cache entry and of the cache entry currently being read is set to the highest, and the keys of all other cache entries are each lowered by one level.
5. The method according to claim 4, characterized in that lowering the keys of the other cache entries by one level comprises:
organizing the keys of all said cache entries into a queue in which the keys are arranged from highest priority to lowest; and
inserting the key of the newly constructed cache entry and the key of the read cache entry at the head of said queue.
6. The method according to claim 5, characterized in that said buffer module comprises a memory buffer and a hard disk buffer, and storing said cache entry in the buffer module comprises:
storing the newly constructed cache entry in said memory buffer;
if the usage of said memory buffer reaches a first threshold, copying the lowest-priority cache entry in said memory buffer to said hard disk buffer; and
if the usage of said memory buffer reaches a second threshold, transferring the lowest-priority cache entry in said memory buffer to said hard disk buffer;
wherein said first threshold and said second threshold are set in advance according to the capacity of said memory buffer, and said second threshold is greater than said first threshold.
7. The method according to claim 6, characterized in that storing said cache entry in the buffer module further comprises:
if a cache entry in said hard disk buffer is read, copying the read cache entry into said memory buffer; and
if the usage of said hard disk buffer reaches a third threshold, deleting the lowest-priority cache entry in said hard disk buffer and deleting its key from said queue;
wherein said third threshold is set in advance according to the capacity of said hard disk buffer.
8. A data buffering device, characterized by comprising:
a construction module for constructing a cache entry, the cache entry comprising buffered data and a key uniquely identifying said cache entry;
a buffer module for storing and retrieving said cache entry; and
a retrieval module through which an application program reads said cache entry from said buffer module by looking up said key.
9. The device according to claim 8, characterized in that said construction module comprises:
an attribute module for concatenating the attributes of said buffered data into a character string, said attributes comprising the last modification time of said buffered data; and
an operation module for applying a unique mapping operation to said character string to obtain said key.
10. The device according to claim 8, characterized by further comprising: a priority module for setting priorities on said keys, wherein the keys of all said cache entries are organized into a queue in which the keys are arranged from highest priority to lowest, and the key of a newly constructed cache entry and the key of a read cache entry are inserted at the head of said queue.
11. The device according to claim 10, characterized in that said buffer module comprises a memory buffer and a hard disk buffer, wherein:
a newly constructed cache entry is stored in said memory buffer;
if the usage of said memory buffer reaches a first threshold, the lowest-priority cache entry in said memory buffer is copied to said hard disk buffer;
if the usage of said memory buffer reaches a second threshold, the lowest-priority cache entry in said memory buffer is transferred to said hard disk buffer;
if a cache entry in said hard disk buffer is read, the read cache entry is copied into said memory buffer;
if the usage of said hard disk buffer reaches a third threshold, the lowest-priority cache entry in said hard disk buffer is deleted and its key is deleted from said queue;
and wherein said first threshold and said second threshold are set in advance according to the capacity of said memory buffer, said second threshold is greater than said first threshold, and said third threshold is set in advance according to the capacity of said hard disk buffer.
CN201010565473.8A 2010-11-25 2010-11-25 Data buffering method and device Expired - Fee Related CN102479213B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201010565473.8A CN102479213B (en) 2010-11-25 2010-11-25 Data buffering method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201010565473.8A CN102479213B (en) 2010-11-25 2010-11-25 Data buffering method and device

Publications (2)

Publication Number Publication Date
CN102479213A true CN102479213A (en) 2012-05-30
CN102479213B CN102479213B (en) 2014-07-30

Family

ID=46091861

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201010565473.8A Expired - Fee Related CN102479213B (en) 2010-11-25 2010-11-25 Data buffering method and device

Country Status (1)

Country Link
CN (1) CN102479213B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108073520A (en) * 2016-11-10 2018-05-25 腾讯科技(深圳)有限公司 Memory control method and device
CN108932163A (en) * 2018-06-15 2018-12-04 奇酷互联网络科技(深圳)有限公司 Memory management method and device, readable storage medium, and terminal

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1774703A (en) * 2002-01-18 2006-05-17 伊迪蒂克公司 Multi-tiered caching mechanism for the storage and retrieval of multiple versions of content



Also Published As

Publication number Publication date
CN102479213B (en) 2014-07-30

Similar Documents

Publication Publication Date Title
JP5996088B2 (en) Cryptographic hash database
US9672235B2 (en) Method and system for dynamically partitioning very large database indices on write-once tables
US9449005B2 (en) Metadata storage system and management method for cluster file system
KR101023883B1 (en) Storage system using high speed storage device as cache
CN110188108B (en) Data storage method, device, system, computer equipment and storage medium
CN103246696A (en) High-concurrency database access method and method applied to multi-server system
CN105117415A (en) Optimized SSD data updating method
WO2009033419A1 (en) A data caching processing method, system and data caching device
KR20190134115A (en) Method and apparatus for providing efficient indexing and computer program included in computer readable medium therefor
CN102521330A (en) Mirror distributed storage method under desktop virtual environment
CN103678172A (en) Local data cache management method and device
Lee et al. An efficient index buffer management scheme for implementing a B-tree on NAND flash memory
Sarwat et al. FAST: a generic framework for flash-aware spatial trees
Sarwat et al. Generic and efficient framework for search trees on flash memory storage systems
CN103309815A (en) Method and system for increasing available capacity and service life of solid state disc
US20190294590A1 (en) Region-integrated data deduplication implementing a multi-lifetime duplicate finder
CN106055679A (en) Multi-level cache sensitive indexing method
Ahn et al. μ*-Tree: An ordered index structure for NAND flash memory with adaptive page layout scheme
CN106547484A (en) It is a kind of that internal storage data reliability method and system realized based on RAID5
CN111241090B (en) Method and device for managing data index in storage system
CN102479213B (en) Data buffering method and device
Wang et al. Block-Based Multi-Version B+-Tree for Flash-Based Embedded Database Systems
US9842061B2 (en) Implementing advanced caching
US10209909B1 (en) Storage element cloning in presence of data storage pre-mapper
Xu et al. Update migration: An efficient B+ tree for flash storage

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20140730

Termination date: 20191125

CF01 Termination of patent right due to non-payment of annual fee