CN106897030A - Cached data management method and device - Google Patents

Cached data management method and device

Info

Publication number
CN106897030A
CN106897030A (application CN201710112622.7A, filed as CN201710112622A)
Authority
CN
China
Prior art keywords
data
information
queue
maintenance
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710112622.7A
Other languages
Chinese (zh)
Inventor
刘如意
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhengzhou Yunhai Information Technology Co Ltd
Original Assignee
Zhengzhou Yunhai Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhengzhou Yunhai Information Technology Co Ltd filed Critical Zhengzhou Yunhai Information Technology Co Ltd
Priority to CN201710112622.7A priority Critical patent/CN106897030A/en
Publication of CN106897030A publication Critical patent/CN106897030A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 Interfaces specially adapted for storage systems
    • G06F3/0602 Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/0604 Improving or facilitating administration, e.g. storage management
    • G06F3/0607 Improving or facilitating administration, e.g. storage management by facilitating the process of upgrading existing storage systems, e.g. for improving compatibility between host and storage device
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 Interfaces specially adapted for storage systems
    • G06F3/0628 Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0646 Horizontal data movement in storage systems, i.e. moving data in between storage devices or systems
    • G06F3/0647 Migration mechanisms
    • G06F3/0649 Lifecycle management
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 Interfaces specially adapted for storage systems
    • G06F3/0628 Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0655 Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F3/0659 Command handling arrangements, e.g. command buffers, queues, command scheduling
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 Interfaces specially adapted for storage systems
    • G06F3/0668 Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/0671 In-line storage system
    • G06F3/0673 Single storage device
    • G06F3/0679 Non-volatile semiconductor memory device, e.g. flash memory, one time programmable memory [OTP]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 Interfaces specially adapted for storage systems
    • G06F3/0668 Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/0671 In-line storage system
    • G06F3/0683 Plurality of storage devices
    • G06F3/0688 Non-volatile semiconductor memory arrays

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a cached data management method and device. The method includes: searching the cached data for a hit according to a read request; if a hit exists, when the hit count of the data reaches a first threshold, promoting the index information of the data from its current information maintenance queue to the next-higher-level information maintenance queue, and demoting the index information at the tail of that higher-level information maintenance queue to the current information maintenance queue; if no hit exists, when the miss count of the data reaches a second threshold, inserting the index information of the data into a target information maintenance queue selected from the information maintenance queues of each level, and, if the current cached data is full, deleting the index information at the tail of the lowest-level information maintenance queue together with the corresponding data in the cache. The cached data management method and device of the present invention can reasonably protect hot data and evict non-hot data, improving the cache hit rate and the efficiency of the caching system.

Description

Cached data management method and device
Technical field
The present invention relates to the technical field of storage, and more particularly to a cached data management method and device.
Background technology
In a storage system, the read/write speed of a mechanical hard disk is far lower than the processing speeds of memory and the CPU, and it improves only slowly, making it the bottleneck of storage system development. To overcome this problem, the solid state drive (SSD) emerged. An SSD is a hard disk built from an array of solid-state electronic storage chips and, compared with a mechanical hard disk, can greatly improve data read/write speed. However, current SSDs are expensive, and mechanical hard disks still hold a clear advantage in cost per unit of capacity. To satisfy both high IOPS and large-capacity storage demands, existing storage systems therefore adopt a compromise: using an SSD as a cache.
While the storage system runs, data is continuously written into the cache, so the SSD serving as the cache area gradually fills up. How to replace older, lower-value data blocks out of the cache to make room for new data blocks, while ensuring the read hit rate of the whole caching system, thus becomes an important problem for the caching system.
In the prior art, the cache replacement method maintains a single queue recording the information of the currently cached data. When new data enters the cache, its information is placed at the tail of the queue; when old data must be evicted, it is removed from the head of the queue; and when cached data is hit, its information is moved from its current position to the tail of the queue. In this way the most recently used data is protected, while the data that has gone unused in the cache for the longest time is evicted, achieving the goal of evicting the least recently used cached data.
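The single-queue scheme described above is essentially LRU replacement. A minimal sketch, for illustration only (the class and method names are not from the patent):

```python
from collections import OrderedDict

class LRUCache:
    """Single-queue cache: hits and inserts go to the queue tail, evictions
    come from the queue head (the least recently used entry)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()  # ordered head (LRU) -> tail (MRU)

    def get(self, key):
        if key not in self.entries:
            return None
        self.entries.move_to_end(key)  # hit: move to the queue tail
        return self.entries[key]

    def put(self, key, value):
        if key in self.entries:
            self.entries.move_to_end(key)
        elif len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)  # evict from the queue head
        self.entries[key] = value
```

A short run shows the weakness the next paragraph describes: a one-off batch insert displaces previously hot data just as easily as cold data.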
However, with this existing cache replacement method, sporadic data operations or periodic batch operations also move their data to the tail of the queue, which may displace hot data, causing the hit rate to decline and harming the efficiency of the caching system.
The content of the invention
It is an object of the present invention to provide a cached data management method and device that can reasonably protect hot data and evict non-hot data, thereby improving the cache hit rate and the efficiency of the caching system.
To achieve the above object, the present invention provides the following technical scheme:

A cached data management method, including:

searching the cached data for a hit according to a read request;

if a hit exists, counting the hit count of the data, and, when the hit count of the data reaches a first threshold, promoting the index information of the data from its current information maintenance queue to the next-higher-level information maintenance queue and demoting the index information at the tail of that higher-level information maintenance queue to the current information maintenance queue, the data index information in the information maintenance queue of each level being ordered from high to low by access heat;

if no hit exists, counting the miss count of the data, and, when the miss count of the data reaches a second threshold, inserting the index information of the data into a target information maintenance queue selected from the information maintenance queues of each level, and, if the current cached data is full, deleting the index information at the tail of the lowest-level information maintenance queue and the corresponding data of that index information in the cache.
Optionally, the method also includes: when the hit count of the data has not reached the first threshold, promoting the index information of the data to the head of the current information maintenance queue.

Optionally, the method also includes: periodically balancing the number of data index information entries contained in the information maintenance queue of each level.

Optionally, periodically balancing the number of data index information entries contained in the information maintenance queue of each level includes:

for the information maintenance queue of each level, deleting index information from the tail of the information maintenance queue, and deleting the corresponding data of that index information from the cache.

Optionally, the method also includes: for each information maintenance queue, when data has no hit within a preset time, demoting the index information of the data in the information maintenance queue to the next-lower-level information maintenance queue.
A cached data management device, including:

a searching module, configured to search the cached data for a hit according to a read request;

a hit data management module, configured to count the hit count of the data when a hit exists in the cached data and, when the hit count of the data reaches a first threshold, promote the index information of the data from its current information maintenance queue to the next-higher-level information maintenance queue and demote the index information at the tail of that higher-level information maintenance queue to the current information maintenance queue, the data index information in the information maintenance queue of each level being ordered from high to low by access heat;

a miss data management module, configured to count the miss count of the data when no hit exists in the cached data and, when the miss count of the data reaches a second threshold, insert the index information of the data into a target information maintenance queue selected from the information maintenance queues of each level, and, if the current cached data is full, delete the index information at the tail of the lowest-level information maintenance queue and the corresponding data of that index information in the cache.

Optionally, the hit data management module is further configured to promote the index information of the data to the head of the current information maintenance queue when the hit count of the data has not reached the first threshold.

Optionally, the device also includes a queue management module, configured to periodically balance the number of data index information entries contained in the information maintenance queue of each level.

Optionally, the queue management module is specifically configured to, for the information maintenance queue of each level, delete index information from the tail of the information maintenance queue and delete the corresponding data of that index information from the cache.

Optionally, the miss data management module is further configured to, for each information maintenance queue, demote the index information of the data in the information maintenance queue to the next-lower-level information maintenance queue when the data has no hit within a preset time.
As can be seen from the above technical scheme, in the cached data management method and device provided by the present invention, when a read request arrives, the cached data is searched for a hit according to the read request. If a hit exists in the cache and the hit count of the data reaches the first threshold, the index information of the data is promoted from its current information maintenance queue to the next-higher-level information maintenance queue, and the index information at the tail of that higher-level information maintenance queue is demoted to the current information maintenance queue; the index information of data in the information maintenance queue of each level is ordered from high to low by access heat. If no hit exists in the cache and the miss count of the data reaches the second threshold, the index information of the data is inserted into a target information maintenance queue selected from the information maintenance queues of each level, and, if the current cached data is full, the index information at the tail of the lowest-level information maintenance queue and the corresponding data in the cache are deleted.

The cached data management method and device of the present invention build multi-level information maintenance queues based on the index information of the cached data and use them to manage cache replacement. For sporadic data operations or periodic batch operations, the hit counts of the involved data do not increase sharply, so their information does not rise into the high-level information maintenance queues. During cache replacement this avoids, to a certain extent, displacing hot data, which improves the cache hit rate and the efficiency of the caching system.
Brief description of the drawings
To explain the embodiments of the present invention or the technical schemes of the prior art more clearly, the accompanying drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative work.

Fig. 1 is a schematic diagram of a cached data management method provided by an embodiment of the present invention;

Fig. 2 is a schematic diagram of the multi-level information maintenance queues established in the method of the present invention;

Fig. 3 is a schematic diagram of a cached data management device provided by an embodiment of the present invention.
Specific embodiment
To help those skilled in the art better understand the technical scheme of the present invention, the technical scheme in the embodiments of the present invention is described clearly and completely below with reference to the accompanying drawings of the embodiments. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative work shall fall within the protection scope of the present invention.

Referring to Fig. 1, an embodiment of the present invention provides a cached data replacement method, including:

S10: searching the cached data for a hit according to a read request; if a hit exists, going to step S11; otherwise, going to step S12;

S11: counting the hit count of the data; when the hit count of the data reaches a first threshold, promoting the index information of the data from its current information maintenance queue to the next-higher-level information maintenance queue and demoting the index information at the tail of that higher-level information maintenance queue to the current information maintenance queue, the data index information in the information maintenance queue of each level being ordered from high to low by access heat;

S12: counting the miss count of the data; when the miss count of the data reaches a second threshold, inserting the index information of the data into a target information maintenance queue selected from the information maintenance queues of each level, and, if the current cached data is full, deleting the index information at the tail of the lowest-level information maintenance queue and the corresponding data of that index information in the cache.

In the cached data management method of this embodiment, the caching system manages data in cache blocks of fixed size, and multi-level information maintenance queues are created based on the index information of the cache block data. Each information maintenance queue has, in turn, an access heat level. Within each information maintenance queue, the index information of each data item is ordered from high to low by access heat.

When a read request arrives, the cached data is searched for a hit according to the read request. If a hit exists in the cache and the hit count of the data reaches the first threshold, the index information of the data is promoted from its current information maintenance queue to the next-higher-level information maintenance queue, and the index information at the tail of that higher-level information maintenance queue is demoted to the current information maintenance queue; the index information of data in the information maintenance queue of each level is ordered from high to low by access heat. If no hit exists in the cache and the miss count of the data reaches the second threshold, the index information of the data is inserted into a target information maintenance queue selected from the information maintenance queues of each level, and, if the current cached data is full, the index information at the tail of the lowest-level information maintenance queue and the corresponding data in the cache are deleted.

In the cached data management method of this embodiment, multi-level information maintenance queues are built based on the index information of the cached data and used to manage cache replacement. For sporadic data operations or periodic batch operations, the hit counts of the involved data do not increase sharply, so their information does not rise into the high-level information maintenance queues. During cache replacement this avoids, to a certain extent, displacing hot data, which improves the cache hit rate and the efficiency of the caching system.

The cached data management method of this embodiment is described further below.
The cached data management method of this embodiment manages data in cache blocks of fixed size in the caching system. Accordingly, at least two levels of information maintenance queues are created based on the data in the cache; the information maintenance queues record the information of the cache block data in the cache area, and each information maintenance queue in turn has an access heat level. Referring to Fig. 2, the access heat levels of the information maintenance queues Q0, Q1, ..., Qk rise in order. Within each information maintenance queue, the information of each data block is ordered from high to low by access heat.
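The queue structure of Fig. 2 can be sketched as follows. This is a simplified model; the class and field names are assumptions, not terms from the patent:

```python
from collections import deque

class MultiLevelQueues:
    """Index queues Q0..Qk; a higher index means a higher access heat level.
    Within each queue the head holds the hottest entry, the tail the coldest."""
    def __init__(self, levels):
        self.queues = [deque() for _ in range(levels)]  # Q0 .. Qk
        self.level_of = {}  # data key -> level of the queue it currently sits in

    def insert(self, key, level=0):
        self.queues[level].appendleft(key)  # new index goes to the queue head
        self.level_of[key] = level

    def evict_coldest(self):
        """Remove the tail entry of the lowest non-empty queue."""
        for level, q in enumerate(self.queues):
            if q:
                key = q.pop()
                del self.level_of[key]
                return key
        return None
```

The promotion, demotion, balancing, and aging steps described below all operate on this same structure.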
The cached data management method of this embodiment includes the steps:

S10: searching the cached data for a hit according to a read request; if a hit exists, going to step S11; otherwise, going to step S12;

S11: counting the hit count of the data; when the hit count of the data reaches a first threshold, promoting the index information of the data from its current information maintenance queue to the next-higher-level information maintenance queue and demoting the index information at the tail of that higher-level information maintenance queue to the current information maintenance queue, the data index information in the information maintenance queue of each level being ordered from high to low by access heat.
For example, suppose a hit exists in the cached data, the hit count of the data reaches the first threshold, and the index information of the data is currently in the fourth position of queue Q1. Then the index information of the data is promoted from queue Q1 into queue Q2; optionally, it can be promoted specifically to the head of queue Q2. Meanwhile, the index information at the tail of queue Q2 is demoted into queue Q1; it can be demoted to any position of queue Q1, for example to the tail of queue Q1. In this way, the position of the hit data's information within the multiple queues is adjusted, achieving the purpose of protecting hot data.
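The exchange in this example, promoting a sufficiently hit entry and demoting the tail of the higher queue, can be sketched as follows. This is a hypothetical helper built on plain deques, not the patent's literal implementation, and the threshold value is an assumption:

```python
from collections import deque

FIRST_THRESHOLD = 3  # promotion threshold; the concrete value is an assumption

def on_hit(key, hits, q_cur, q_upper):
    """Promote key from q_cur to the head of q_upper once its hit count
    reaches the threshold, and demote q_upper's tail entry into q_cur."""
    hits[key] = hits.get(key, 0) + 1
    if hits[key] < FIRST_THRESHOLD:
        q_cur.remove(key)
        q_cur.appendleft(key)   # below threshold: only move to the current head
        return
    hits[key] = 0               # reset the count after promotion
    q_cur.remove(key)
    if q_upper:                 # demote the coldest entry of the upper queue
        q_cur.append(q_upper.pop())
    q_upper.appendleft(key)     # promoted entry enters the upper queue's head
```

With `q1 = deque(['x', 'y', 'key', 'z'])`, `q2 = deque(['p', 'q'])`, and two prior hits recorded for `'key'`, a third hit moves `'key'` to the head of Q2 and sends Q2's tail entry `'q'` back into Q1.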
Further, the method also includes: when the hit count of the data has not reached the first threshold, promoting the index information of the data to the head of the current information maintenance queue.

If a hit exists in the cached data but the hit count of the data has not reached the first threshold, the information maintenance queue in which the index information of the data resides is not changed; only the position of the index information within the current queue is adjusted, promoting the index information of the data to the head of the information maintenance queue it currently occupies.
S12: counting the miss count of the data; when the miss count of the data reaches a second threshold, inserting the index information of the data into a target information maintenance queue selected from the information maintenance queues of each level, and, if the current cached data is full, deleting the index information at the tail of the lowest-level information maintenance queue and the corresponding data of that index information in the cache.

If no hit exists in the cached data but the miss count of the data has reached the second threshold, the data has met the access heat requirement, so its index information can be inserted into an information maintenance queue. Specifically, one information maintenance queue can be chosen from the information maintenance queues as the target information maintenance queue, and the index information of the data is inserted into that target information maintenance queue.

Optionally, the index information of the data can be inserted into the lowest-level information maintenance queue Q0, specifically at the head of the lowest-level information maintenance queue Q0.

If there is free space in the cache area, the data can be appended to the cache area directly.

If new data is to be added to the cache but the current cached data is full, old data must be evicted; it can be evicted from the lowest-level information maintenance queue according to the least-recently-used principle. The index information of the evicted data is deleted from the information maintenance queue, and the corresponding data is removed from the cache.
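The admission-on-miss and eviction-when-full steps above can be sketched together in a simplified model. All names, the threshold value, and the choice of Q0 as the fixed target queue are assumptions for illustration:

```python
from collections import deque

SECOND_THRESHOLD = 2  # admission threshold for missed data; value is an assumption
CAPACITY = 2          # tiny cache so eviction is easy to observe

cache = {}            # key -> data block
q0 = deque()          # lowest-level queue; head = hottest, tail = eviction point
misses = {}

def on_miss(key, data):
    """Admit data into the cache only after enough misses; when the cache is
    full, evict the index at Q0's tail and its cached block first."""
    misses[key] = misses.get(key, 0) + 1
    if misses[key] < SECOND_THRESHOLD:
        return False              # not hot enough to admit yet
    if len(cache) >= CAPACITY:    # cache full: evict from Q0's tail
        victim = q0.pop()
        del cache[victim]
    q0.appendleft(key)            # admit the index at the head of Q0
    cache[key] = data
    return True
```

Note the filtering effect: a block touched only once by a sporadic scan is never admitted at all, which is exactly how the method shields hot data from batch traffic.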
In this embodiment, each information maintenance queue manages its data according to the least-recently-used principle, that is, the least recently used data is evicted first. When data is evicted from a queue, the data is removed from the cache and its index is added to the head of a history queue Qh (Q-History). If data indexed in Qh is accessed again, its priority is recalculated and its index is moved to the head of the target queue, where the target queue is an information maintenance queue selected from the information maintenance queues. In addition, within the information maintenance queue Qh, data indexes are likewise evicted according to the least-recently-used principle.
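The history queue Qh can be sketched as follows. This is an illustrative model; the capacity value is an assumption, and selecting the target queue (here passed in by the caller) stands in for the patent's unspecified priority recalculation:

```python
from collections import deque

qh = deque()          # history queue: head = most recently evicted index
QH_CAPACITY = 4       # assumed bound; Qh also evicts by LRU

def record_eviction(key):
    """When data is evicted from the cache, keep only its index in Qh."""
    qh.appendleft(key)
    if len(qh) > QH_CAPACITY:
        qh.pop()      # drop Qh's own least recently used index

def on_history_hit(key, target_queue):
    """If a Qh index is accessed again, move it to the target queue's head."""
    if key in qh:
        qh.remove(key)
        target_queue.appendleft(key)
        return True
    return False
```

Keeping only indexes (not data blocks) in Qh means re-accessed data re-enters the maintenance queues with a remembered history at almost no space cost.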
The cached data management method of this embodiment also includes: periodically balancing the number of data index information entries contained in the information maintenance queue of each level. By periodically balancing this number, the amounts of data index information contained in the information maintenance queues of all levels are kept roughly equal, achieving the purpose of balancing the queues of all levels and optimizing the management of the cached data.

Specifically, periodically balancing the number of data index information entries contained in the information maintenance queue of each level includes: for the information maintenance queue of each level, deleting index information from the tail of the information maintenance queue, and deleting the corresponding data of that index information from the cache.
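The balancing step can be sketched as trimming each over-long queue from its tail. This is a sketch under the assumption that queues are balanced toward their average length; the patent does not fix the exact target:

```python
from collections import deque

def balance(queues, cache):
    """Trim each queue's tail down to the average queue length, removing the
    corresponding data blocks from the cache as well."""
    total = sum(len(q) for q in queues)
    target = total // len(queues)     # aim: roughly equal queue sizes
    for q in queues:
        while len(q) > target:
            victim = q.pop()          # delete index from the queue tail
            cache.pop(victim, None)   # delete the corresponding cached data
```

Trimming from the tail matches the per-queue LRU rule: within each queue the coldest entries sit at the tail, so they are the ones removed during balancing.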
When the information maintenance queues of all levels are balanced and adjusted, each information maintenance queue manages its data according to the least-recently-used principle, that is, the least recently used data is evicted. The cached data management method of this embodiment also includes: for each information maintenance queue, when data has no hit within a preset time, demoting the index information of the data in the information maintenance queue to the next-lower-level information maintenance queue.

To prevent data at a high access level from never being evicted, when data is not accessed within the preset time its priority needs to be lowered: the index information of the data is deleted from its current queue and added to the head of the next-lower-level information maintenance queue.
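The aging rule can be sketched with per-key last-access timestamps. This is illustrative only; the preset-time value and the timestamp mechanism are assumptions:

```python
from collections import deque

PRESET_TIME = 10  # demote entries not hit within this many ticks (assumed value)

def age(queues, last_hit, now):
    """Demote every index that has gone unhit for PRESET_TIME from its
    current queue to the head of the next-lower-level queue."""
    for level in range(1, len(queues)):            # Q0 has no lower queue
        for key in list(queues[level]):            # copy: we mutate the deque
            if now - last_hit.get(key, now) >= PRESET_TIME:
                queues[level].remove(key)
                queues[level - 1].appendleft(key)  # lower queue's head
```

After enough idle periods an unused entry cascades down to Q0, where the normal tail eviction can finally reclaim it.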
Therefore, the cached data management method of this embodiment builds multi-level information maintenance queues based on the index information of the cached data and uses them to manage cache replacement. It can reasonably promote hot data into the high-level information maintenance queues, protecting hot data and preventing it from being replaced; meanwhile, when the cache device is full, lower-level data can be selected for replacement, achieving a higher cache hit rate.
Accordingly, Fig. 3 is refer to, the embodiment of the present invention also provides a kind of data cached managing device, including:
Searching modul 20, for searching whether there is hiting data from data cached according to read request;
Hiting data management module 21, the hit time for when there is hiting data in data cached, counting notebook data Number, when the hit-count of notebook data reaches first threshold, safeguards that queue is lifted by the index information of notebook data from current information To upper grade maintenance of information queue, the index information of the upper grade maintenance of information queue tail is reduced to current information Safeguard queue, data indexing information is arranged in order from high to low according to temperature is accessed during each grade described information safeguards queue;
Miss data management module 22, for when not existing hiting data in data cached, statistics notebook data to be not Hit-count, when the miss number of times of notebook data reaches Second Threshold, the index information of notebook data is inserted into from each grade During described information safeguards that the target information chosen in queue safeguards queue, and if current cache data expired, will be most low The corresponding data of the index information are deleted in the index information and caching of level maintenance of information queue tail.
As can be seen that the data cached managing device of the present embodiment, when read request arrives, according to read request from data cached Search whether there is hiting data;If there is hiting data in caching, and the hit-count of notebook data is when reaching first threshold, The index information of notebook data is safeguarded into queue lifting to upper grade maintenance of information queue from current information, by a upper grade The index information of maintenance of information queue tail is reduced to current information and safeguards queue, and each grade described information safeguards number in queue According to index information according to access temperature be arranged in order from high to low;If not existing hiting data in caching, and work as notebook data Miss number of times when reaching Second Threshold, the index information of notebook data is inserted into safeguarding queue from each grade described information The target information of selection safeguarded in queue, and if current cache data expired, by the lowest class maintenance of information queue tail Index information and caching in the corresponding data of the index information delete.
The data cached managing device of the present embodiment, the index information based on data in caching sets up many class informations and safeguards team Row, to it is data cached enter line replacement management, for sporadic data manipulation or periodic batch operation, due at these Data hit number of times will not be sharply increased in operation, and these data messages will not rise to very high-grade maintenance of information queue In, therefore in data cached displacement management, can avoid displacing hot spot data to a certain extent, can improve data cached Hit rate, lifts caching system efficiency.
In the cache data management device of this embodiment, the hit-data management module 21 is further configured to: when the hit count of the data has not reached the first threshold, promote the index information of the data to the head of its current information maintenance queue.
The cache data management device of this embodiment further includes a queue management module, configured to periodically balance the number of data index entries contained in the information maintenance queue of each level.
Specifically, for the information maintenance queue of each level, the queue management module deletes index information from the tail of the queue and deletes the corresponding data from the cache. The information maintenance queue of each level manages its data according to the least-recently-used (LRU) principle, i.e., each queue evicts data in least-recently-used order.
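The periodic balancing step, with each queue evicting from its tail in least-recently-used order, might look like the following minimal helper. The function name, the list-based queue representation (head-first), and the fixed per-queue cap are assumptions of the sketch; the application does not specify how the balanced sizes are chosen.

```python
def balance_queues(queues, store, max_per_queue):
    """Hypothetical periodic balancing pass over the maintenance queues.

    Each queue is a list ordered head-first (most recently used first), so
    trimming pops the least-recently-used index entries from the tail and
    deletes their cached data along with them.
    """
    for q in queues:
        while len(q) > max_per_queue:
            victim = q.pop()           # tail = least recently used
            store.pop(victim, None)    # drop the corresponding cached data
```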
In the cache data management device of this embodiment, the miss-data management module 22 is further configured to, for each information maintenance queue: when a piece of data receives no hit within a preset time, demote its index information from that information maintenance queue to the information maintenance queue one level below.
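The timeout-based demotion can be sketched as follows. The `last_hit` timestamp map, the `ttl` parameter, and appending demoted entries at the tail of the lower queue are illustrative assumptions; entries already in the lowest queue simply stay put.

```python
import time

def demote_stale(queues, last_hit, ttl, now=None):
    """Demote index entries that have not been hit within `ttl` seconds.

    `queues` is a list of head-first lists (level 0 is the lowest queue);
    `last_hit` maps each key to the timestamp of its most recent hit.
    A stale entry moves to the tail of the queue one level below.
    """
    now = time.time() if now is None else now
    for lvl in range(1, len(queues)):              # level 0 has nowhere to go
        stale = [k for k in queues[lvl] if now - last_hit.get(k, 0) > ttl]
        for k in stale:
            queues[lvl].remove(k)
            queues[lvl - 1].append(k)              # tail of the next-lower queue
```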
The cache data management method and device provided by the present invention have been described in detail above. Specific examples have been used herein to explain the principles and implementation of the invention; the description of the above embodiments is intended only to aid understanding of the method of the present invention and its core idea. It should be pointed out that those skilled in the art may make improvements and modifications to the invention without departing from its principles, and such improvements and modifications also fall within the scope of the claims of the invention.

Claims (10)

1. A cache data management method, characterized by comprising:
searching the cached data for hit data according to a read request;
if hit data exists, counting the hit times of the data and, when the hit count of the data reaches a first threshold, promoting the index information of the data from its current information maintenance queue to the information maintenance queue one level higher and demoting the index information at the tail of that higher-level queue to the current queue, wherein in the information maintenance queue of each level the index information of the data is ordered from high to low by access heat;
if not, counting the miss times of the data and, when the miss count of the data reaches a second threshold, inserting the index information of the data into a target information maintenance queue selected from the information maintenance queues of the various levels and, if the cached data is full, deleting the index information at the tail of the lowest-level information maintenance queue and the data corresponding to that index information in the cache.
2. The cache data management method according to claim 1, characterized by further comprising: when the hit count of the data has not reached the first threshold, promoting the index information of the data to the head of its current information maintenance queue.
3. The cache data management method according to claim 1, characterized by further comprising: periodically balancing the number of data index entries contained in the information maintenance queue of each level.
4. The cache data management method according to claim 3, characterized in that periodically balancing the number of data index entries contained in the information maintenance queue of each level comprises:
for the information maintenance queue of each level, deleting index information from the tail of the queue and deleting the data corresponding to that index information from the cache.
5. The cache data management method according to claim 1, characterized by further comprising: for each information maintenance queue, when a piece of data receives no hit within a preset time, demoting its index information from that queue to the information maintenance queue one level below.
6. A cache data management device, characterized by comprising:
a searching module, configured to search the cached data for hit data according to a read request;
a hit-data management module, configured to, when hit data exists in the cached data, count the hit times of the data and, when the hit count of the data reaches a first threshold, promote the index information of the data from its current information maintenance queue to the information maintenance queue one level higher and demote the index information at the tail of that higher-level queue to the current queue, wherein in the information maintenance queue of each level the index information of the data is ordered from high to low by access heat;
a miss-data management module, configured to, when no hit data exists in the cached data, count the miss times of the data and, when the miss count of the data reaches a second threshold, insert the index information of the data into a target information maintenance queue selected from the information maintenance queues of the various levels and, if the cached data is full, delete the index information at the tail of the lowest-level information maintenance queue and the data corresponding to that index information in the cache.
7. The cache data management device according to claim 6, characterized in that the hit-data management module is further configured to, when the hit count of the data has not reached the first threshold, promote the index information of the data to the head of its current information maintenance queue.
8. The cache data management device according to claim 6, characterized by further comprising a queue management module, configured to periodically balance the number of data index entries contained in the information maintenance queue of each level.
9. The cache data management device according to claim 8, characterized in that the queue management module is specifically configured to, for the information maintenance queue of each level, delete index information from the tail of the queue and delete the data corresponding to that index information from the cache.
10. The cache data management device according to claim 6, characterized in that the miss-data management module is further configured to, for each information maintenance queue, when a piece of data receives no hit within a preset time, demote its index information from that queue to the information maintenance queue one level below.
CN201710112622.7A 2017-02-28 2017-02-28 Cache data management method and device Pending CN106897030A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710112622.7A CN106897030A (en) 2017-02-28 2017-02-28 Cache data management method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710112622.7A CN106897030A (en) 2017-02-28 2017-02-28 Cache data management method and device

Publications (1)

Publication Number Publication Date
CN106897030A true CN106897030A (en) 2017-06-27

Family

ID=59184991

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710112622.7A Pending CN106897030A (en) Cache data management method and device

Country Status (1)

Country Link
CN (1) CN106897030A (en)



Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060018170A1 (en) * 2004-07-26 2006-01-26 Integrated Device Technology, Inc. Interleaving memory blocks to relieve timing bottleneck in a multi-queue first-in first-out memory system
US9201804B1 (en) * 2012-02-06 2015-12-01 Google Inc. Dynamically adapting the configuration of a multi-queue cache based on access patterns
CN104503923A (en) * 2014-11-21 2015-04-08 华中科技大学 Asymmetrical disk array caching dispatching method
CN104571954A (en) * 2014-12-26 2015-04-29 杭州华为数字技术有限公司 Method and device for storing data

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
FLYCHAO88: "Cache Eviction Algorithms" (缓存淘汰算法), ITEye Blog *
Q-WHAI: "Operating Systems: A Detailed Explanation of Caching Principles Based on Page Replacement Algorithms (Part 2)" (操作系统：基于页面置换算法的缓存原理详解（下）), CSDN Blog *
YUANYUAN ZHOU, JAMES F. PHILBIN: "The Multi-Queue Replacement Algorithm for Second Level Buffer Caches", Proceedings of the 2001 USENIX Technical Conference *

Cited By (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107577618A (en) * 2017-09-13 2018-01-12 武大吉奥信息技术有限公司 A kind of balanced caching in three roads eliminates method and device
CN107704401A (en) * 2017-11-02 2018-02-16 郑州云海信息技术有限公司 Data cached method of replacing, system and storage system in a kind of storage system
CN107992434A (en) * 2017-11-24 2018-05-04 郑州云海信息技术有限公司 Lower brush method, apparatus and storage medium for distributed layer storage system
CN108197160A (en) * 2017-12-12 2018-06-22 腾讯科技(深圳)有限公司 A kind of picture loading method and device
CN108197160B (en) * 2017-12-12 2022-11-25 腾讯科技(深圳)有限公司 Picture loading method and device
CN108153890A (en) * 2017-12-28 2018-06-12 泰康保险集团股份有限公司 Buffer memory management method and device
CN108173974B (en) * 2018-03-01 2021-02-12 南京邮电大学 HCModel internal cache data elimination method based on distributed cache Memcached
CN108173974A (en) * 2018-03-01 2018-06-15 南京邮电大学 A kind of HC Model inner buffer data based on distributed caching Memcached eliminate method
CN108763110A (en) * 2018-03-22 2018-11-06 新华三技术有限公司 A kind of data cache method and device
US10691596B2 (en) 2018-04-27 2020-06-23 International Business Machines Corporation Integration of the frequency of usage of tracks in a tiered storage system into a cache management system of a storage controller
CN109144431B (en) * 2018-09-30 2021-11-02 华中科技大学 Data block caching method, device, equipment and storage medium
CN109144431A (en) * 2018-09-30 2019-01-04 华中科技大学 Caching method, device, equipment and the storage medium of data block
CN109582233A (en) * 2018-11-21 2019-04-05 网宿科技股份有限公司 A kind of caching method and device of data
CN109783402A (en) * 2018-12-28 2019-05-21 深圳竹云科技有限公司 A kind of method of dynamic adjustment caching hot spot data
CN111506524B (en) * 2019-01-31 2024-01-30 华为云计算技术有限公司 Method and device for eliminating and preloading data pages in database
CN111506524A (en) * 2019-01-31 2020-08-07 华为技术有限公司 Method and device for eliminating and preloading data pages in database
CN110119487A (en) * 2019-04-15 2019-08-13 华南理工大学 A kind of buffering updating method suitable for divergence data
CN110119487B (en) * 2019-04-15 2021-07-16 华南理工大学 Cache updating method suitable for divergent data
CN110908612A (en) * 2019-11-27 2020-03-24 腾讯科技(深圳)有限公司 Cache management method, device, equipment and storage medium
CN110908612B (en) * 2019-11-27 2022-02-22 腾讯科技(深圳)有限公司 Cache management method, device, equipment and storage medium
CN110990300B (en) * 2019-12-20 2021-12-14 山东方寸微电子科技有限公司 Cache memory replacement method and system based on use heat
CN110990300A (en) * 2019-12-20 2020-04-10 山东方寸微电子科技有限公司 Cache memory replacement method and system based on use heat
CN111159066A (en) * 2020-01-07 2020-05-15 杭州电子科技大学 Dynamically-adjusted cache data management and elimination method
CN111309650A (en) * 2020-02-11 2020-06-19 广州市百果园信息技术有限公司 Cache control method, device, storage medium and equipment
CN111309650B (en) * 2020-02-11 2024-01-05 广州市百果园信息技术有限公司 Cache control method, device, storage medium and equipment
CN111367833A (en) * 2020-03-31 2020-07-03 中国建设银行股份有限公司 Data caching method and device, computer equipment and readable storage medium
CN118213045A (en) * 2024-02-07 2024-06-18 深圳市慧医合创科技有限公司 Image data storage method, system, medium and computer equipment

Similar Documents

Publication Publication Date Title
CN106897030A (en) Cache data management method and device
CN107193646B (en) High-efficiency dynamic page scheduling method based on mixed main memory architecture
US8843691B2 (en) Prioritized erasure of data blocks in a flash storage device
CN103150136B (en) Implementation method of least recently used (LRU) policy in solid state drive (SSD)-based high-capacity cache
CN105389135B (en) A kind of solid-state disk inner buffer management method
US7818505B2 (en) Method and apparatus for managing a cache memory in a mass-storage system
CN101645043B (en) Methods for reading and writing data and memory device
CN101673188A (en) Data access method for solid state disk
CN103049394A (en) Method and system for data caching of solid state disk
CN110888600B (en) Buffer area management method for NAND flash memory
CN102981963A (en) Implementation method for flash translation layer of solid-state disc
CN108845957B (en) Replacement and write-back self-adaptive buffer area management method
CN103257935A (en) Cache management method and application thereof
CN108762671A (en) Hybrid memory system based on PCM and DRAM and management method thereof
CN107423229B (en) Buffer area improvement method for page-level FTL
CN113672166B (en) Data processing method, device, electronic equipment and storage medium
EP2765522B1 (en) Method and device for data pre-heating
CN107832013A (en) A kind of method for managing solid-state hard disc mapping table
CN111580754B (en) Write-friendly flash memory solid-state disk cache management method
CN106201348A (en) The buffer memory management method of non-volatile memory device and device
CN110968269A (en) SCM and SSD-based key value storage system and read-write request processing method
CN100428193C (en) Data preacquring method for use in data storage system
CN107590084A (en) A kind of page level buffering area improved method based on classification policy
CN105302493A (en) Swap-in and swap-out control method and system for SSD cache in mixed storage array
US10275363B2 (en) Cuckoo caching

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20170627)